<h1 align="center">Are Vision Transformers More Data Hungry Than Newborn Visual Systems?</h1>
### Accepted Conference: NeurIPS 2023
Lalit Pandey, Samantha M. W. Wood, Justin N. Wood | ||

<img src='./media/main.png'>

## Abstract
Vision transformers (ViTs) are top-performing models on many computer vision benchmarks and can accurately predict human behavior on object recognition tasks. However, researchers question the value of using ViTs as models of biological learning because ViTs are thought to be more “data hungry” than brains, with ViTs requiring more training data to reach similar levels of performance. To test this assumption, we directly compared the learning abilities of ViTs and animals, by performing parallel controlled-rearing experiments on ViTs and newborn chicks. We first raised chicks in impoverished visual environments containing a single object, then simulated the training data available in those environments by building virtual animal chambers in a video game engine. We recorded the first-person images acquired by agents moving through the virtual chambers and used those images to train self-supervised ViTs that leverage time as a teaching signal, akin to biological visual systems. When ViTs were trained “through the eyes” of newborn chicks, the ViTs solved the same view-invariant object recognition tasks as the chicks. Thus, ViTs were not more data hungry than newborn visual systems: both learned view-invariant object representations in impoverished visual environments. The flexible and generic attention-based learning mechanism in ViTs—combined with the embodied data streams available to newborn animals—appears sufficient to drive the development of animal-like object recognition.

## Code Base Organization

## Environment Set Up

## Model Training
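The abstract describes training self-supervised ViTs that "leverage time as a teaching signal": temporally adjacent frames from the agent's first-person video stream are treated as views of the same content. A minimal sketch of that idea, assuming a contrastive-through-time (InfoNCE) objective — the class and function names (`TinyViT`, `time_contrastive_loss`) are illustrative, not the repository's actual code:

```python
# Hedged sketch: contrastive learning through time on first-person video frames.
# Assumption: positives are temporally adjacent frames from the same episode.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyViT(nn.Module):
    """Minimal ViT encoder: patch embedding + transformer layers + mean pooling."""
    def __init__(self, image_size=64, patch_size=8, dim=128, depth=2, heads=4):
        super().__init__()
        n_patches = (image_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        z = self.encoder(tokens + self.pos_embed).mean(dim=1)    # mean-pool patch tokens
        return F.normalize(self.proj(z), dim=-1)                 # unit-norm embeddings

def time_contrastive_loss(z_t, z_next, temperature=0.1):
    """InfoNCE where the positive for frame t is frame t+1 from the same stream."""
    logits = z_t @ z_next.T / temperature       # (B, B) cosine-similarity logits
    targets = torch.arange(z_t.size(0))         # diagonal entries = temporal positives
    return F.cross_entropy(logits, targets)

# Toy usage: a batch of consecutive frame pairs (batch, time, C, H, W).
frames = torch.randn(4, 2, 3, 64, 64)
model = TinyViT()
loss = time_contrastive_loss(model(frames[:, 0]), model(frames[:, 1]))
loss.backward()
```

The design choice mirrors the biological motivation: no labels are needed, since temporal continuity of the embodied data stream supplies the supervisory signal.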

## Model Testing
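The abstract evaluates whether trained models solve the same view-invariant object recognition tasks as the chicks. A common way to probe a frozen self-supervised encoder for this is a linear readout trained on familiar views and tested on novel views; a minimal sketch under that assumption — `linear_probe_accuracy` and the toy data are illustrative, not the paper's actual evaluation code:

```python
# Hedged illustration: linear probe on frozen embeddings, tested on novel views.
# Assumption: view-invariance is measured as classification accuracy on held-out viewpoints.
import torch
import torch.nn as nn

def linear_probe_accuracy(train_z, train_y, test_z, test_y, epochs=100, lr=0.1):
    """Fit a linear classifier on frozen embeddings; report novel-view accuracy."""
    probe = nn.Linear(train_z.size(1), int(train_y.max()) + 1)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(probe(train_z), train_y).backward()
        opt.step()
    with torch.no_grad():
        return (probe(test_z).argmax(dim=1) == test_y).float().mean().item()

# Toy data: embeddings of familiar views (train) vs. novel views (test).
torch.manual_seed(0)
train_z, train_y = torch.randn(64, 16), torch.randint(0, 2, (64,))
test_z, test_y = torch.randn(32, 16), torch.randint(0, 2, (32,))
acc = linear_probe_accuracy(train_z, train_y, test_z, test_y)
```

High accuracy on viewpoints never seen during training would indicate view-invariant object representations, paralleling the chicks' recognition test.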

## Model Visualization

## Plot Results

## Contributors

#### Notes: