Visual Learning Lab at UC San Diego
When we open our eyes, we don’t see “a blooming, buzzing confusion”: we see tables, chairs, computers, books, and cups. How do we learn to connect incoming patterns of light with our knowledge about all of these objects and the categories they belong to? In other words, how do we learn to derive visual meaning?
These fundamental questions drive research in the Visual Learning Lab at UC San Diego. We take an ecological approach to this problem by focusing on how learning occurs in everyday, naturalistic contexts. Our work leverages innovations in machine learning both to analyze large datasets (e.g., videos taken from the infant perspective or digital drawings made by children; see right) and to construct models of how learning unfolds over time. Learn more on the research page or read recent work.
And click here for more information on joining the lab!
Example annotated children's drawings; each color represents a different object part. From Long et al. (2024) Nature Communications.
Schematic of the BabyView camera and field of view (Figure 1, Long et al., 2023, Behavior Research Methods).
News & Events
Welcome to new team members!
- AJ Haskins joined the lab as a postdoctoral fellow this fall. Welcome, AJ!
- Tarun Sepuri joined the lab as laboratory coordinator this fall. Welcome, Tarun!
- Jane Yang joined the lab as our first graduate student! Welcome, Jane!

Recent events
- Talk at the Vision Sciences Society: Developmental changes in the precision of visual concept knowledge
- Invited talk on June 17th at the EgoVis workshop @ CVPR 2024
- Poster at the Cognitive Development Society on individual variation in children's drawings!
- Talk on March 21st at the Cognitive Development Society CogDev & AI pre-conference!