
Example annotated children's drawings; each color represents a different object part. From Long et al. (2024) Nature Communications.

Schematic of the BabyView camera and field-of-view (Figure 1, Long et al., 2023 Behav Research Methods).

Welcome to the Visual Learning Lab at UC San Diego.
We are a group of scientists broadly interested in the development of perception and cognition. For example, when we open our eyes, we don’t see “a blooming, buzzing confusion”: we see tables, chairs, computers, books, and cups. How do we learn to connect incoming patterns of light with our knowledge about all of these objects, their verbal labels, and the categories they belong to? How do we learn to derive visual meaning?
These questions—among others—drive our research.
We take an ecological approach throughout our work by focusing on how learning occurs in everyday, naturalistic contexts. Our work leverages innovations in machine learning to both analyze large datasets (e.g., videos taken from the infant perspective or digital drawings made by children, see right) and construct models of how learning unfolds over time. Learn more on the research page or read recent work. And click here for more information on joining the lab!

News & Events
- Welcome to new team members!
- Haoyu Du joined the lab in Fall 2025! We are so excited to have her on our team!
Recent events​
- The lab presented four posters at CCN 2025! See the papers in the Publications section!