
Example annotated children's drawings; each color represents a different object part. From Long et al. (2024) Nature Communications.

Schematic of the BabyView camera and field-of-view (Figure 1, Long et al., 2023 Behav Research Methods).

Welcome to the Visual Learning Lab at UC San Diego.
We are a group of scientists broadly interested in the development of perception and cognition. For example, when we open our eyes, we don't see "a blooming, buzzing confusion"; we see tables, chairs, computers, books, and cups. How do we learn to connect incoming patterns of light with our knowledge about all of these objects, their verbal labels, and the categories they belong to? How do we learn to derive visual meaning?
These questions—among others—drive our research.
We take an ecological approach throughout our work by focusing on how learning occurs in everyday, naturalistic contexts. Our work leverages innovations in machine learning to help us both analyze large datasets (e.g., videos taken from the infant perspective or digital drawings made by children, see right) and to construct models of how learning unfolds over time. Learn more on the research page or read recent work. And click here for more information on joining the lab!

Recent news!
- New preprint on how young children see objects!
- Jane Yang is giving a talk on this work at VSS THIS SUNDAY! Session here!
- This fall, Lynna Tran will be joining as our lab coordinator, and Ellie Breitfeld will be joining as a postdoctoral fellow! We're so excited to welcome them to the lab!
- We had three posters at CDS 2026 (see right) -- email authors for details!

