Friday, October 13
9:30 AM - 4:00 PM EDT
Skylight Room and via Zoom
Watch the lectures online
9:30 AM
Coffee and bagels
—
10:00 AM - 11:30 AM
Dense associative memory for novel transformer architectures
Dmitry Krotov
MIT-IBM Watson AI Lab and IBM Research
—
11:30 AM - 12:00 PM
Break
—
12:00 PM - 1:30 PM
Toward a brain-inspired model of the flexibility and efficiency of human cognition
Jonathan Cohen
Princeton Neuroscience Institute
Modern AI is generally built around symbolic architectures or neural networks. Symbolic systems are flexible but can be difficult to configure and inefficient to execute for complex problems. Neural networks can be trained to execute complex functions efficiently, but they require massive amounts of data to do so and cannot be applied broadly to new domains that share underlying structure without considerable retraining. In contrast, the human brain achieves both in a single computational architecture. It can carry out symbolic processing, identifying fundamental regularities and flexibly generalizing abstract structure across domains, while at the same time learning complex representations and functions in specific domains and computing them with remarkable efficiency. In this talk, I will highlight ongoing work at the intersection of cognitive neuroscience and machine learning that is revealing fundamental principles of how subsystems in neural architectures, both in the brain and in artificial systems, can interact to achieve the combination of flexibility and efficiency characteristic of human cognition.
—
1:30 PM - 2:30 PM
Lunch
—
2:30 PM - 4:00 PM
Stochastic interpolants: A unified framework for generative modeling with flows and diffusions
Eric Vanden-Eijnden
Courant Institute, New York University
Organizers: Kamesh Krishnamurthy (Princeton); Francesca Mignacco (CUNY)
Sponsored in part by the NSF-supported Center for the Physics of Biological Function, a joint effort of the CUNY Graduate Center and Princeton University.