The fields of neuroscience and AI have a long and intertwined history. From the study of simple and complex cells in visual areas of the brain to the recent success of convolutional neural networks in many real-world applications, experimental and theoretical neuroscience has contributed significantly to designing smarter machines. In turn, AI models help us better understand the brain computations that underlie biological intelligence.
In my talk, I will present several efforts to decipher brain function by building computational models and quantifying model behaviors with human benchmarks in visual attention and object recognition.
First, I will talk about anticipating where humans will attend in the next few seconds in first-person videos. Second, I will describe mechanisms of invariant and zero-shot visual search. Third, I will describe a model of how contextual reasoning can aid visual recognition, where we systematically investigated critical properties of where, when, and how context modulates recognition in humans and machines.
Finally, I will propose future research directions toward bridging the gaps between artificial and biological intelligence.
Bio
Mengmi Zhang is a research scientist and principal investigator at the Agency for Science, Technology and Research (A*STAR), Singapore. Prior to this, Dr. Zhang was a postdoctoral fellow with Gabriel Kreiman at Harvard Medical School from 2019 to 2021. She obtained her PhD at the National University of Singapore (2015-2019) and was a visiting graduate student in the KreimanLab at Harvard Medical School (2017-2018). Her research lies at the intersection of artificial intelligence and computational neuroscience. She has made contributions to understanding gaze anticipation, zero-shot visual search, contextual reasoning, and continual learning.