In recent years, the field of artificial intelligence (AI) has garnered significant public attention owing to remarkable research advances, exemplified by systems like ChatGPT. While these advances offer tremendous potential for societal transformation, they also introduce tangible risks, particularly concerning safety and trustworthiness. Central to AI systems are machine learning (ML) algorithms, which learn from extensive datasets by fitting statistical patterns. This framework, however, is susceptible to spurious correlations in the data and to inherent simplicity biases in the algorithms, which prioritize accuracy over robustness. Consequently, vulnerabilities arise, including susceptibility to adversarial examples, difficulty handling distributional shifts between training data and real-world environments, and the perpetuation of biases and stereotypes present in the training data.
This talk will delve into the issue of adversarial examples, whereby seemingly inconsequential alterations to input data, such as small graffiti on a road sign, can deceive the computer vision system of an autonomous vehicle and cause it to misread the speed limit. While humans effortlessly discern the intended speed limit, machine learning algorithms struggle under such circumstances. By presenting theoretically grounded findings, my objective is to illuminate the fundamental limitations of the robustness of ML algorithms, and consequently of the more intricate AI systems built on them, in the face of these adversarial scenarios. These limitations stem from the complex high-dimensional geometry intrinsic to the data itself, coupled with the optimization procedures employed.
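As an illustration of how such adversarial alterations are constructed (not material from the talk itself), the sketch below applies the classic fast gradient sign method (FGSM) to a toy linear classifier; the weights `w`, input `x`, and step size `eps` are made up for the example:

```python
# Minimal sketch of the fast gradient sign method (FGSM) on a toy
# linear classifier: a one-step L-infinity perturbation that flips
# the prediction. All numbers here are illustrative assumptions.
import numpy as np

def fgsm(x, w, y, eps):
    """Move x in the direction that increases the logistic loss
    log(1 + exp(-y * w.x)) for the true label y in {-1, +1}."""
    grad = -y * w / (1.0 + np.exp(y * np.dot(w, x)))  # dLoss/dx
    return x + eps * np.sign(grad)

w = np.array([1.0, -1.0])   # fixed linear model: predict sign(w.x)
x = np.array([0.3, 0.2])    # clean input, score w.x = +0.1 (class +1)
x_adv = fgsm(x, w, y=+1, eps=0.2)

print(np.sign(w @ x))       # +1 on the clean input
print(np.sign(w @ x_adv))   # -1 after the small perturbation
```

Each coordinate moves by at most `eps`, yet the score crosses the decision boundary, mirroring how a small sticker on a sign can flip a vision model's output while leaving the sign legible to humans.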
Dr. Elvis Dohmatob is a senior researcher in machine learning (ML) at Meta's AI lab in Paris, where he works on theoretical and algorithmic aspects of ML, with a particular focus on the safety and trustworthiness of ML systems, including robustness and algorithmic fairness. Before joining Meta in 2021, he held a similar position at Criteo for three years. His PhD in Computer Science, obtained in 2017 from the University of Paris-Saclay under the supervision of Bertrand Thirion and Gael Varoquaux, explored questions at the intersection of machine learning, neuroscience, and signal processing. It was followed by a postdoc in the same lab under the supervision of Bertrand Thirion, after which he moved to industry in 2018 to begin a career in fundamental ML research.