
Concordia researcher explores how to make AI ‘more intelligent’

Simone Brugiapaglia: ‘Despite the tremendous success of deep learning in countless applications, the mathematics of it is still in its infancy’
April 1, 2022
Simone Brugiapaglia: “I am looking forward to seeing how the Canadian mathematical community will receive our work.”

Artificial intelligence (AI) has become an omnipresent part of our day-to-day lives, even more so through the COVID-19 pandemic.

While many people think little of trading aspects of their private lives to AI systems in exchange for convenience, what happens when these systems get things wrong?

Simone Brugiapaglia, an assistant professor of mathematics and statistics in the Faculty of Arts and Science, recently co-authored a paper on this very question. “Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data,” to appear in the Proceedings of Machine Learning Research, examines how to make AI “more intelligent.”

‘There are plenty of important research questions waiting for an answer’

Many people associate deep learning with high-level scientific work but might not realize how much they use it in their everyday lives. What are some ways most people are using deep learning technology?

Simone Brugiapaglia: One of the most impressive features of deep learning is its extreme versatility. For example, deep learning is used to perform speech synthesis in Apple’s Siri and speech recognition in the conversational engine of Amazon’s Alexa. Another popular application of deep learning that we use very often (depending on how much TV you watch) is the recommender system that Netflix employs to suggest shows we are likely to enjoy. Deep learning is also an essential component of the computer vision system behind Tesla’s autopilot.

Tell us about your study

SB: Many mathematical results on deep learning take the form of existence theorems: they assert the existence of neural networks able to approximate a given class of functions up to a desired accuracy. However, most of these results do not address whether such networks can actually be trained, nor do they quantify the amount of data needed to do so reliably.
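
To make the distinction concrete, a classical existence theorem can be sketched in the following schematic form (an illustrative template, not the precise statement of any particular result):

\[
\forall f \in \mathcal{F},\ \forall \varepsilon > 0,\ \exists \text{ a neural network } \Phi \ \text{ such that } \ \|f - \Phi\| \le \varepsilon,
\]

where $\mathcal{F}$ is the class of functions and $\|\cdot\|$ is a suitable norm. A statement of this kind guarantees that a good network exists, but it says nothing about how to compute $\Phi$ from data.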

In our paper, Ben Adcock, Nick Dexter, Sebastian Moraga (all based at Simon Fraser University) and I address these issues by proving so-called practical existence theorems. Besides showing the existence of neural networks with certain desirable approximation properties, our results provide conditions on the training procedure and the amount of data sufficient to achieve a certain accuracy.
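
Schematically, a practical existence theorem strengthens this to a statement of roughly the following shape (again an illustrative template; the norms, constants and error terms below are placeholders, not the paper’s actual bounds):

\[
\text{given } m \text{ training samples of } f, \text{ the trained network } \hat{\Phi} \text{ satisfies } \ \|f - \hat{\Phi}\| \le C \big( E_{\mathrm{approx}} + E_{\mathrm{samp}}(m) \big)
\]

with high probability, where $E_{\mathrm{approx}}$ measures how well the function class can be approximated and $E_{\mathrm{samp}}(m)$ shrinks as the number of samples $m$ grows.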

What attracts you to this topic?

SB: Despite the tremendous success of deep learning in countless applications, the mathematics of it is still in its infancy. From the applied mathematician’s perspective, this is exciting since there are plenty of important research questions waiting for an answer.

Another fascinating aspect of the mathematics of deep learning is its high level of interdisciplinarity. For example, to obtain the practical existence theorems of our paper, my collaborators and I combine elements from approximation theory, high-dimensional probability and compressive sensing.

Finally, I’m greatly motivated by thinking about how new theoretical insights can inform deep learning practitioners, leading them to deploy more reliable algorithms in the real world.

Finally, what’s next?

SB: A huge project that I have just completed in collaboration with Adcock and Clayton Webster (University of Texas) is the book Sparse Polynomial Approximation of High-Dimensional Functions. It was just published by the Society for Industrial and Applied Mathematics (SIAM).

Our book presents the theoretical foundations of sparse high-dimensional approximation that made our practical existence theorems possible. The book has a final chapter entirely devoted to open problems in the field, and it will form the basis for exciting new research in the years to come. I will also teach a mini-course based on the book at the upcoming summer meeting of the Canadian Mathematical Society. I am looking forward to seeing how the Canadian mathematical community will receive our work.


Read the cited paper: “Deep Neural Networks Are Effective At Learning High-Dimensional Hilbert-Valued Functions From Limited Data.”

Learn more about Concordia’s Department of Mathematics and Statistics.

 


