
Concordia professor uses machine learning to improve software security

Yann-Gaël Guéhéneuc wants to create a tool that allows developers to check their code for security risks
April 15, 2019
Yann-Gaël Guéhéneuc: “This could be something that developers embed in their toolbox.”

Software is a huge part of our world.

From flying a plane to paying taxes to simply turning on a computer in the morning — there are very few activities that don’t involve software in some way.

But poorly written code can damage computer systems or, worse, expose users to data breaches by malicious actors.

Yann-Gaël Guéhéneuc, professor in the Department of Computer Science and Software Engineering, wants to reduce that risk.

Using data from various software systems, he’s teaching machine learning algorithms to develop their own rules for software quality — what’s acceptable and what might represent a security risk to the user.

The goal is to one day have a tool that software developers can use to check over their code when they’ve finished writing it — sort of like spell check in Microsoft Word.

Yann-Gaël Guéhéneuc, professor in the Department of Computer Science and Software Engineering.


How are you using artificial intelligence (AI) in your research?

Yann-Gaël Guéhéneuc: We’re researching how to improve software quality, and one of the problems is that enforcing quality requires clear, strict rules. But we cannot write very explicit rules to measure quality because there are too many factors to take into account.

Quality control for software is complicated: everything is in bits and pieces in your computer, and software is written by people. A novice will produce different quality than an expert.

One of the ways to address that is machine learning. We take software systems, we measure a lot of things and then we give all this data to the machine learning algorithms that will infer rules by themselves.

Right now the algorithm generates a list of pieces of software, saying “this piece is pretty good” and “this piece is pretty bad.” Then we can take a deeper look.
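The pipeline described above can be sketched in a few lines: measure some metrics for each piece of software, label a few examples, and let the algorithm infer its own quality rule rather than hand-writing one. The metric names, numbers, and the tiny threshold learner below are all illustrative assumptions, not the research team's actual method, which would use far richer measurements and real machine learning models.

```python
from statistics import mean

# Each "piece of software" is a dict of measurements plus a quality label.
# These metrics and values are made up for illustration.
labeled_pieces = [
    ({"lines": 40,  "complexity": 3,  "params": 2}, "good"),
    ({"lines": 55,  "complexity": 4,  "params": 3}, "good"),
    ({"lines": 300, "complexity": 25, "params": 9}, "bad"),
    ({"lines": 250, "complexity": 18, "params": 7}, "bad"),
]

def infer_rule(examples):
    """Infer a per-metric threshold: the midpoint between the mean value
    over 'good' pieces and the mean value over 'bad' pieces."""
    rule = {}
    for m in examples[0][0]:
        good = mean(x[m] for x, label in examples if label == "good")
        bad = mean(x[m] for x, label in examples if label == "bad")
        rule[m] = (good + bad) / 2
    return rule

def classify(rule, piece):
    """Majority vote: a piece is 'bad' if most metrics exceed the
    inferred thresholds."""
    votes = sum(piece[m] > t for m, t in rule.items())
    return "bad" if votes > len(rule) / 2 else "good"

rule = infer_rule(labeled_pieces)
print(classify(rule, {"lines": 35, "complexity": 2, "params": 1}))     # → good
print(classify(rule, {"lines": 400, "complexity": 30, "params": 12}))  # → bad
```

The point of the sketch is the division of labour: humans supply measurements and a handful of labels, and the rule itself is inferred from the data rather than written by hand.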

Typically we’re looking at software security, because security is a big concern. Eventually the algorithm should be able to tell us, “This piece of code is unsafe. It’s not secure. You have to rewrite, redesign or modify it to make it more secure.”

How did you get interested in this line of work?

YGG: I’ve been interested in software quality for a long time because a lot of cost goes into it. It’s one of those things where nobody wants to pay for it, but everybody wants it.

As users of Windows or Macintosh, we want things to go fast, to be secure, to be easy to use, but actually many of those things are in opposition. It’s hard to have a very fast and very secure system.

That’s what motivated me to follow this line of questioning — can we actually find ways to quantify quality?

How could your research be applied to make improvements in our daily lives?

YGG: One eventual possibility — we’re not there yet — is to have more secure software systems thanks to rules that have been inferred by machine learning. Developers are good, and there’s so much they can do, but with machine learning we may actually be able to help them.

If you’ve got a new system with new code that has security issues here and there, machine learning can help improve its quality.

This tool could be something that developers embed in their toolbox. After developers write their code, it could tell them how to improve the security, how to improve the overall software quality. It could be very similar to what you have in Microsoft Word, when you write a piece of text and you get the little red line to highlight typos.
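The "spell check for code" idea can be illustrated with a toy lint pass: scan source code and flag functions that break a simple rule, the way a word processor underlines typos. The size threshold and the check itself are assumptions chosen for illustration; the envisioned tool would instead apply rules inferred by machine learning.

```python
import ast

MAX_FUNCTION_LINES = 5  # illustrative threshold, not a real guideline

def lint(source):
    """Return a warning for each function longer than MAX_FUNCTION_LINES."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                warnings.append(
                    f"line {node.lineno}: function '{node.name}' is "
                    f"{length} lines long; consider splitting it"
                )
    return warnings

sample = """
def short():
    return 1

def long_one(a, b):
    x = a + b
    x = x * 2
    x = x - 1
    x = x // 3
    x = x + a
    return x
"""
for warning in lint(sample):
    print(warning)  # flags long_one, leaves short alone
```

Just as with spell check, the developer stays in control: the tool only highlights suspect spots, and the decision to rewrite is theirs.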

How do you see advances in AI beginning to change the way we do things?

YGG: There are many examples. We’re seeing more and more AI-controlled traffic lights and even traffic flow, with adjustments near road blocks, etc. It’s also happening in medicine, with more and more AI being used to help doctors with treatments.

On the user end, it’s unfortunately sometimes being used to control or to target specific groups — that’s a big debate right now.

What role do you see for Montreal in the future of AI development?

YGG: Montreal’s the place to be right now for AI research. There are so many researchers, it’s like a huge critical mass working in AI in the city. We have the group at the Gina Cody School of course, and there are groups at McGill University, Université de Montréal and Université du Québec à Montréal (UQAM).

And as a city, Montreal is very into using AI, so it’s a good combination.

Find out more about artificial intelligence research at Concordia.




© Concordia University