
Advancing AI's Image Recognition

May 22, 2024

Figure: The researchers' two-step method for image segmentation. Features from a labeled support image are matched with a query image to produce an initial prediction (step one); a dense decoder then refines that prediction into the final mask (step two). Heatmaps show areas of high correlation between the two images at each stage.
Charalambos Poullis

Imagine if self-driving cars could quickly learn to recognize new road signs or if medical imaging systems could adapt to identify new abnormalities with just a few examples.

Researchers at Concordia University are making this possible. In a study published in the Nature Portfolio journal Scientific Reports, PhD student Amin Karimi and Professor Charalambos Poullis from the Gina Cody School of Engineering and Computer Science introduce a new method for improving how artificial intelligence (AI) recognizes images, even with very few examples to learn from.

This method, known as "few-shot semantic segmentation," allows AI to understand the contents of an image in great detail, identifying and labeling each pixel. The "few-shot" part means the AI can learn to recognize new objects from only a few labeled examples, much as humans can.
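
To make the idea concrete, here is a minimal PyTorch sketch of a classic prototype-matching baseline for one-shot segmentation. It illustrates per-pixel labeling from a single labeled example only; the tensor shapes, the cosine-similarity rule, and the 0.5 threshold are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn.functional as F

# Toy 1-shot episode: one labeled support image guides the segmentation
# of an unlabeled query image. All shapes and values are illustrative.
C, H, W = 256, 64, 64                       # feature channels, spatial size
support_feat = torch.randn(C, H, W)         # backbone features, support image
support_mask = torch.randint(0, 2, (H, W))  # per-pixel labels: 1 = object
query_feat = torch.randn(C, H, W)           # backbone features, query image

# Average the support features under the mask into a class "prototype".
prototype = support_feat[:, support_mask.bool()].mean(dim=1)        # (C,)

# Label every query pixel by its similarity to the prototype.
similarity = F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)
predicted_mask = (similarity > 0.5).long()  # dense, per-pixel prediction
```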

Karimi and Poullis developed a unique approach that combines information from two types of AI models: one that classifies whole images and another that breaks down images into their component parts. By integrating the strengths of these models, the researchers created a more powerful system for understanding images.
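
A hedged sketch of what such a fusion can look like in PyTorch: features from a whole-image classifier and a segmentation encoder are resized to a common grid and mixed with a 1x1 convolution. The class name, channel arguments, and fusion-by-concatenation are assumptions made for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedEncoder(nn.Module):
    """Illustrative fusion of two pretrained feature extractors."""
    def __init__(self, cls_backbone, seg_backbone, cls_ch, seg_ch, out_ch):
        super().__init__()
        self.cls_backbone = cls_backbone   # trunk of a whole-image classifier
        self.seg_backbone = seg_backbone   # encoder of a segmentation model
        self.fuse = nn.Conv2d(cls_ch + seg_ch, out_ch, kernel_size=1)

    def forward(self, x):
        f_cls = self.cls_backbone(x)       # semantically rich features
        f_seg = self.seg_backbone(x)       # spatially detailed features
        # Match spatial sizes, then mix the two views of the image.
        f_cls = F.interpolate(f_cls, size=f_seg.shape[-2:],
                              mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([f_cls, f_seg], dim=1))
```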

Amin Karimi

Their technique involves a special kind of learning called "transductive meta-learning," which helps the AI improve its performance by learning in two stages.

First, the AI learns to identify patterns within the labeled data it’s given, focusing on similarities between known objects.

Then, it uses this knowledge to make educated guesses about new, unlabeled images, improving its accuracy by reducing mistakes.
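
One common way to realize that second, transductive stage is entropy minimization on the unlabeled query: the model takes a few gradient steps that push uncertain pixel predictions toward confident ones. The sketch below shows this generic objective; the paper's exact loss may differ.

```python
import torch

# Transductive refinement sketch: after fitting on the labeled support
# set, penalize uncertain (high-entropy) predictions on the unlabeled
# query so its own structure sharpens the final mask.
def entropy_loss(logits):                  # logits: (num_classes, H, W)
    probs = torch.softmax(logits, dim=0)
    return -(probs * torch.log(probs + 1e-8)).sum(dim=0).mean()

# Per episode (outline):
#   1) adapt on the labeled support pixels with ordinary cross-entropy;
#   2) minimize entropy_loss(model(query_image)) for a few steps.
```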

When tested on standard image datasets, Karimi and Poullis' method achieved outstanding results, even with a relatively small number of adjustable settings (just 2.98 million parameters). For context, many advanced image recognition models have tens to hundreds of millions of parameters.
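
For readers curious how such a figure is measured, a parameter count is conventionally the sum of the elements of every trainable tensor in the network. A one-line PyTorch helper makes the 2.98-million figure concrete; the `model` variable is a stand-in for any network.

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the elements of every tensor that gradient descent can adjust.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# count_parameters(model) / 1e6  ->  e.g. 2.98 for the reported model
```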

This research opens up new possibilities for real-world applications where gathering large amounts of data is challenging and expensive. It represents a significant step towards creating AI systems that are more adaptable, efficient, and accurate, similar to human learning abilities.

Professor Charalambos Poullis leads the Immersive and Creative Technologies Lab in the Department of Computer Science and Software Engineering at Concordia University.



