Concordians at the heart of digital health

How AI-focused research is transforming the future of medicine
November 23, 2021
By Alexander Huls

Whether it’s facial-recognition technology, self-driving cars or Alexa fielding Spotify requests, artificial intelligence (AI) is transforming everyday life.

Consumer applications tend to get the spotlight, but AI is also poised to have a revolutionary impact on our health. Research and development is pushing its way into surgery, diagnosis, patient monitoring, elder care and more.

This, say Concordia researchers and alumni in the field, will lead to improvements in health and even the possibility of extending — and saving — lives.

Why machine learning and health?

Mojtaba Hasannezhad: “AI is a tool — it’s up to human researchers to find the applications.”

By 2027, the health-care AI market is expected to reach US$51.3 billion, according to a 2020 report published by Meticulous Research. That growth will be driven in part by the increasing accessibility of heavy computing power, which can meet the high demands of a common form of health-care AI: machine learning.

As the name implies, machine learning is a form of AI that echoes how humans absorb and process information. The AI is fed large amounts of data and, using complex algorithms, begins not just to learn from it but to know what to do with it. It can recognize patterns, make predictions, categorize and detect anomalies in ways — and at volumes — that humans can’t. What’s more, as it does all of that, it continues to learn, refining and improving its abilities.
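The loop described above — learn patterns from labeled examples, then classify new cases — can be illustrated with a toy nearest-neighbour classifier. This is a simplified sketch with made-up feature values, not any particular medical system:

```python
from math import dist

# Toy "training data": feature vectors (e.g., two measurements) with known labels.
TRAINING = [
    ((1.0, 1.2), "healthy"),
    ((0.9, 1.0), "healthy"),
    ((3.1, 2.9), "anomaly"),
    ((3.0, 3.2), "anomaly"),
]

def classify(sample, k=3):
    """Label a new sample by majority vote among its k nearest training points."""
    nearest = sorted(TRAINING, key=lambda pair: dist(sample, pair[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((1.1, 1.1)))  # near the "healthy" cluster → prints: healthy
```

Feeding the system more labeled examples refines the decision boundary, which is the sense in which the model "continues to learn."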

What’s driving a wave of AI health innovations isn’t just the increasing sophistication of AI technology, however. It’s also human ingenuity. “AI is a tool — it’s up to human researchers to find the applications,” says Mojtaba Hasannezhad, a Concordia PhD candidate in electrical and computer engineering working on AI-based assistants to help the elderly. “We have to be able to come up with the ideas of where we can use these amazing and advanced technologies.”

Can AI influence healthy habits?

Simon Bacon, Professor in the Department of Health, Kinesiology and Applied Physiology

Scrolling through Apple’s or Google’s app stores reveals a number of fitness apps that promise to improve our health. After downloading and trying an app, many users become annoyed with its notifications or ignore them altogether — 96 per cent stop using fitness apps after only 30 days.

Simon Bacon, a professor in the Department of Health, Kinesiology and Applied Physiology, is looking to apply AI to the problem and help apps better learn when and how to engage us to take care of our daily health needs.

“One of the key things in e-health is trying to understand how we get behaviour to change,” says Bacon, who also serves as the FRQS Research Chair in Artificial Intelligence and Digital Health for Behaviour Change and the Canadian Institutes of Health Research Strategy for Patient-Oriented Research Mentorship Chair in Innovative Clinical Trials. Bacon believes that identifying ambivalence is a key way to do it.

When a fitness app prompts someone to exercise or a calorie tracker reminds them to calculate that BLT they had for lunch, it can generate a strong reaction — especially if they’re unenthusiastic about the notification.

“People express ambivalence in a variety of different ways,” Bacon says. “Most of it is quite subtle.”

It could be a shrug of the shoulders, he notes, a furrowing of the brow, a muttered remark under the breath, or a twitch of the mouth. Bacon’s goal is to train an AI algorithm to be sophisticated enough to be able to identify those gestures by using something we all have.

“The fact that most digital devices have cameras and microphones provides us with a great opportunity to be able to measure ambivalence within an e-health application,” he says.

When the AI is given access to our phones, it can study our faces to identify ambivalence in order to do a better job of engaging us to work out or eat healthier. “The idea is to try to understand the person in front of the screen,” says Bacon. “So instead of giving them a standard, prescribed ‘do this, do that,’ it adapts to how the user is feeling at the time.”
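The adaptive behaviour Bacon describes — tailoring the prompt to how the user is feeling — could be sketched as a simple rule. The ambivalence score below is hypothetical (assumed to come from some facial- and voice-analysis model, scaled 0 to 1), and the thresholds are illustrative, not from Bacon’s research:

```python
def next_prompt(ambivalence: float) -> str:
    """Pick a prompt style from a hypothetical ambivalence score
    (0 = enthusiastic, 1 = resistant). Thresholds are illustrative only."""
    if ambivalence < 0.3:
        return "Ready for your workout? Let's go!"
    if ambivalence < 0.7:
        return "No pressure - even a 10-minute walk counts today."
    return "We'll check back later. Take care of yourself."

print(next_prompt(0.8))  # prints: We'll check back later. Take care of yourself.
```

A real system would also learn from how the user responds to each prompt style, closing the feedback loop.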

Bacon’s project isn’t the only health application of facial recognition. Recent years have seen the technology used to check in patients, diagnose diseases and identify signs of mental-health distress. All share a common goal: if eyes are windows to the soul, these AI systems want to use our faces to open the door to better health.

The power of chatbots and virtual assistants

Dinesh Gambhir, BEng 83, CEO, First Outcomes

Chatbots have already pervaded our lives in many ways. They’re ready to provide assistance on a customer support page when the internet goes out or ask whether help is needed when checking out products on a company’s webpage. Now they’re also making an impact in the health space.

First Outcomes, founded by CEO Dinesh Gambhir, BEng 83, is one company that’s mobilizing sophisticated AI chatbots to engage patients in health monitoring, procedure follow-ups and as-needed assistance.

For example, after an outpatient surgery, a doctor or nurse usually has to take time to place a phone call and check in on the patient’s progress. First Outcomes’ bots can do that instead. “The robots can reach out and ask the patient, ‘How are you doing?’” says Gambhir. If the answer is positive, the AI checks again at a pre-assigned time. Otherwise, “the bot can do a hard transfer to a triage nurse so a human can decide what to do next.”
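The follow-up flow Gambhir describes — an automated check-in that either reschedules itself or hands off to a human — amounts to a small decision routine. This is a hypothetical sketch, not First Outcomes’ actual system:

```python
def handle_checkin(patient_ok: bool, next_checkin_hours: int = 24) -> str:
    """Decide the bot's next action after asking 'How are you doing?'."""
    if patient_ok:
        # Positive answer: schedule the next automated check-in.
        return f"recheck in {next_checkin_hours}h"
    # Anything else: hard transfer so a human decides what to do next.
    return "transfer to triage nurse"

print(handle_checkin(True))   # prints: recheck in 24h
print(handle_checkin(False))  # prints: transfer to triage nurse
```

The key design choice is that the bot never makes a clinical decision itself: any non-positive answer escalates to a person.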

Gambhir’s AI isn’t limited to aftercare. It’s also being deployed to assist people with diabetes by monitoring glucose levels, and it can keep track of how medication changes affect health. The goal is not just to ensure optimal care but to free up health practitioners to focus on what they trained to do, rather than on administrative tasks.

“We train people to be caregivers, whether they are nurses, social workers or providers,” says Gambhir. “My job is to allow them to do the job they were educated for and what they want to do.”

In that way, First Outcomes is part of an incoming wave of AI bots aimed at helping clinicians stay closer to their patients in more efficient ways, while providing those patients with greater care.

Mojtaba Hasannezhad’s work represents a variation on First Outcomes’ technology. The Concordia 2021 Public Scholar contributed to a non-invasive device that can be plugged into a wall to monitor the elderly or the disabled. “If, for instance, someone coughs: What kind of cough is that? Is it a sign of some disease? The device can detect that and classify it,” says Hasannezhad.

The AI-driven tool can monitor vital signs and recognize falls or respiratory distress. It can even learn a person’s daily routines — their physical activity, how long they’re in bed or in the bathroom — in order to identify deviations that could be cause for emergency. In those cases — much like Gambhir’s bots — it becomes about providing patients with better care.
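Flagging deviations from a learned routine can be sketched as a simple statistical check: build a baseline from a person’s own past observations, then flag a value that falls far outside it. This is a toy illustration of the idea, not Hasannezhad’s device:

```python
from statistics import mean, stdev

def is_deviation(history, today, threshold=3.0):
    """Flag today's value (e.g., minutes spent in the bathroom) if it sits
    more than `threshold` standard deviations from the personal baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(today - mu) > threshold * sigma

bathroom_minutes = [12, 15, 11, 14, 13, 12, 15]  # a week of typical readings
print(is_deviation(bathroom_minutes, 90))  # far outside the routine → prints: True
```

Because the baseline is built per person, the same threshold adapts to very different routines.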

“The same device can communicate with the residents. If something happened to them it can ask them if they are okay,” says Hasannezhad. Or, it can alert caregivers that their patients may need assistance.

Safer, more informative scans

Marta Kersten-Oertel: “How can we create these tools and technologies so that they’re benefiting the whole world?”

One of the most life-changing — and potentially saving — emerging uses of AI is its application in medical imaging, such as CT scans, radiography, ultrasounds and more.

A current struggle in health care is that medical-imaging professionals haven’t been able to keep up with demand. The hundreds of images that CT, MRI and other scans produce can overwhelm practitioners and lead to mistakes. Those mistakes can be harmful, given that imaging devices are often responsible for identifying life-threatening conditions.

Human vision is also limited in what it can detect, leading to missed early signs of something potentially fatal. That’s where AI comes in.

The technology can’t be overworked and can spot what the human eye can’t. Huge amounts of data — made up of medical images that do and do not show traces of disease — have become available through open-source libraries. These images are fed to an AI that can quickly learn, for example, what a lung-cancer nodule looks like versus what healthy tissue looks like.
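That training step — showing the model labeled examples until it separates “disease” from “no disease” — can be illustrated with a bare-bones perceptron over made-up image features. Real imaging systems use deep neural networks on the pixels themselves; this sketch only shows the learning loop:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights that separate label 1 ('nodule') from 0 ('clear')."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for features, label in examples:
            pred = 1 if sum(wi * x for wi, x in zip(w, features)) + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * x for wi, x in zip(w, features)]
            b += lr * err
    return w, b

# Made-up features (e.g., brightness, roundness) for labeled scan regions.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.2), 0)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print(predict((0.85, 0.9)))  # region resembling the nodule examples → prints: 1
```

Each wrong prediction nudges the weights toward the correct label, which is the same correction principle that drives far larger imaging networks.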

“The idea is to make sense of these images and to spot patterns that are not always visible to the human eye,” says Marta Kersten-Oertel, an associate professor in the Department of Computer Science and Software Engineering, whose research includes AI-powered diagnosis of CT scans for strokes.

With that comes the ability to detect cancers, strokes, cognitive disorders and more — and the chance to save lives.

“It has the promise of detecting the cancer in early stages, when the patient still has a high chance of survival,” says Parnian Afshar, a PhD student with the Concordia Institute of Information Systems Engineering (CIISE) who has worked on deep-learning-enabled imaging that has successfully detected brain tumours and lung-cancer nodules. Her AI’s detection and diagnostic abilities were even adapted during the pandemic, applying the technology to chest scans to determine whether patients had COVID-19.

This type of technology has already proven to be effective. In 2019, researchers at Google and academic medical centres put a lung-cancer-detecting AI up against six radiologists. The AI spotted signs of cancer with 94 per cent accuracy — better than the humans. In some cases, the AI even noticed indications in early scans that the doctors didn’t. AI’s keen “eyes” won’t be limited to medical imaging, either.

“One of my main research areas is actually image-guided surgery and augmented reality,” explains Kersten-Oertel, who also serves as the Concordia University Research Chair in Applied Perception. “There’s a lot of work being done on using AI to answer questions like, ‘Is it safe to cut here or not?’”

Not without risks

Parnian Afshar: “It has the promise of detecting the cancer in early stages, when the patient still has a high chance of survival.”

A common fear of AI is that it will make certain jobs obsolete. In the health industry, that will likely be less of a concern.

“It should be a tool that is used by people and not something that replaces people,” says Kersten-Oertel. “It is important to always have a human in the loop.”

Whether it’s AI-guided imaging or surgery, it’s not a matter of making health-care practitioners obsolete, but easing people’s workloads and providing tools to help them in their daily clinical tasks, she adds.

“They all revolve around developing tools to help clinicians or patients. They can be used to train the next generation of radiologists, clinicians and surgeons.”

Job loss may not be a major issue, but there are others. The data AI requires is filled with personal information — names, medical histories and more. This triggers concerns over privacy.

Initiatives like Bacon’s and Gambhir’s would require consent to opt in, of course, but that’s not to say there aren’t risks.

“As we become more sophisticated in terms of developing programs and algorithms, in parallel there are unscrupulous people out there who are developing viruses and hacks,” notes Bacon.

Ethical challenges will also require monitoring. The high cost of developing and deploying AI can restrict access to wealthier nations, leaving lower-income countries behind.

“One of the things we have to think about is how we can make AI accessible to everyone,” says Kersten-Oertel. “How can we create these tools and technologies so that they’re benefiting the whole world?”

Societal imbalances can also find their way into AI itself. Data may seem neutral, but it isn’t immune to institutional biases. If the data an AI learns from isn’t inclusive and representative of all kinds of people, the AI will internalize the biases we see more broadly in our society. Hospitals in affluent areas have more resources and tend to see better patient outcomes; hospitals in more disadvantaged places lack those resources and may see worse ones. If only the first hospital’s data is fed to an AI, the result will misrepresent patients’ experiences at the second.

For example, research has shown that women of colour, and particularly Black women, are often diagnosed with breast cancer later than white women. That means AIs that have been fed non-inclusive mammogram data could fail to account for that crucial disparity.

“We have to be aware of these existing biases and do our best to make sure that we’re not propagating them,” says Kersten-Oertel.

Bacon expresses a similar imperative for AI to produce broad, inclusive outcomes in health care. “We want people to be healthier and happier. We want them to live longer, fuller lives. I know it sounds very grandiose, but that’s what drives us. It motivates us. That’s really what’s at the core of all of this work.”
