A common fear of AI is that it will make certain jobs obsolete. In the health industry, that will likely be less of a concern.
“It should be a tool that is used by people and not something that replaces people,” says Kersten-Oertel. “It is important to always have a human in the loop.”
Whether it’s AI-guided imaging or surgery, it’s not a matter of making health-care practitioners obsolete, but easing people’s workloads and providing tools to help them in their daily clinical tasks, she adds.
“They all revolve around developing tools to help clinicians or patients. They can be used to train the next generation of radiologists, clinicians and surgeons.”
Job loss may not be a major issue, but there are others. The data AI requires is filled with personal information — names, medical histories and more. This triggers concerns over privacy.
Initiatives like Bacon’s and Gambhir’s would, of course, require participants to opt in with their consent, but that’s not to say there aren’t risks.
“As we become more sophisticated in terms of developing programs and algorithms, in parallel there are unscrupulous people out there who are developing viruses and hacks,” notes Bacon.
Ethical challenges will require monitoring. The high cost of developing and deploying AI can concentrate access among people in wealthier nations while neglecting lower-income countries.
“One of the things we have to think about is how we can make AI accessible to everyone,” says Kersten-Oertel. “How can we create these tools and technologies so that they’re benefiting the whole world?”
Societal imbalances can also find their way into AI itself. Data may seem neutral, but it isn’t immune to institutional biases. If the data an AI learns from isn’t inclusive and representative of all kinds of people, the AI will internalize the biases we see more broadly in our society. Hospitals in affluent areas have more resources and tend to see better patient outcomes; hospitals in more disadvantaged places lack those resources and may see worse outcomes. If data from the first hospital is fed to an AI, the result will misrepresent the experiences of patients at the second.
For example, research has shown that women of colour, and particularly Black women, are often diagnosed with breast cancer later than white women. That means an AI trained on non-inclusive mammogram data could fail to account for that crucial disparity.
“We have to be aware of these existing biases and do our best to make sure that we’re not propagating them,” says Kersten-Oertel.
Bacon expresses a similar imperative for AI to produce broad, inclusive outcomes in health care. “We want people to be healthier and happier. We want them to live longer, fuller lives. I know it sounds very grandiose, but that’s what drives us. It motivates us. That’s really what’s at the core of all of this work.”