'Hey Siri, are robots inherently biased?'
Concordia’s long-time expertise in machine intelligence has helped Montreal achieve its current status as a global AI hub. This year the university is celebrating the 30th anniversary of its Centre for Pattern Recognition and Machine Intelligence (CENPARMI).
To mark the occasion, CENPARMI is hosting the inaugural International Conference on Pattern Recognition and Artificial Intelligence at Concordia from May 13 to 17.
The five-day event kicks off with a free public lecture on AI, pattern recognition, natural language processing and machine learning (including deep learning).
Sabine Bergler, professor in the Department of Computer Science and Software Engineering, is one of the conference co-chairs.
Last fall, as part of Concordia’s 2017 Homecoming celebrations, Bergler and Anne Martel (BFA 09), co-founder and senior vice-president of operations for Element AI, sat down to discuss artificial intelligence (AI) and what we can expect to see in the coming months and years.
In advance of the upcoming conference, Bergler and Martel agreed to once again share some of their perspectives on AI, from its legal and economic implications to why Siri just can’t understand your question.
‘We need to be critically engaged in how we let technologies impact our lives’
First off, what is artificial intelligence?
Sabine Bergler: When you speak of AI you must speak of the impossible. Once something in the realm of AI becomes functional, we give it another term, like robotics or machine learning. AI has shattered into many subfields over the years, with the most recent spin-off technology being deep learning.
A concern many have with AI is that we'll be put out of work by machines. What impact will its development have on the economy?
Anne Martel: People tell me all the time that AI will affect jobs negatively. It’s a $15-trillion market worldwide. It isn’t going to eliminate jobs, but it will change the jobs that are available. And governments will need to mitigate the impacts.
Simply put, as new tools are introduced to the market, certain skills will no longer be in high demand but other things will emerge in their place.
It's not likely that this will be a cookie-cutter process across industries. It will have to be addressed at an individual level.
Should we be worried about bias in the way machines learn?
SB: It's important to note that bias has different meanings. In machine learning, bias is value-neutral, but in popular culture we understand it as negative. A machine will only learn what you input. Even deep learning needs guidance.
If the data input isn't controlled, or if a large set of the data over-represents certain thoughts or opinions, the result will tend in that direction, which we call bias whether it's good or bad.
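Bergler's point about over-represented data can be sketched in a few lines. The toy "classifier" below is a hypothetical illustration, not any real system: trained on a deliberately skewed sample, it simply echoes the skew, regardless of what it is later asked to judge.

```python
from collections import Counter

# Hypothetical, deliberately skewed training sample: 90% of the
# labelled examples express one opinion, 10% the other.
training_labels = ["opinion_a"] * 90 + ["opinion_b"] * 10

# A trivial "majority class" learner: it predicts whichever label
# dominated its training data, no matter what the new input says.
def train_majority_classifier(labels):
    majority_label, _ = Counter(labels).most_common(1)[0]
    return lambda _text: majority_label

classify = train_majority_classifier(training_labels)

# The output reflects the imbalance in the inputs, not anything
# about the text being judged -- the "bias" Bergler describes.
print(classify("a perfectly neutral sentence"))  # opinion_a
```

Real learning systems are far more complex, but the failure mode is the same: whatever dominates the training data dominates the output.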
AM: We saw what Sabine mentions with Tay, an AI chatbot that was released by Microsoft. People fed it an overwhelming amount of hate speech and it began to produce pro-Nazi tweets. There was no filter in place to catch these comments.
To avoid these kinds of things we need to build in safety nets. We need to make sure we are reverse-engineering to find the bias in the data that is used in the deep learning of machines.
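At its simplest, the "safety net" Martel describes is a check that screens generated text before it is published. The blocklist and function names below are invented for illustration; production systems use trained toxicity classifiers rather than a fixed word list, and Tay's missing filter is not documented in this form.

```python
# Hypothetical blocklist of terms the bot must never publish.
BLOCKED_TERMS = {"hate_term_1", "hate_term_2"}

def safe_to_publish(generated_text: str) -> bool:
    """Return False if the text contains any blocked term."""
    words = generated_text.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

def publish(generated_text: str) -> str:
    # Screen the bot's output before it reaches the public feed.
    if safe_to_publish(generated_text):
        return generated_text
    return "[message withheld by safety filter]"
```

The design point is where the check sits: between the model's output and the public, so that a poisoned model cannot speak unvetted.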
What are the legal implications of AI as we are seeing it now?
AM: Existing legal issues are amplified by new AI technologies. The issue is not simply that we need laws surrounding something like self-driving vehicles; it is how easy it could be to hack into them.
Hacking technology is an issue that has been around for decades. The introduction of new AI just provides a different lens to view its legal implications.
SB: Every technological project brings opportunities to abuse it. The real concern is when we begin abdicating to machines and become complacent in our reliance on them. We need to be critically engaged in deciding how we let these technologies impact our lives. And we need to understand what we are giving up or losing in the process.
Machines are winning at Jeopardy and chess, but Siri still has trouble understanding simple questions. What are some of the technical victories and challenges in improving AI?
SB: Our expectations of Siri are unreasonable. People think that machines will do everything without human interaction. But the machine needs to be exposed to as much data as possible to “know” the answer to a question.
It takes time and several iterations to provide the data needed to get there. In the case of Siri, the internet can stand in for only some of that.
AM: This is our biggest challenge at Element AI. Companies expect that we will come in and build a solution that will revolutionize their business overnight. But really it is about building a solution that addresses their needs, using their data to do it. It’s about using what’s available to make the right decision.
What is the next big breakthrough that we can expect in AI?
SB: I would like to see systems develop more explanation facilities so that applications can expand. For more elegant tools to be developed we need to be able to communicate the thought process of the system to the user so that they can better understand and trust the decisions that are being made.
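One minimal form of the "explanation facility" Bergler describes: a scoring model that reports how much each input contributed to its decision, so the user can see why the answer came out as it did. The features and weights below are invented purely for illustration.

```python
# Hypothetical linear scoring model; the feature names and
# weights are invented to illustrate per-feature explanations.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant: dict):
    # Each feature's contribution is its value times its weight,
    # and the breakdown is returned alongside the final score.
    contributions = {f: applicant[f] * w for f, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 3.0}
)
print(round(total, 2))  # the decision
print(why)              # the reasons behind it
```

A user shown `why` can see, for instance, that debt pulled the score down, which is exactly the kind of communicated "thought process" that builds trust in a decision.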
AM: I think we will see the rise of personal adaptability in these technologies, where the user can influence the learning of the machine to make the technology more pertinent. This would make it so that tools like Siri can adapt to your particular need for information and your preferences in how you get it.
The free lecture at the International Conference on Pattern Recognition and Artificial Intelligence is on May 13 from 2 to 4 p.m. in the Sir George Williams University Alumni Auditorium (H-110). The conference runs May 13 to 17. Registration is now open.