
Conferences & lectures

AI and Human Rights Forum

An open dialogue with leading Artificial Intelligence and human rights experts

Date & time

Wednesday, April 15, 2020
9:30 a.m. – 4:30 p.m.




Marie Lamensch


J.W. McConnell Building
1400 De Maisonneuve Blvd. W.
J.A. De Sève Cinema

Wheelchair accessible


In 2019, the Montreal Institute for Genocide and Human Rights Studies (MIGS) at Concordia launched the Human Rights and Artificial Intelligence Forum. The event convened thought leaders, government officials and experts in technology and human rights to discuss the implications of new technology for global affairs.

In 2020, MIGS is scaling up the event in collaboration with Element AI and the Embassy of the Kingdom of the Netherlands in Ottawa, under the patronage of the Canadian Commission for UNESCO. This year's forum will bring together non-governmental organizations, tech companies, foundations and governments around the globe that understand the need to collaborate on the future of artificial intelligence (AI) and human rights.

Panel discussions will cover such topics as disinformation, online hate, ethics, AI governance, the United Nations (UN) and global cooperation. The forum will serve as an incubator for forming new partnerships between academics, civil society, the UN and the private sector.

Speakers will be announced soon.

Registration is mandatory.

Draft agenda

1. Internet governance and AI: ensuring space for human rights

Over the past few years, the issue of AI and human rights has come under increasing scrutiny from international bodies, civil society organizations and tech experts. While some use AI for social good, there is already evidence that these tools can also be weaponized to commit human rights abuses.

As we grapple with the societal and human impact of AI systems, the UN, governments, and researchers have started to debate the importance of AI governance for the protection of rights and freedoms. Can international human rights help govern AI research and application? How can stakeholders work together to safeguard against the abuse of AI systems?

2. The fight against online hate and extremism: is AI the solution?

The internet has opened the space for more freedom of expression for citizens around the world, but online spaces have also opened the door for more hate speech. Domestic and foreign extremists can disseminate their hateful propaganda and connect with other extremists. In the past few years, there has been more pressure on tech giants such as Facebook, YouTube and Twitter to address this important issue.

In response, these firms have increased their use of AI programming to identify and remove online hate and extremism. How effective are these AI programs? While using AI solutions to identify and counter online hate can be effective in several cases, there are also examples of misidentification and bias. Ending online hate and extremism will, therefore, require the collaboration of the UN, governments, tech companies and civil society.

How can these stakeholders effectively use AI in conjunction with humans to accurately fight online extremism?

3. Misinformation and AI: friends or foes?

Tech giants and governments around the world are struggling to deal with disinformation, especially as it has become clear that disinformation generated and amplified by AI is making this complex problem bigger and more dangerous. From deepfakes to MADCOMS, tech experts have warned that this technology will make the distribution of misinformation more efficient, invasive and personalized.

At the same time, AI tools are also creating new methods to fight misinformation by detecting deepfakes and fact-checking messages. This panel will explore advances in AI and its possible misuses for misinformation.

4. Understanding surveillance and digital authoritarianism

AI-powered surveillance systems and facial recognition technologies have raised widespread privacy concerns, including in democratic countries such as the United States. Meanwhile, authoritarian regimes around the world are rapidly becoming digital authoritarian states that use technology to surveil, repress and manipulate populations at home and abroad.

As these regimes export their technologies abroad, the power balance between autocracies and democracies is changing. Canada and like-minded countries need to commit to stronger responses to defend the fundamental rights and freedoms of citizens.

5. AI and the UN's Sustainable Development Goals

The United Nations' efforts to achieve the Sustainable Development Goals (SDGs) by 2030 are ambitious. As AI technologies start to change our societies, what are the benefits and challenges for the SDGs? While AI can help achieve some of these targets, could it also inhibit others? This panel will explore the impact of AI on the achievement of the SDGs and the regulatory oversight needed to guarantee sustainable development.

6. The ethics of AI: how can we ensure democratic governance?

AI's potential to either uphold or undermine democracy and the rule of law makes it crucial to ensure its ethical development. As we question the impact, governance, ethics and accountability of these technologies, how can we harness the potential of AI for democratic values and equality? Could the international human rights law framework be the best approach to governing AI?

© Concordia University