Digital Mass Atrocity Prevention Lab

Social media and the internet enable extremist groups to disseminate propaganda, advertise their crimes, incite violence, and radicalize and recruit disenfranchised young adults to their hateful cause. As the fight against extremism increasingly takes place in cyberspace, what can governments, international organizations, civil society groups and individual citizens do to counter online extremism?

The Digital Mass Atrocity Prevention Lab (DMAP Lab) is a policy hub working to counter hate speech, combat genocidal ideologies online, and act as a counterforce against extremists and their ideas.

Our goals and services

  • Research and analyze key actors and drivers of online extremism and radicalization
  • Consultancies
  • Develop tools and strategies to counter extremists who use social media, artificial intelligence and other digital technologies as a weapon of war
  • Propose policy recommendations to governments, NGOs, UN agencies and other stakeholders
  • Provide specialized training and policy advice
  • Bring together policymakers, journalists, academics, tech experts, human rights activists and community leaders to create a global network as a force for good
  • Research how new technologies such as artificial intelligence can be used for both positive and negative purposes, and map key actors working on AI

Initiatives

  • Designed and delivered a pilot course on social media and public diplomacy for Global Affairs Canada.
  • Mapping the artificial intelligence, networked hate and human rights project
  • Convened a meeting between Montreal’s business and human rights communities for an important discussion on artificial intelligence and human rights, featuring Michelle Bachelet, the United Nations High Commissioner for Human Rights, Yoshua Bengio of Mila, and Jean-Francois Gagné, CEO of Element AI.
  • Established a policy research initiative with Global Affairs Canada and Tech Against Terrorism to map out the nexus between online extremism, hate and Artificial Intelligence. Oversaw the drafting of the policy report “Artificial Intelligence, Networked Hate and Human Rights”.
  • Organized two SSHRC-sponsored events: “Rwanda and Beyond: Media and Mass Atrocities” in partnership with CIGI and Carleton University, and “Global Diplomacy in the Digital Age: Decoding How Technology is Transforming International Relations.”
  • Established a partnership between Concordia University and Facebook and trained over 20 students on AI and digital technologies, who then participated in the Facebook Global Digital Challenge.
  • Participated in the Quebec-UNESCO conference on the Internet, Youth and Radicalization.
  • Organized and hosted the Global Diplomacy Lab in Montreal, gathering over 60 young global leaders to address the theme “Decoding Global Diplomacy: Balancing Power through Information Technology.”
  • Co-organized the conference “Global Diplomacy in the Digital Age: Decoding How Technology is Transforming International Relations.”
  • Created the Global Humanitarian Twitterati to highlight the world’s top human rights activists using Twitter.
  • Participated in #HackingConflict, a Dutch-Canadian #DiploHack challenge exploring how youth and technology can disrupt conflict and empower nonviolent activism amidst the maelstrom of war.
  • Published op-eds, interviews and articles in major news outlets.
  • Delivered workshops.
Mapping the Artificial Intelligence, Networked Hate and Human Rights Project

With the support of Global Affairs Canada, this project aims to better understand the intersection between artificial intelligence and human rights, with a special focus on online hate. The project will facilitate dialogue between human rights experts and those working in the field of AI in order to increase knowledge sharing on how networked hate can be countered effectively.

In March 2018, MIGS and Tech Against Terrorism will hold a workshop on AI and human rights and launch the Data Science Network. The network will serve as a collaborative space for start-ups, big tech, academia and researchers, with the aim of using methods such as artificial intelligence (AI) and machine learning to build tools to prevent terrorist abuse of tech.

Tech Against Terrorism is a UN-mandated organisation helping tech companies confront terrorist exploitation of their services. The organisation has engaged with over 150 global companies and works with Facebook, Google, Twitter and Microsoft in the Global Internet Forum to Counter Terrorism. In 2017, Tech Against Terrorism launched the Knowledge Sharing Platform, a database helping companies protect their services from terrorist exploitation, at the United Nations headquarters.

© Concordia University