
Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes

A Joint Report by UNICRI and UNCCT

New technologies, and artificial intelligence (AI) in particular, can be extremely powerful tools, enabling major advances in medicine, information and communication technologies, marketing, and transportation, among many other fields. However, they can also be used for malicious purposes when they fall into the wrong hands. The aim of this report - Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes - is to contribute to understanding the potential risk of AI falling into the hands of terrorists.

Although terrorist organizations have, to a certain degree, traditionally tended to employ various forms of "low-tech terrorism" such as firearms, blades and vehicles, terrorism itself is not a static threat. As AI becomes more widespread, the barriers to entry will be lowered, reducing the skills and technical expertise needed to employ it. The questions this report therefore strives to answer are whether – or perhaps better "when" – AI will become an instrument in the toolbox of terrorism and, if that occurs, what the international community might reasonably expect.

This report should serve as an early warning for potential malicious uses and abuses of AI by terrorists and help the global community, industry and governments to proactively think about what we can do collectively to ensure new technologies are used to bring good and not harm.

This report, developed jointly with the United Nations Counter-Terrorism Centre at the United Nations Office of Counter-Terrorism, has been made possible with the generous support of the Kingdom of Saudi Arabia.