On August 8, a member of the HDFF team attended a webinar hosted by the American Foreign Policy Council, titled "Radicalization, Counter-radicalization, and AI." The webinar discussed a policy paper by Priyank Mathur, the founder and CEO of Mythos Labs, Inc.; Clara Broekaert, a Research Fellow at The Soufan Center; and Colin Clarke, a terrorism expert and Senior Research Fellow at The Soufan Center. The presentation examined the threats and opportunities that AI presents in terrorism and counterterrorism (CT).
Technology is a "force-multiplying tool" for terrorists. By leveraging advanced technologies, terrorist organizations can project an image of greater power and capability than their actual operational capacity warrants. These advanced technologies include drones, communication technologies, and AI.
Terrorists and AI
Mathur pointed out that a year ago, the conversation surrounding terrorists and AI would have been largely theoretical. Today, however, we can point to specific observations of how terrorists are already using AI to enhance their capabilities. He laid out three recurring examples.
- Synthetic media. Terrorists create fake media by manipulating images with AI, spreading realistic disinformation to the public. Hamas, for example, has exploited the tragedy in Gaza by creating fake media about Israeli attacks and victims.
- Deepfakes. Deepfakes are a more advanced form of synthetic media: videos, images, or audio generated with AI to appear real. In many cases they are not initially distinguishable from reality, which makes it difficult for viewers, news agencies, and governments to determine their validity. The Islamic State Khorasan Province (ISKP) uses AI-generated deepfake news anchors to spread propaganda and disinformation on its news platform. Other extremist groups have created AI-generated nasheeds, a form of song and hymn within Sunni Islam, to spread extremist ideologies.
- Transcription and translation. Rapid AI-powered translation has dramatically increased the reach of terrorist groups. Islamist terrorist organizations based in the Middle East can quickly and effectively translate materials from Arabic into other languages, expanding their audience to sympathizers living outside the region. This has been particularly prevalent with Central Asian languages, such as Tajik.
AI is free, easy to use, and quickly becoming ubiquitous, and terrorists are quickly learning how it can enhance their capabilities. The Islamic State and al-Qaeda, for example, have both assembled AI support guides for their members.
Intersection of AI and Counter-extremism
According to the authors, AI is also a tool for countering violent extremism (CVE). Just as terrorists use AI to reach a broader audience, CT actors can use AI as a de-radicalization tool. There has never been an accurate way of testing CVE messaging before deployment, but large language models have now been developed to simulate terrorist groups and individuals in a closed system, allowing practitioners to test counter-narratives and enhance CVE capabilities. These AI systems could eventually be deployed in rehabilitation and de-radicalization programs. AI is as much a force multiplier for CT actors as it is for terrorists; according to the authors, it simply requires sufficient attention from policymakers moving forward. They argue that governments should increase accountability for big-tech companies regarding extremist content, which requires developing comprehensive AI policies as well as public-private partnerships to learn how to combat extremist ideology on social media.
Conclusion
Moving forward, AI will only become more prevalent among terrorist actors; it is an easy, effective, and cheap tool. The authors stressed that CT actors must respond by developing a deeper understanding of AI technologies, which will require a significant, but necessary, investment of both time and resources.