Trend Micro Research & UNICRI & EC3, 2020. — 80 p.
This joint report by Trend Micro, UNICRI, and Europol was initiated by a simple question: “Has anyone
witnessed any examples of criminals abusing artificial intelligence?” In answering this question, the
report provides a collective understanding of current and future malicious uses and abuses of AI.
AI promises greater efficiency and higher levels of automation and autonomy. A subfield of computer
science with many ties to other disciplines, AI is intrinsically a dual-use technology at the heart of the
so-called fourth industrial revolution. Because of this duality, while AI can bring enormous benefits to
society and help solve some of the biggest challenges we currently face, it could also enable a range of
digital, physical, and political threats. The risks and potential criminal abuse of AI systems therefore
need to be well understood in order to protect not only society but also critical industries and
infrastructures from malicious actors.
Based on available insights, research, and a structured open-source analysis, the report covers the
present state of malicious uses and abuses of AI, including AI-powered malware, AI-supported password
guessing, and AI-aided encryption and social engineering attacks. It also describes concrete future
scenarios, ranging from automated content generation and parsing, AI-aided reconnaissance, and smart,
connected technologies such as drones and autonomous cars to AI-enabled stock market manipulation,
as well as methods for AI-based detection and defense systems.
Using one of the most visible malicious uses of AI, the phenomenon of so-called deepfakes, the report
further details a case study on the use of AI techniques to manipulate or generate visual and audio
content that is difficult for humans, or even technological solutions, to immediately distinguish from
authentic material.
As the report speculates, criminals are likely to use AI to facilitate and improve their attacks by
maximizing opportunities for profit within a shorter period, exploiting more victims, and creating new,
innovative criminal business models, all while reducing their chances of being caught. Moreover, as
“AI-as-a-Service” becomes more widespread, it will lower the barrier to entry by reducing the skills
and technical expertise required to carry out attacks. In short, this further exacerbates the potential
for AI to be abused by criminals and to become a driver of future crimes.