The Dark Side(s) of Generative AI Images and Video

Introduction

Generative art has become a potent weapon for bad actors. From misleading propaganda and deepfake impersonations to copyright theft and fraud, malicious users are exploiting AI art to cause real harm and profit from deception.

Propaganda

AI-generated deepfakes and fabricated images have been used to spread false narratives. For example, during the Russo-Ukrainian War, a deepfake video circulated of Ukrainian President Volodymyr Zelenskyy appearing to surrender to Russia.

Non-consensual pornography

Another issue that has received widespread attention is deepfake pornography. Explicit images of unknowing people are being created and shared online, almost always without consent, to defame and humiliate them. The vast majority of deepfakes are pornographic in nature: Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.

There is also the issue of copyright infringement and plagiarism, which I touch on here.

Online scams

Finally, fraud, scams, and impersonation crimes have been supercharged by the advent of AI-generated images and deepfakes. Fraudsters can leverage realistic AI-generated faces and voices to create fake personas online, which are then deployed in scams. From impersonating business leaders to catfishing victims into romance or investment schemes, a creative bad actor now has far more tools at their disposal. For example, hackers have used Elon Musk's likeness in deepfake videos to run online scams.

Impact

The use of generative AI in this way further erodes the public's already fragile trust in digital media. It also poses serious challenges to legal, ethical, and regulatory frameworks around the world. A coordinated response among lawmakers, big tech companies, and researchers is needed to safeguard against these risks - but we aren't there yet.