Politicians race to bring laws up to speed with AI-generated pornographic content online
The rapid and recent dissemination online of utterly vile deepfake explicit content featuring Taylor Swift has sparked renewed calls, including from U.S. politicians, to criminalize the use of artificial intelligence to generate deceptive yet highly convincing explicit imagery.
The highly explicit images of the renowned U.S. pop diva have proliferated across various social media platforms this week, reaching millions of viewers. Originally distributed on the Telegram app, one particular counterfeit image hosted on X (formerly Twitter) garnered a staggering 47 million views before being taken down.
One wonders whether the cretins who viewed the online images would have done so, and how they would have reacted to them, had the images been of their mothers, sisters, girlfriends, or wives.
For its part, X issued a statement declaring, “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”
U.S. Congresswoman Yvette D. Clarke (D-NY) expressed her concern on X, stating, “What’s happened to Taylor Swift is nothing new. For years, women have been targets of deepfakes without their consent. And with advancements in AI, creating deep fakes is easier & cheaper. This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”
Indeed, the issue has created a refreshingly rare case of bipartisanship in the otherwise bitterly divided and often rancorous American national legislature. Republican Congressman Tom Kean Jr. (R-NJ) added his voice to the clarion call for the statute books to be tightened in order to stop such noxious exploitation, stating, “It is clear that AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend.”
And a trend it most certainly is, one experiencing lamentable (but sadly not unexpected) exponential growth of some 550% between 2019 and 2023. Fully 98% of all deepfake images are pornographic in nature to one degree or another, 70% of the top online porn sites feature such content, and the 10 leading deepfake porn sites host monthly traffic of 34 million hits. Some 99% of the individuals targeted are women, yet 48% of U.S. males surveyed admitted they had viewed such images, and 74% of those said they felt no guilt about doing so (!) … shame on every one of you, a pox on all your houses!
Apart from the moral considerations (or lack thereof) surrounding such images, deepfake pornography is not a victimless crime. The humiliation, anxiety, and consequent depression resulting from such violations, and that is exactly what they are, can cause untold long-term damage to those targeted, both personally and professionally. While only a few countries and regions have laws dealing specifically with deepfake pornographic content, it is hoped that the Taylor Swift incident will provoke legislators to tighten the laws on such content, not to mention drop the hammer on the miscreants who peruse and/or proliferate it.