What to know
- San Francisco is taking legal action against 16 AI-powered “undressing” websites.
- These sites create non-consensual nude images of women and children.
- The lawsuit aims to shut down the websites and hold operators accountable.
In a bold move against digital exploitation, San Francisco’s city attorney is cracking down on websites that use artificial intelligence to create fake nude images. David Chiu announced a lawsuit targeting 16 popular “AI undressing” platforms that have sparked controversy and concern.
These websites allow users to upload photos of clothed individuals, then use AI to generate realistic nude versions without consent. The technology has been misused to create explicit images of both adults and minors, raising serious ethical and legal questions.
According to Chiu’s office, the targeted sites received over 200 million visits in just six months this year. This staggering number highlights the widespread nature of the problem.
The lawsuit accuses the website operators of violating multiple laws, including those prohibiting deepfake pornography, revenge porn, and child exploitation. It seeks to shut down the platforms and impose hefty fines.
Chiu expressed his horror at the exploitation women and girls have endured, stating on X:
This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation.
This is a big, multi-faceted problem that we, as a society, need to solve as soon as possible.
— David Chiu (@DavidChiu) August 15, 2024
The legal action comes amid growing concern over AI-generated deepfakes. High-profile cases, like fake nude images of Taylor Swift circulating online, have brought attention to the issue. However, countless ordinary individuals have also fallen victim to this technology.
One troubling incident involved AI-generated nude images of 16 middle school students in California. Such cases demonstrate how this technology can be weaponized for bullying and harassment, especially among young people.
Deepfake pornography poses serious risks to victims' mental health, reputations, and future prospects. Once these images begin to spread online, tracing their origin becomes even more difficult.
San Francisco’s lawsuit represents a significant step in combating the misuse of AI to create non-consensual intimate imagery. It also underscores the wide gap between rapid advances in AI technology and the legal and ethical frameworks struggling to keep pace.
The city’s action could set a precedent for how other jurisdictions address this growing threat. It also sends a clear message that the creation and distribution of non-consensual deepfakes will not be tolerated.
As this legal battle unfolds, it highlights the urgent need for comprehensive regulation and ethical guidelines in the rapidly evolving field of artificial intelligence.