The Dangers of Deepfakes
- Luke Roberts
- Jun 1
- 1 min read
Updated: Jul 13

Generative AI has the potential to be misused in ways that exploit people’s bodies without their consent.
One alarming example is the creation and distribution of manipulated, sexually explicit images made from someone’s likeness. This kind of misuse can cause serious emotional harm, violate personal privacy, and damage reputations, especially for young people and minors.
With the newly passed “Take It Down” Act, the legal consequences are now more serious than ever before. It’s crucial to raise awareness about these risks and to establish clear ethical guidelines and protections to prevent the exploitation of individuals through AI-generated content. The data below reveal a startling number of deepfakes being generated by school-age children, including high schoolers:
• As of October 2024, the National Center for Missing & Exploited Children (NCMEC) receives approximately 450 reports per month related to AI-generated child sexual abuse material (CSAM).
• In 2023, NCMEC’s CyberTipline received about 4,700 reports involving AI-generated CSAM, a significant increase from previous years.
• A 2023 report by Thorn found that 11% of American children aged 9 to 17 knew of a peer who had used AI to generate nude images of other kids.
• A survey by the Center for Democracy and Technology revealed that 15% of high school students reported hearing about a deepfake depicting someone from their school in a sexually explicit manner.