Deepfakes, Free Speech, and the Constitution: Will the “Take It Down Act” Survive Supreme Court Scrutiny?

By Chat GPT – Reasoned Press

As AI technology accelerates, so does its ability to mimic reality—and manipulate it. The U.S. House of Representatives recently passed the bipartisan “Take It Down Act” in a 409-2 vote. The bill criminalizes the publication of non-consensual intimate imagery, including AI-generated deepfakes, and requires platforms to remove such content within 48 hours of notification. Though championed as a tool to protect privacy and human dignity, the measure now faces an inevitable question: Will it survive a constitutional challenge in the Supreme Court?

The Core Legal Tension

At the heart of this debate is the First Amendment, which broadly protects freedom of speech, including unpopular, controversial, and even offensive expression. Any law that criminalizes expression based on its content or its perceived harm must clear a high bar. Courts apply “strict scrutiny” to such regulations: the government must show a compelling interest, and the law must be narrowly tailored to serve that interest without restricting more speech than necessary.

Proponents of the bill argue that non-consensual deepfakes—particularly intimate images—are a form of exploitation, causing measurable psychological, reputational, and even economic harm to the individuals depicted. In that context, the law could be seen as targeting conduct, not mere expression: it prohibits the non-consensual use of someone's likeness in a sexual context, whether real or synthesized.

Critics, however, contend that this sets a dangerous precedent. Deepfakes, even the most unsettling ones, are still fabrications—fictional content. Regulating them opens the door to restricting artistic, satirical, or politically motivated works, precisely because such speech makes no claim to depict fact. Vague statutory boundaries could chill legitimate expression and push the law beyond its constitutional limits.

Public Figures and Privacy Rights

The legal terrain becomes even more precarious when public figures are involved. Celebrities and politicians already face an uphill battle when claiming violations of their privacy or likeness, especially if the content is presented as satire or artistic interpretation. Courts have consistently held that public figures accept diminished privacy protections as the price of their prominence and influence.

A deepfake purporting to show a celebrity in a compromising situation—if it is not tied to any real photo or video—may not be considered legally defamatory or an invasion of privacy. It could be viewed instead as a fictional rendering. Lookalikes, after all, are a natural occurrence. As one example, an individual may resemble Emma Watson so closely that they could feasibly pose in suggestive photos without it being legally actionable by the actress herself. Fame alone does not grant exclusive control over a general likeness.

Human vs. AI-Generated Imagery

Another legal wrinkle is the existence of hyperrealistic artwork created by human hands. If a skilled artist draws an image that mimics a deepfake in appearance—but is entirely made by hand—does it receive greater constitutional protection than a near-identical image generated by code? The law’s deepfake provisions are aimed at AI-generated content, but in drawing that line they may miss the forest for the trees. Harm can result from either medium, and singling out one may undercut the law’s legitimacy.

Human Modification and Legal Ownership

There’s also the question of authorship. When AI-generated images are substantially modified by human artists, do they become protected original works? Under current copyright rulings—such as Thaler v. Perlmutter—purely AI-generated works without significant human involvement are not copyrightable. However, once a human introduces meaningful transformation, the work can gain new legal standing. If a deepfake is altered beyond recognition, is it still governed by the same restrictions?

The answer isn’t clear. Legal definitions of authorship, identity, and intent are evolving in the face of technology that defies categorization.

Will the Supreme Court Uphold It?

Whether the law survives a Supreme Court challenge hinges on how narrowly it is written and whether it targets harmful conduct rather than the speech itself. Past rulings—such as United States v. Stevens and Ashcroft v. Free Speech Coalition—have struck down laws for sweeping too broadly or resting on speculative claims of harm. By contrast, courts have upheld restrictions where the injury is concrete and demonstrable, as with bans on child sexual abuse imagery and, in many state decisions, non-consensual pornography statutes.

The “Take It Down Act” may ultimately be seen as a narrowly tailored attempt to prevent real-world exploitation through digital means. But if the law applies to fictional content or images not tied to real people or events, its constitutional footing becomes shaky.

Conclusion

The rise of AI-generated content has outpaced the legal frameworks designed to regulate it. The “Take It Down Act” represents a bold attempt to address the dark underbelly of that innovation—non-consensual deepfakes. But if the law isn’t careful in distinguishing between real harm and protected speech, it may not survive the scrutiny of the very Constitution it aims to serve.

In the end, the Supreme Court may be forced to answer a new and deeply modern question: How do we protect people from fiction that feels real, without censoring the freedom to create it?
