Free speech rights recently secured an important legal win against one of California’s overly broad deepfake laws. The case underscores the difficulty state legislators face in trying to regulate AI-generated content without infringing on constitutionally protected speech.
The California law, Assembly Bill 2655, would have required social media platforms to remove or label “materially deceptive” AI-generated political content near elections. Elon Musk and X, formerly Twitter, sued, and a federal judge just struck down the law, ruling it was preempted by Section 230 of the Communications Decency Act, which protects online platforms from liability for user-generated content.
Last year, the state enacted two political deepfake laws in the lead-up to the presidential election. AB 2655, the subject of Musk’s lawsuit, would have mandated that online platforms act against deceptive political deepfakes. The other law, Assembly Bill 2839, would have banned the creation and distribution of any political deepfakes depicting a candidate “doing or saying something that the candidate did not do or say” in the 120 days before and 60 days after an election. A federal judge blocked that law for lacking protections for parody and satire, which are essential elements of free speech.
While the court rulings protecting free speech were correct, it is important to note that some concerns about deepfakes are well-founded. AI-generated media can depict people saying or doing things they never did, leading to reputational damage, misinformation, and defamation. This includes non-consensual, sexually explicit content known as “revenge porn.”
In 2024, California updated its existing laws to explicitly outlaw the creation and distribution of AI-generated revenge porn, giving victims and law enforcement stronger tools to address harms.
Rather than trying to overly regulate this fast-evolving technology in ways that restrict protected speech rights, California’s best course lies in leveraging existing legal tools. The state’s defamation, fraud, privacy, and right of publicity laws already provide strong remedies for victims of harmful deepfakes.
Being falsely depicted in a deepfake spread online can be painful and difficult, but victims should document the deepfake and report it to the hosting platform. Major social media sites such as Facebook, Instagram, X, and YouTube all have reporting tools for manipulated content, and reporting such posts can help curb their spread. If the deepfake is defamatory, invades privacy, or causes emotional distress, individuals can pursue legal remedies under existing laws covering defamation and privacy violations. Non-public figures who sue for defamation need only prove negligence in court.
Key tools for addressing deepfakes, and particularly the potential spread of misinformation, are California’s existing political advertising disclosure laws. State law requires that any political ads created or distributed by committees containing AI-generated or substantially altered images, audio, or video include a clear and conspicuous disclosure stating that the content has been altered using artificial intelligence. The law provides an exception for ads that use AI only in the editing process. Like other disclosures in political ads, deepfake disclosures are meant to inform voters of what they are viewing and provide greater transparency. In this way, California’s current framework is already well-equipped to address concerns over misinformation in AI-generated campaign material.

Technology can also help. AI detection tools are increasingly good at identifying fake content, and media literacy initiatives could further help the public recognize and question manipulated media. Rather than holding online platforms broadly liable, which risks over-removal of legitimate content and threatens free speech protections, the state’s policy should focus on empowering users and institutions to address abuses directly. Liability should be imposed on the bad actors who create and distribute illegal deepfakes, rather than on the platforms that host third-party content.
Like any new technology, AI-generated media can be used for both good and ill. Some of the risks of deepfakes are real and concerning. But rather than implementing broad mandates that violate the First Amendment, the state should enforce existing laws that can protect Californians without sacrificing core liberties.
Richard Sill is a technology policy analyst at Reason Foundation.