
Who Owns a Voice? AI Cloning, Identity, and the Limits of IP

  • Writer: FIO Legal Solutions
  • 4 days ago
  • 5 min read

Author: Luiza Rey

In the wake of our previous article, No Author, Many Violations: AI-Generated Music and the Limits of Copyright and Personality Rights, this contribution advances the discussion by shifting the focus from authorship to identity.


[Image: A voice artist in a dark studio facing a wall screen displaying a glowing synthetic voice waveform, visualizing the conflict between human identity and AI voice cloning.]

It examines the legal challenges posed by voice-cloning technologies and questions whether traditional intellectual property frameworks are adequate to protect personal identity, dignity, and autonomy in the era of artificial intelligence.


What recent U.S. litigation signals about synthetic voices, why most federal IP tools fall short, and how EU personality rights and platform duties shift the analysis.


Generative AI now convincingly imitates human identity. Yet the law struggles to decide who controls it. In the previous article, we looked at how a viral, AI-generated track layered on a Taylor Swift song and a cloned Brazilian voice crystallized a practical problem facing creators and platforms: if a voice can be replicated perfectly, who owns it—and under what legal theory?


A 2025 decision from the U.S. District Court for the Southern District of New York offers one of the clearest signals so far: existing federal intellectual property law rarely protects AI-generated voice clones. Where protection exists, it often lies outside copyright and trademark and inside state-level rights of publicity—or, in the EU, within personality rights and platform regulation.


In Lehrman v. Lovo, Inc., two professional voice actors sued an AI text-to-speech company for creating and commercially exploiting synthetic versions of their voices. According to the complaint, Lovo solicited recordings under an assurance that they would be used only internally; one plaintiff later discovered his voice had been cloned after hearing a podcast narrated by a digital replica. Lovo allegedly marketed the voices to subscribers, promising outputs “practically indistinguishable” from real speakers and the ability to “clone to perfection.”


[Image: A professional voice actor in a recording booth, shocked to discover via his smartphone that his voice has been cloned by AI on a podcast without his consent.]

The claims were broad: false association and false advertising under the Lanham Act, copyright infringement, state publicity rights, contract breaches, and unfair competition. Several claims faced significant hurdles and were dismissed or narrowed.


Trademark (Lanham Act § 43(a))

The court recognized that a voice is not categorically excluded from trademark protection. But trademark only reaches features that function as source identifiers. Here, the plaintiffs were professional voice actors: their voices were tools of the trade, not brands with marketplace recognition tied to a single commercial source. Without secondary meaning or likely consumer confusion about origin or endorsement, the Lanham Act claims faltered.


The court warned that expanding trademark to any recognizable voice risks turning it into a general right of persona, a role trademark doctrine does not serve.


Copyright (17 U.S.C. § 114(b))

U.S. copyright does not protect a voice per se, nor the imitation of vocal characteristics. Section 114(b) limits protection of sound recordings to the fixation of particular sounds, not to independent recordings that simulate them. Even if an AI model trains on copyrighted recordings, the output does not infringe unless it reproduces protectable expression from the original fixation. Accurate mimicry is not copying under the statute. One narrow claim survived: the alleged use of an original recording in promotional materials beyond the license scope. The rest were dismissed.


[Image: A computer monitor displaying an "AI Voice Marketplace" interface where users can buy synthetic voices, illustrating the commercialization of digital identity.]

Federal IP law offers limited protection for voices. The right of publicity, the primary U.S. tool for controlling commercial exploitation of name, image, likeness, and often voice, exists almost entirely under state law. Protection varies widely: some states have explicit statutory coverage of voice; others rely on common-law misappropriation or consumer deception. The result is jurisdictional fragmentation: the same AI voice-cloning practice may be lawful in one state and actionable in another, with remedies spanning damages, injunctions, and disgorgement depending on forum and facts.


Recent filings, including Matthew McConaughey’s applications to register sound- and image-based marks with the USPTO, illustrate a pragmatic approach. Trademark registration does not grant ownership of identity in the abstract; it creates a commercial reference point. When AI-generated content suggests endorsement or origin, trademark law becomes an enforcement tool against confusion, especially in advertising, labeling, and platform commerce. It is a complement, not a substitute, for publicity rights.


Record labels have sued generative AI companies alleging that large-scale ingestion of sound recordings for training constitutes infringement. These cases will shape the legality of training practices and the contours of fair use or implied license defenses. But even a finding of infringement in training does not create exclusive rights over a voice as an identity attribute. The downstream question—who owns a voice—remains outside core copyright doctrine.


[Image: A compliance officer in a Brussels office holding a tablet displaying a "Synthetic Media Detected" alert, representing EU regulation and the Digital Services Act.]

EU legal systems emphasize personality rights, human dignity, and protection against identity misappropriation. In many Member States, a person’s voice is treated as an extension of personality, protected independently of copyright. Unauthorized synthetic use can trigger civil liability (including injunctions and damages) even without copyright infringement.


Concurrently, EU regulation trains its lens on intermediaries. Under the Digital Services Act, large platforms face heightened duties of diligence, transparency, and systemic risk mitigation. Obligations to act swiftly on notice and to prevent repeated dissemination of unlawful or misleading synthetic content reshape enforcement: liability can reach both creators and platforms that host, recommend, or monetize synthetic voice content. In the EU, AI voice cloning is less a narrow IP dispute and more a question of identity abuse, consumer deception, and platform compliance.


What Practitioners Should Do Now

For AI developers


  • Build consent-based pipelines: document informed consent, license scope, and revocation terms.


  • Track jurisdictional exposure: map training, deployment, and audience locations to state publicity regimes (U.S.) and personality rights (EU). 

  • Design provenance by default: watermarking, disclosure, and opt-out tools reduce platform risk and support notice-and-takedown compliance.


For performers and public figures


  • Architect identity rights: combine state publicity claims (U.S.) with targeted trademark registrations for signature voice tags and visual marks.


  • Contract for voice: negotiate recording licenses that bar cloning, prohibit derivative synthesis, and require audit trails. 

  • Monitor platforms: use takedown pathways and escalate where systemic risk or repeated dissemination persists.

 

Key Takeaways


[Image: A legal strategist at a glass whiteboard, connecting the terms copyright, trademark, and consent into a protection strategy for AI identity rights.]

  • A voice is not automatically protected by copyright. 

  • Trademark protects voices only when they function as commercial source identifiers. 

  • State publicity rights are the strongest U.S. protection against the exploitation of synthetic voices; however, their scope and enforcement vary significantly from state to state and are subject to important constitutional defenses.

  • EU Member States offer broader personality-based protection, including under the GDPR and AI-related regulation, with the DSA imposing stronger platform duties. 

  • Creators must protect identity proactively (through contracts, trademarks, and monitoring), not reactively after misuse.

  • Contract is king: many disputes are resolved through the contractual allocation of rights and licenses, as well as through clear restrictions on training and cloning activities.


There is no single legal answer to “Who owns a voice?” What exists is a mosaic: copyright limits, trademark constraints, state publicity rights, EU personality rights, and platform regulation. AI did not create this fragmentation, but it has made it unavoidable. Until more unified frameworks emerge, control over a voice will depend on context, jurisdiction, and how effectively identity can be linked to existing legal rights. Consent, not code, remains the dividing line.


This article provides general information and does not constitute legal advice. Outcomes vary significantly by jurisdiction and facts; practitioners should verify current case status and statutory developments as of January 2026.


By: Luiza Rey
