A lawsuit pending before Judge J. Paul Oetken of the District Court for the Southern District of New York may soon underscore the need to rethink the legal remedies available for potential violations aided by generative Artificial Intelligence (“AI”).[1] As discussed previously on this blog, in July 2024, the U.S. Copyright Office issued the first of several planned reports on the intersection of copyright and AI. The Copyright Office’s report flags potential harms to certain interests from the use of digital replicas (i.e., videos, images or audio recordings digitally created or manipulated to realistically but artificially depict an individual). Those potential harms include, for example, the use of AI-generated sounds or images in place of singers or voice actors to produce sound recordings.[2] The report also describes existing legal frameworks at the state and federal level that address threats posed by digital replicas. In New York, those include recent amendments to the statutory right of publicity prohibiting certain uses of unauthorized digital replicas of deceased professional performers, and, at the federal level, the Copyright Act.[3] The Copyright Office’s report concludes that “new federal legislation is urgently needed” to address shortcomings in current state and federal law.[4]

Lehrman et al. v. Lovo, Inc.

On May 15, 2024, voice actors Paul Lehrman and Linnea Sage (together, the “Voice Actors”) commenced a proposed class action against Lovo, Inc. (“Lovo”), a startup company offering AI voice and text-to-speech generation. The Voice Actors allege that Lovo deceived them into providing recordings of their voices, from which it generated AI clones without authorization; Lovo then allegedly used those clones in its own advertising to attract new customers and raise millions in venture capital, and made them available for license to Lovo consumers.[5]

The Voice Actors assert multiple claims, including violations of the right of publicity provisions of the New York Civil Rights Law (which cover use of a person’s identity, including a person’s voice)[6] and of the U.S. Copyright Act.[7] More specifically, the Voice Actors claim that Lovo violated the New York Civil Rights Law by using their voices without written consent for advertising purposes, damaging their brands and leading consumers to use Lovo’s AI voice generator instead of hiring voice actors, thereby depriving the Voice Actors of compensation.[8] The Voice Actors also claim that Lovo violated their copyrights in the audio recordings of their voices,[9] including by using those recordings to train its generative AI[10] and by distributing “clones” (i.e., AI-generated digital replicas) of their voices.[11]

On November 25, 2024, Lovo moved to dismiss all claims in the action, accusing the Voice Actors of filing threadbare and otherwise insufficient claims that merely “try to tell a tale filled with pathos and the woes of artificial intelligence.”[12] On January 10, 2025, the Voice Actors filed their opposition to Lovo’s motion, arguing that their claims are “not about the potential dangers of the misuse of AI, but the actual damage to working people who have had their voices and brands hijacked . . . and have lost control of their brands and livelihoods.”[13] The parties’ respective arguments, particularly as to the Voice Actors’ claims under the New York Civil Rights Law and the Copyright Act, provide an opportunity to examine the potential limits of those laws in addressing the harms that AI-generated digital replicas pose.

The Voice Actors’ New York Civil Rights Law Claim

A key issue bearing on Lovo’s bid to dismiss the Voice Actors’ New York Civil Rights Law claim is the extent to which that statute covers use of AI-generated digital replicas. Lovo argues that the Voice Actors fail to state a claim because they allege only that Lovo used digital replicas of their voices (i.e., audio recordings digitally created to realistically imitate their voices), whereas, in Lovo’s view, the New York Civil Rights Law covers only use of audio recordings of their actual voices.[14]

Although New York amended the New York Civil Rights Law in 2020 (effective in 2021) to prohibit certain uses of unauthorized digital replicas of deceased professional performers, nothing in the statute prohibits use of digital replicas of living persons’ voices.[15] As the Copyright Office concluded in its July 2024 report on digital replicas, existing publicity laws are often “written too narrowly to cover all types of digital replica uses,” and some states—like New York—“restrict the right to limited groups of individuals.”[16]

Undeterred by the statute’s plain language, the Voice Actors argue that the right of publicity provisions of the New York Civil Rights Law apply to Lovo’s unauthorized use of AI-generated digital replicas of their voices because those digital replicas are “exact clones,” and therefore “recognizable as likeness of the complaining individual,” bringing the Voice Actors’ claim within the statute’s scope even apart from the 2020 amendment.[17] The Voice Actors rely on a 2018 decision, Lohan v. Take-Two Interactive Software, Inc.,[18] in which the New York Court of Appeals (the State’s highest court) found that “a graphical representation in a video game or like media may constitute a ‘portrait’ within the meaning of the Civil Rights Law,” but declined to apply the statute because the depiction at issue was not “recognizable” as celebrity plaintiff Lindsay Lohan.[19]

It will be interesting to see whether the Voice Actors can persuade Judge Oetken to rely on Take-Two, which predates the New York State Legislature’s amendments addressing unauthorized use of digital replicas in certain specific and limited circumstances.

The Voice Actors’ U.S. Copyright Act Claims

Lovo’s motion to dismiss also raises novel issues of federal copyright law. The Voice Actors base their U.S. Copyright Act claims on several alleged infringements, including (i) Lovo’s unauthorized distribution of “clones” (i.e., AI-generated digital replicas) of the Voice Actors’ voices; and (ii) Lovo’s unauthorized use of the Voice Actors’ recordings to train its AI generator.[20]

Lovo argues that it did not infringe the Voice Actors’ claimed copyrights in the audio recordings of their voices by distributing outputs that its generative AI created to sound exactly like those voices because, in Lovo’s words, “a clone or AI-generated voice imitating or simulating the sound of a voice in a sound recording is not protected.”[21] In making that argument, Lovo relies on the Copyright Office’s July 2024 report on digital replicas, which states that “[a] replica of [an individual’s] image or voice alone would not constitute copyright infringement.”[22]

In opposition, the Voice Actors also rely on the Copyright Office’s July 2024 report, quoting its statement that digital replicas “produced by ingesting copies of preexisting copyrighted works, or by altering them—such as superimposing someone’s face onto an audiovisual work or simulating their voice singing the lyrics of a musical work—may implicate those exclusive rights.”[23] However, the Voice Actors do not explain how Lovo’s alleged copyright violations are more analogous to the situation envisioned in the report (which, in context, concerns a copyright in the musical work itself, a work of authorship as the Copyright Act requires) than to the excluded categories of “an individual’s identity” or “their image or voice alone.” Beyond noting the Copyright Office’s concerns about potential harms posed by generative AI, and citing one case that did not involve an “exact copy” of the document at issue, the Voice Actors offer no other support for their claim to a copyright in the sound files that Lovo’s generative AI created to sound exactly like their voices. The Voice Actors ultimately appear to recognize the weakness of their copyright claim based on Lovo’s unauthorized distribution of AI-generated digital replicas of their voices, urging that, as a procedural matter, at the motion to dismiss stage, “a refined analysis of the copying is not required.”[24]

As to the Voice Actors’ copyright claim based on Lovo’s unauthorized use of their audio recordings to train its AI generator, Lovo argues that such use does not violate the Copyright Act because it is an “internal action” and also constitutes “fair use.”[25] The potential application of the fair use defense to copyright infringement claims based on the unauthorized use of copyrighted works to train AI is hotly contested in other ongoing litigations in U.S. District Courts in New York and California[26] and remains unsettled. Most recently, on January 14, 2025, Judge Sidney H. Stein of the District Court for the Southern District of New York heard oral argument in The New York Times Co. v. Microsoft Corp.,[27] one of the first cases to address the issue. Curiously, in their opposition to Lovo’s motion to dismiss, the Voice Actors virtually ignore their AI-training infringement claim and Lovo’s fair use defense to that claim.

As previously reported on this blog, the Copyright Office intends to issue in the first quarter of 2025 a report addressing the “ingestion of copyrighted works to train AI models, including licensing considerations and the allocation of potential liability.”[28] We are also seeing a push in proposed federal legislation to address the ingestion of copyrighted works to train AI models. For example, Senator Peter Welch of Vermont has proposed the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act, which would require AI developers to disclose to a copyright holder when that holder’s work was used to train generative AI.[29]

* * * *

We will continue to monitor developments at the cutting edge of generative AI and its intersection with existing copyright and other legal frameworks at the state and federal level.


[1] Lehrman et al. v. Lovo, Inc., No. 1:24-cv-03770 (S.D.N.Y. filed May 15, 2024).

[2] Copyright Office, Copyright and Artificial Intelligence, Part 1: Digital Replicas at 3 (July 31, 2024), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-1-Digital-Replicas-Report.pdf (“Report”).

[3] Report at 15-16 (discussing N.Y. Civ. Rights Law § 50-f(1)(a)-(b), (2)(b) (McKinney 2024)). 

[4] Report at 22.

[5] First Amended Complaint, Lehrman v. Lovo, Inc., No. 1:24-cv-03770-JPO at ¶¶ 56, 83, 85-93, 105-9, 120-28, 130 (S.D.N.Y. May 16, 2024) (ECF No. 22) (“Am. Complaint”).

[6] N.Y. Civil Rights Law §§ 50 & 51.

[7] 17 U.S.C. § 101, et seq.  In addition, the Voice Actors bring claims for breach of contract, unjust enrichment, conversion, Lanham Act violations, and violations of the New York False Advertising Act and New York Deceptive Practices Act.

[8] Am. Complaint ¶¶ 164-70, 182.

[9] Am. Complaint ¶¶ 289-91.

[10] Am. Complaint ¶ 302.

[11] Am. Complaint ¶ 302.

[12] Memorandum of Law in Support of Defendant’s Motion to Dismiss the Amended Class Action Complaint, Lehrman v. Lovo, Inc., No. 1:24-cv-03770-JPO at 2 (S.D.N.Y. Nov. 25, 2024) (ECF No. 28) (“Mot. to Dismiss”).

[13] Memorandum of Law in Opposition to Defendant’s Motion to Dismiss, Lehrman v. Lovo, Inc., No. 1:24-cv-03770-JPO at 2 (S.D.N.Y. Jan. 10, 2025) (ECF No. 33) (“Opp.”).

[14] Mot. to Dismiss at 10.

[15] Mot. to Dismiss at 10-11 (citing N.Y. Civil Rights Law § 50-f).

[16] Report at 12.

[17] Opp. at 9 (citing Onassis v. Christian Dior-New York, Inc., 472 N.Y.S.2d 254, 259 (N.Y. Sup. Ct. 1984); Young v. Greneker Studios, Inc., 175 Misc. 1027, 1028 (N.Y. Sup. Ct. 1941)).

[18] Opp. at 10-11 (citing Lohan v. Take-Two Interactive Software, Inc., 31 N.Y.3d 111 (N.Y. 2018)).

[19] Lohan v. Take-Two Interactive Software, Inc., 31 N.Y.3d 111, 122-23 (N.Y. 2018).

[20] Am. Complaint ¶¶ 291, 301-3.

[21] Mot. to Dismiss at 34.

[22] Mot. to Dismiss at 34 (citing Report at 17).

[23] Opp. at 32 (citing Report at 17).

[24] Opp. at 32.

[25] Mot. to Dismiss at 33-34.  Lovo relies on a 2019 decision, Yang v. Mic Network, Inc., 405 F. Supp. 3d 537 (S.D.N.Y. 2019), in which the District Court for the Southern District of New York considered application of the fair use doctrine in the context of a screenshot; that case did not involve AI training.

[26] See, e.g., The Intercept Media, Inc. v. OpenAI, Inc., No. 1:24-cv-01515-JSR (S.D.N.Y.) (in which the court recently permitted discovery to proceed); Concord Music Grp., Inc. v. Anthropic PBC, No. 5:24-cv-03811 (N.D. Cal.).

[27] No. 1:23-cv-11195-SHS-OTW (S.D.N.Y.).

[28] U.S. Copyright Office, Letter to Senators Chris Coons and Thom Tillis, and Representatives Darrell Issa and Henry C. Johnson at 2 (Dec. 16, 2024), https://www.copyright.gov/laws/hearings/US-Copyright-Office-Letter-To-Congress-Providing-Updates-On-Its-Artificial-Intelligence-Initiative.pdf.

[29] See Daniel Tencer, Music Industry Backs New ‘TRAIN Act’ Requiring Transparency in Materials Used to Train AI, Music Business Worldwide (Nov. 26, 2024), https://www.musicbusinessworldwide.com/music-industry-backs-new-train-act-requiring-transparency-in-materials-used-to-train-ai/.  The U.K. government has also proposed an opt-out copyright exception that would allow generative AI companies to train their models on copyrighted works, a model the U.S. could follow in the future.  See Dan Milmo and Robert Booth, UK proposes letting tech firms use copyrighted work to train AI, The Guardian (Dec. 17, 2024), https://www.theguardian.com/technology/2024/dec/17/uk-proposes-letting-tech-firms-use-copyrighted-work-to-train-ai.