
AI in trial: Legal challenges of generative technologies

23 July 2025

by Vindhya S. Mani and Geethanjali KV

Introduction

As generative artificial intelligence rapidly changes how material is created and consumed, Courts worldwide are increasingly called upon to adjudicate complicated cases at the nexus of AI and intellectual property law. The legal issues raised by AI systems are both novel and extensive, ranging from image-generation tools that mimic artistic styles to large language models trained on copyrighted materials. This article, in two parts, summarises some of the most significant court rulings and ongoing legal disputes across major jurisdictions involving both textual and visual works, where Courts have dealt with the effects of AI-generated content in the context of copyright protection and infringement. The aim is to explore the changing contours of global jurisprudence, in an era where the distinction between human and machine creativity continues to blur, by looking at how various legal systems are dealing with these new concerns.

PART 1: AI and visual content

This part explores some of the important cases across jurisdictions in which generative AI's role in creating or altering visual content has sparked disputes over copyright infringement, with emphasis on how Courts are defining the legal limits of AI-generated imagery.

The Disney complaint

As Artificial Intelligence ('AI') models become more popular and accessible to the public, the question of copyright infringement through AI-generated works has become more relevant than ever before. An important recent development in this regard is the complaint ('Disney complaint') filed before the District Court of California by Disney Enterprises, Inc. et al. ('Plaintiffs') against Midjourney, Inc. ('Midjourney'), a San Francisco-based generative AI platform and service provider, alleging that the AI image generator committed both direct and secondary copyright infringement by using Disney's copyrighted works, including iconic characters like Darth Vader and Spider-Man. The Plaintiffs claim that Midjourney's service functions like a 'virtual vending machine' for unauthorized reproductions of their protected works. It is further contended that a user can, through specific text prompts, generate virtually any desired image, a capability alleged to have been used to create the unauthorized reproductions in question.

The complaint essentially asserts that Midjourney's conduct is wilful, ongoing and harmful, arguing that it undermines the incentives provided under the copyright law of the United States of America and threatens the economic foundation of the entertainment industry. The Plaintiffs also accuse Midjourney of using their creative works without permission to promote its services and sell subscriptions, earning millions of dollars in revenue. Despite numerous cease-and-desist letters and suggestions to adopt technological measures, Midjourney allegedly chose to ignore them. Through the present complaint, the Plaintiffs seek injunctive relief, a jury trial and damages to stop what they describe as large-scale intellectual property theft.

While the dispute between Disney and Midjourney is at a nascent stage and the Court is yet to consider the matter fully, one can only await the colossal unravelling of the legal dispute and its potential implications. At this juncture, this article takes stock of previous encounters between the digital disruption of this era, i.e., AI, and copyright infringement, with the following objectives:

Pattern Recognition: To identify recurring legal and ethical challenges in the use of AI-generated content;

Comparative Evaluation: To assess how different jurisdictions have responded to similar copyright disputes involving AI.

China

As of June 2025, China has thus far been the only jurisdiction where an AI platform has been held liable for copyright infringement. In February 2024, the Guangzhou Internet Court ('GIC') ruled in favour of Shanghai Character License Administrative Co. ('SCLA') against an unnamed generative AI platform found to have produced images substantially similar or identical to Ultraman, a Japanese superhero character. The GIC held that the outputs infringed the rights of reproduction and adaptation of the plaintiff, SCLA, which is the official Chinese licensor of Ultraman.

The GIC further held that generative AI providers are liable for copyright infringement when their systems generate infringing content, especially when users prompt the reproduction of well-known characters. The GIC also criticized the provider for failing to include proper risk warnings, labels indicating that an image is AI-generated, and complaint and reporting mechanisms, all of which are requirements under China's 2023 Interim Measures for the Management of Generative AI Services. Given a fact situation similar to that in the Disney complaint, it will be interesting to note the differences in approach between the USA and China, the dominant technological powers shaping the global AI landscape.

Not a first for Midjourney in the US!

Midjourney and other AI image generators have faced claims of copyright infringement in the US earlier, as seen in Andersen v. Stability AI. In that case, three visual artists, namely Sarah Andersen, Kelly McKernan and Karla Ortiz, filed a putative class action complaint, also before the District Court of California, alleging direct and induced copyright infringement, violations of the Digital Millennium Copyright Act, 1998 ('DMCA'), false endorsement and trade dress claims based on the outputs of the defendants' products: Stable Diffusion and DreamStudio of Stability AI; Midjourney, a product of Midjourney Inc.; and DreamUp, a product of DeviantArt.

The Californian District Court issued an order granting in part and denying in part the defendants' motions to dismiss the first amended complaint. The motions to dismiss the claims of direct copyright infringement and the claims under the Lanham Act, 1946 (governing trademarks) were denied, while the motions to dismiss the DMCA, unjust enrichment and breach of contract claims were granted. As the matter is sub judice, it is a legal cliffhanger, and only time will tell which side the scales will tip!

Not a first for Stability AI!

It appears that Stability AI has been a popular defendant amongst plaintiffs not just in the US but also in Europe. This time the plaintiff/complainant is the visual media company and stock image supplier, Getty Images Holdings, Inc. ('Getty'). Getty filed two suits against Stability AI, one in the US before the District Court of Delaware and one in the United Kingdom before the High Court, both titled Getty Images v. Stability AI.

In the UK, Getty argues that the case should be treated as a straightforward IP-enforcement case and that licensing content is 'critical to AI's success', making the case for 'use with payment' rather than an AI-focused argument. In its written defence, Stability AI has emphasized that the model was trained on computers outside the UK and contended that only a small portion of Stable Diffusion's outputs resembled images available on Getty. All of Getty's copyright and database right claims survived pretrial motions to dismiss in the UK, so the High Court will fully consider whether Stability's use of the images falls outside the UK copyright and database regimes, making this part of the ever-expanding legal waiting room! It is also to be noted that, as of 26 June 2025, Getty Images has dropped its claim of direct copyright infringement against Stability AI, citing 'pragmatic' reasons such as the lack of knowledgeable witnesses and evidentiary challenges. The only claims that now remain are secondary copyright infringement, trademark infringement and passing off.

In the parallel US lawsuit filed in 2023, Getty makes largely the same assertions: that Stability AI copied over 12 million images available on Getty without permission to build its AI model. The US complaint also asserts trademark dilution, as Stable Diffusion sometimes reproduces Getty's watermark in AI-generated art. Stability has responded with procedural motions seeking to dismiss or transfer the case, which have not been heard yet. In the UK, Getty can assert its unique database right and appeared to have a considerably favourable case, as there is no broad 'fair use' exception under UK law; however, along with the primary copyright infringement claim, the database right claim has also been abandoned. Regardless, Stability AI's defence in the UK will focus on jurisdictional limits or safe-harbour provisions rather than a general copyright exemption. In the US, by contrast, there is no database right, and the dispute will hinge on whether Stable Diffusion's use of Getty's images is transformative and fair.

German Bench, European Code

The European Union's approach to AI training has been different from that of the US, the UK or China, as seen in the German case of Robert Kneschke v. LAION e.V. Photographer Robert Kneschke initiated legal proceedings before the Hamburg District Court, Germany, against LAION e.V. ('Large-scale Artificial Intelligence Open Network'), a non-profit organisation known for creating vast datasets used in AI training. Kneschke claimed that LAION's 'LAION-5B' dataset, which is often used as such by other LLM developers, included his photographic images without his consent.

The Hamburg District Court analysed Sections 44b and 60d of the German Copyright Act, 1965, which are modelled on Articles 4 and 3, respectively, of the EU Directive on Copyright in the Digital Single Market. While an opt-out mechanism exists under Section 44b and Article 4 for most cases of AI training, no such opt-out is provided for text and data mining (TDM) for scientific purposes. In this case, the Hamburg District Court found that LAION's creation of the dataset, which establishes correlations between text and images, constituted a scientific purpose, and hence copyright infringement could not be established. Interestingly, the Hamburg District Court also observed that this scientific purpose would be fulfilled even if the data were later used for commercial purposes.

Japan’s legal code backs the bots

The Japanese approach can possibly be considered the most pro-AI amongst all the countries mentioned above. Under Article 30-4 of the Japanese Copyright Act, 1970, the training of generative AI on copyrighted material is generally permitted, provided that such use does not unreasonably prejudice the interests of the original creator. This creates a relatively flexible legal environment for AI developers in Japan, but it also creates a loophole.

According to the interpretation issued by the Japanese Agency for Cultural Affairs, mere imitation of a general aesthetic or style does not constitute infringement unless protected elements of creative expression are incorporated. However, it also states that, where such a 'style' uses identifiable characters and creative choices, the mimicry may amount to copyright infringement. This is similar to the approach under French intellectual property law. Thus, generating an image in the 'Ghibli style' would not necessarily constitute infringement unless specific elements such as characters or sets are used, which has enabled the widespread trend of 'Ghiblification'.

Thaler v. Perlmutter

The question of whether copyright protection can be granted to artwork autonomously generated by AI may prove to be an important one in the future, considering the advancement in image-generation software today. This question was dealt with by the US Court of Appeals for the District of Columbia Circuit in Thaler v. Perlmutter, which affirmed that the US Copyright Act of 1976 does not provide copyright protection for works entirely generated by AI, on the rationale that the 'author' of a work must be human. While the Plaintiff in this case did not argue that he should be considered the author because he created the machine, this line of argument may be raised in future cases. Moreover, the question of how copyright protection is afforded to works partially created by AI with human input still remains open.

PART 2: AI and textual content

Several lawsuits have been filed against AI developers, particularly pertaining to claims of copyright infringement arising from the training of models on copyrighted data. The cases discussed in this section are at different procedural stages, but all pertain to the infringement of textual works.

Doe v. GitHub

A group of anonymous plaintiffs filed this putative class action against GitHub, Microsoft and OpenAI before the US Northern District of California Court, alleging that the defendants used the plaintiffs' copyrighted materials to train and create Codex and Copilot. Codex is the OpenAI model that powers GitHub's AI coding assistant, Copilot. Each of the plaintiffs alleged that Copilot does not comply with the open-source licenses governing the plaintiffs' code stored on GitHub.

The original causes of action included DMCA violations, breach of contract claims and multiple torts. Many consider this to be one of the first class-action copyright cases against an LLM provider, though it has never included any claims of direct or indirect copyright infringement. While the Court dismissed a majority of the plaintiffs' claims due to their inability to prove an identical match, it has now sent the case to the Ninth Circuit to decide whether the programmers needed to show an identical match in the first place. The Ninth Circuit's decision is expected to resolve the divided opinions among district courts with regard to AI litigation. As of June 2025, only amicus briefs have been filed before the Ninth Circuit.

Dow Jones & Company, Inc. v. Perplexity AI, Inc.

The Plaintiffs, Dow Jones and the New York Post, have alleged before the Southern District of New York that the Defendant's AI platform, Perplexity, reproduces or makes use of their copyrighted news content in its output. They have also highlighted that the paid version of Perplexity, Perplexity Pro, allegedly offers verbatim reproductions of the Plaintiffs' content. Perplexity has argued that it only 'relies on publicly available factual information that is not protected by copyright law.'

It will be interesting to note the outcome of this case, as Perplexity is a retrieval-augmented generation (RAG) product. RAG techniques allow LLMs to retrieve information from external sources such as web pages and to provide links or citations, rather than relying solely on pre-trained knowledge. Perplexity has filed a motion to dismiss or, alternatively, to transfer the case to California, which is yet to be heard.

Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.

The Plaintiff, Thomson Reuters, alleged before the District Court of Delaware in 2020 that the Defendant, the AI company Ross Intelligence ('Ross'), unlawfully used copyrighted content from its Westlaw legal research platform to train an AI-powered research tool. The Plaintiff also alleged that Ross accessed Westlaw's headnotes and other content indirectly through a third party named 'LegalEase'. Ross raised the defence of fair use, claiming the use was purely for the purpose of innovation and competition.

In February 2025, the Court passed a summary judgment revising its earlier decision and held that Ross's use of Westlaw's content did not qualify as fair use, as the use was not transformative enough: the output of Ross's tool was not vastly different from the original content. However, in April 2025 the Court granted the Defendant's request for an interlocutory appeal and certified the fair use question to the Third Circuit. Questions or propositions of law that require instruction from a higher court may be referred to that court through the process of certification. The two questions specifically referred here are (1) whether the Westlaw headnotes are sufficiently original for copyright protection, and (2) whether Ross's use constitutes fair use. The question of whether headnotes are eligible for copyright protection has been dealt with in India earlier, in Eastern Book Company v. D.B. Modak, where headnotes were held to qualify for copyright protection due to the creativity involved in making them.

In re: Google Generative AI Copyright Litigation

A group of plaintiffs, including artists, have filed a complaint before the Northern District of California, alleging direct infringement arising from the Defendant Google's use of copyrighted works in its AI products, including Gemini. At this stage of the consolidated action, the Court has been analysing whether the Plaintiffs can be treated as a class for the purpose of these claims.

In April 2025, the Court granted a motion in favour of the Defendant to strike the class allegations from the amended complaint. The Court agreed with Google that the plaintiffs had proposed an impermissible fail-safe class definition, as determining class membership would require resolving the merits of each person's claim. However, the Court granted the plaintiffs leave to amend the complaint to correct the defect in the class definition. The primary question of infringement has not been addressed by the Court yet, and the action in the case has so far revolved around procedural issues.

Advance Local Media LLC v. Cohere, Inc.

The Plaintiffs, comprising several news and magazine publishers including Condé Nast and Vox, have sued the Defendant AI company, Cohere, for copyright infringement, trademark infringement and false designation of origin before the Southern District of New York Court. The suit claims that the Defendant has trained its series of large language models (LLMs) on the publishers' content without permission and uses that material to drive its consumer chatbot and other services. The trademark claims arise from the Plaintiffs' allegation that fabricated content has been attributed to their publications. The Plaintiffs also argue that they have licensed their content to other AI developers, such as OpenAI and Perplexity, through the Copyright Clearance Center, whereas the Defendant has used their content without any licence.

As of May 2025, Cohere has filed a partial motion to dismiss the claims of secondary copyright infringement and the trademark-related claims. Cohere has also argued that the plaintiffs failed to show instances of a real user accessing the infringing content, contending that the cited examples were generated by the Plaintiffs themselves using a demo tool in violation of Cohere's terms of service. The outcome of this motion will be particularly relevant to future cases, as many of them involve examples generated by the plaintiffs themselves.

Bartz v. Anthropic PBC

In this case, filed before the US Northern District of California Court, the Plaintiffs, who are authors, alleged that the Defendant company, Anthropic, committed copyright infringement by copying their books without permission, through pirated sources and the destructive scanning of purchased copies. The main contention was that the reproduction of their books in order to train AI systems was unauthorized and amounted to copyright infringement.

In its summary judgment dated 23 June 2025, the Court held that using copyrighted works to train LLMs, without reproducing them in outputs or distributing them, qualifies as a 'transformative use'. The Court likened the way the Defendant's AI, Claude, learns from books to human reading and writing. A key consideration was that the copies would not be distributed, and the Court cited precedents such as Sony Betamax, wherein it was held that time-shifting entire TV shows for personal use qualifies as fair use and that manufacturers of recording devices like VCRs are not liable for contributory copyright infringement.

While the judgment has been considered pro-AI for its observations on data training and fair use, there is an important distinction to note. The Court was firmly against the use of pirated books, holding that building a library using stolen books does not qualify as fair use. Fair use offers no protection to such unauthorized copying, and the Court thus found that Anthropic's use of pirated materials undermines this defence, especially given the alternatives at hand. The current order specifically denies summary judgment for Anthropic on its contention that the pirated copies must be treated as training copies, and matters relating to damages and liability remain reserved for a future judgment.

In re: OpenAI, Inc. Copyright Infringement Litigation

This Multi-District Litigation (MDL) is a centralized action, now transferred to the Southern District of New York, against the common defendants OpenAI and Microsoft, as the constituent cases involve common questions of law. The twelve consolidated cases primarily involve plaintiff groups such as authors and news publishers, who allege that their copyright-protected works have been used to generate outputs by the defendants' AI platforms.

While the Court has granted OpenAI's motion to dismiss the Plaintiffs' unfair competition claims, it remains to be seen whether the plaintiffs can file a consolidated complaint against both defendants, OpenAI and Microsoft. The defendants have referred to the decision in Doe v. GitHub (discussed hereinabove) to argue for a quicker ruling on the motion to dismiss, and the appellate ruling in that case is expected to have a significant bearing on this one.

Kadrey v. Meta Platforms

This case, filed before the Northern District of California Court, involves some of the Plaintiffs from In re: OpenAI, Inc. Copyright Infringement Litigation. Here, the Plaintiffs have filed a similar complaint against Meta, alleging that Meta's unauthorized copying of the plaintiffs' books for the purpose of training its LLM constitutes copyright infringement. On 26 June 2025, the Court delivered a partial summary judgment in favour of Meta on the defence of fair use.

Analysing the four-factor test for fair use, the Court held the use to be transformative, as LLMs are designed to serve functions vastly different from those of books. While the use is commercial in nature, District Judge Vince Chhabria noted that this did not outweigh the transformative character of the use.

While discussing the second factor, which concerns the nature of the copyrighted work, Judge Chhabria observed that Meta draws on the statistical patterns in the works, while also acknowledging that those patterns are the result of protected expression, thereby implying that Meta's use is fair. While dealing with the fourth factor, which addresses the plaintiffs' argument that the market for licensing books for AI training was harmed, the Court stated that the Plaintiffs do not have the right to monopolise that market.

In granting Meta the defence of fair use for its conduct, the Court may have arrived at a legally inconsistent position. The analysis is filled with speculation and logical fallacies, such as the misreading of the Oracle judgment and the claim that statistical relationships are essentially protected expressions. Additionally, Judge Chhabria's suggestion that Meta's outputs may compete with human-authored works runs contrary to his earlier observation that Meta's outputs do not pose any significant threat to the market. As this is only a summary judgment, it may require a thorough and careful reconsideration at the stage of the final decision.

ANI Media (P) Ltd. v. OpenAI Inc.

The Indian Courts' approach towards copyright infringement through generative AI has not crystallized yet, owing to a lack of jurisprudence on the issue. In ANI Media (P) Ltd. v. OpenAI Inc., news publisher ANI Media sought an interim injunction before the Delhi High Court against the unauthorized use and reproduction of its copyrighted work by OpenAI through its large language model, ChatGPT. This will be the first Indian case to examine the contentious issue of whether the use of publicly available copyrighted data for AI training constitutes copyright infringement.

The primary question asked by the Delhi High Court was whether using copyrighted news articles to train ChatGPT can fit within India’s fair-dealing exception under Section 52(1)(a) of the Copyright Act, 1957 (‘Section 52’).

While this case is yet to be decided, there have been some positive developments in the interim. In February 2024, the then Union Minister of State for Commerce and Industry, Shri Som Parkash, stated that India's existing copyright laws were adequate to address issues arising from AI-generated works and inventions. He also clarified that there was no proposal under consideration to introduce a separate legal regime specifically for AI-generated content. The Minister further noted that users of AI-generated works must obtain appropriate permissions if their usage goes beyond the fair dealing provisions under Section 52. However, the Ministry of Commerce set up a panel of eight experts in April 2025 to examine issues relating to AI and their implications for India's copyright law. This is a welcome move from the Ministry, considering cases such as ANI v. OpenAI and its earlier position. A clear policy direction and better interpretation of existing laws will go a long way in ensuring a strong IP ecosystem around AI-generated works.

Conclusion

As copyright holders increasingly seek legal recourse to protect their works from generative AI outputs, the true implications of this significant moment in global discussions on intellectual property will unfold over time. Prominent legal battles like Disney v. Midjourney and Bartz v. Anthropic PBC highlight essential issues relating to AI, authorship, originality and fair use. Legal systems worldwide have shown varied reactions, ranging from a more lenient approach in Japan to more cautious or restrictive measures in other jurisdictions. This article has explored not only cases regarding visual media but also conflicts over the use of copyrighted text in AI training and its outputs, further emphasizing the intricate nature and breadth of the issue. These developments highlight the increasing need for cohesive, unified and progressive copyright regulations that can reconcile the demands of innovation with the rights of human creators. In the end, it remains to be seen whether the legal system can adapt swiftly and thoughtfully enough to match the rapid development of generative AI.

[The authors are Partner and Associate, respectively, in the IPR practice at Lakshmikumaran & Sridharan Attorneys]
