Recent Developments in IP Case Law on Generative AI (6th Update)
15 December 2025
In this sixth update on IP developments concerning generative AI, we highlight three pivotal developments across the UK, Germany, and the US. These cases underscore the different approaches courts are taking towards specific technical aspects of AI models – from the non-infringing nature of “model weights” in the UK to their characterization as an infringing “fixation” in Germany – and the massive financial risks associated with training AI on pirated sources.
UK
In November 2025, the High Court of Justice delivered its long-awaited judgment in Getty Images v. Stability AI, resolving one key question on secondary infringement while leaving the most critical copyright issues for Generative AI unresolved.
In a significant win for AI developers regarding imported models, the High Court rejected Getty’s claim of secondary copyright infringement. Getty argued that importing the pre-trained “Stable Diffusion” model into the UK constituted dealing in “infringing copies.” The Court dismissed this, ruling that the model’s “weights” (numerical parameters learned during training) do not constitute copies of the original training data because they do not store the visual information of the original works. The model was held to be the product of patterns learnt rather than a repository of the works themselves.
The practical effect of the judgment was dramatically narrowed because Getty withdrew two primary copyright claims in the course of the litigation. The copyright claim concerning the actual training process was dropped because the training took place outside the UK, leaving the legality of UK-based training an open question. Similarly, the claim concerning infringing outputs was abandoned after Stability blocked the relevant prompts.
Nevertheless, the Court did find in Getty’s favor regarding trademark infringement. It held that Stability AI infringed Getty’s trademarks where users prompted the model to generate images that reproduced Getty’s watermarks. Although the Court noted these instances were historic and limited, the finding establishes that AI outputs retaining branding elements can trigger liability.
Since the legality of AI training and liability for potentially “memorized” content remain unanswered by the court, the focus is now firmly on the UK Government’s response to its consultation on copyright and AI, where industry stakeholders and rights holders continue to lobby for either broad fair use exceptions or strict “opt-out” licensing regimes.
Germany
While the UK court in Getty found that mathematical model weights are not a form of reproduction, the Munich Regional Court, ruling a week later in GEMA v. OpenAI, took a strict stance on “memorization” and reached the opposite conclusion where a model can in fact reproduce protected content.
Ruling in favor of the collecting society GEMA, the Court held that OpenAI had infringed copyright by using protected song lyrics to train its GPT models. The Court found that the model’s ability to output these lyrics meant they were “memorized” in the model weights, a finding that established two separate acts of reproduction:
- The “memorization” in the model parameters was deemed a form of “fixation” under copyright law, constituting a reproduction in the model itself.
- The subsequent display of the lyrics to the user via the platform constituted an independent reproduction in the output.
The Court explicitly rejected OpenAI’s reliance on the Text and Data Mining (TDM) exception. The Court reasoned that TDM exceptions cover only reproductions necessary for the analytical purpose of data mining, and cannot be used to justify the permanent embodiment of works in the trained model.
We believe that this ruling poses a severe challenge for AI developers operating in Germany, suggesting that if a model can reproduce training data (memorization), the TDM exception will not apply. It places a heavy burden on developers to filter training data or implement guardrails to prevent the reproduction of protected works.
US
Since our last client update covering the landmark ruling in Bartz v. Anthropic, the saga has reached a historic conclusion.
In late September 2025, Judge William Alsup approved a $1.5 billion settlement between Anthropic and a class of authors – the largest copyright settlement in US history. This settlement specifically addresses the court’s finding that while training on lawfully acquired books could be fair use, the use of pirated datasets constitutes infringement.
Under the terms of the settlement, Anthropic will pay approximately $3,000 per title for books sourced from these pirate libraries; it is also required to destroy the infringing datasets. This massive payout reinforces the critical lesson from the Bartz decision: while the fair use door remains open for legitimate training, the use of pirated sources carries substantial risk.
Note on US Regulatory Development: In May 2025, the U.S. Copyright Office published a pre-publication version of its final report on Generative AI Training (Part 3), which has not yet been formally published. This unusual delay is widely attributed to institutional leadership changes following the report’s initial release. Though the pre-publication version is not formally binding, its influential findings – which generally favor the rights of copyright holders – are expected to shape future legal discussions regarding the scope of fair use for AI training. However, given the political turmoil within the Copyright Office, it remains uncertain whether these findings will remain unchanged in the final, official publication.
Herzog will continue to monitor the status of this critical report and provide updates upon its formal release.