The Rise of Generative AI in Gaming and Its Legal Challenges (Part 2)

The first part of this series underscored the opportunities that generative AI (“GenAI”) presents for the gaming industry. It also detailed the unique issues that emerge in contract law. This second part delves into the effects of generative AI in games on intellectual property, platform law, and forthcoming AI regulations in the EU.


Read the first part in this series here.

How does GenAI in games impact Intellectual Property?

a) What do developers need to consider when creating their own GenAI Tools for Games?

When creating their own GenAI tools, developers must consider the legality of using training data like graphics (e.g., landscapes) or game elements (e.g., avatars, items), which might be copyrighted. Use of such components for training AI models typically requires the copyright holder's consent, often obtained through licensing.

However, Art. 4 EU Directive 2019/790 could allow content reproduction for training if the content is lawfully accessible online and the rights holder has not reserved its use (opt-out). This exception may be useful when dealing with text and images, which are often freely available online in large quantities. However, the situation is different for detailed graphics or other game contents, which are not as widely available online. Thus, combining proprietary content with licensed content seems to be the more practical option for training AI models in many cases.

b) What are the key copyright issues when developers are using GenAI?

GenAI may help developers streamline game development and create gaming content like graphics and storylines. Using GenAI raises two key copyright issues: (1) content protectability and (2) liability risks.

AI-generated content might not be copyright protected if created without human oversight, as CJEU case law requires that a work reflect the author's personality to qualify for copyright protection. Where the AI controls the creative process and the developer merely selects among generated options, protection may be lost. If AI-generated elements like game characters are in the public domain, developers can use them but cannot prevent others from doing so as well. This highlights the importance of human involvement in creating significant game IPs to secure proper copyright protection. The more significant the IP, the more necessary human intervention becomes.

Additionally, AI-generated content must be scrutinized for potential infringements of others' rights in the same way as human-created works. Examples of potential violations include copying other game characters or using other copyrighted elements.


c) Can developers’ prompts be copyright protected?

Over time, developers might develop and reuse certain prompts, and these prompts could become a valuable asset. If created by humans, such prompts can be copyright protected, allowing developers to prohibit third-party use or license the prompts to others. The requirements for this type of protection are not high: the prompt should meet the 11-word guideline from CJEU case law and should not be limited to pure technicalities or previously known prompts.

d) What needs to be considered when GenAI creates content in-game in real-time?

The situation becomes more complex when AI autonomously creates new content in-game and in real-time. GenAI could shape the game world by generating completely new non-player characters (NPCs), quests, game mechanics, storylines, or items. In this scenario, the AI essentially becomes the game’s director, controlling the narrative and the user experience. In this case, strict safeguards may help to prevent the AI-generated content from violating intellectual property (IP) rights and other regulations (e.g., youth protection laws). Real-time AI safeguards and platform-like real-time content moderation could be used to respond to potential violations.
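The real-time safeguards described above can be pictured as a screening step that every AI-generated asset passes before it enters the live game. The following Python sketch is purely illustrative; the class, function, and list names (`GeneratedAsset`, `screen_asset`, `PROTECTED_NAMES`, `AGE_RESTRICTED_TERMS`) are invented for this example, and real systems would rely on proper IP-matching and youth-protection classification services rather than keyword lists.

```python
from dataclasses import dataclass

@dataclass
class GeneratedAsset:
    kind: str   # e.g. "npc", "quest", "item"
    text: str   # generated description or dialogue

# Toy stand-ins for real IP screening and youth-protection services.
PROTECTED_NAMES = {"famous mascot", "rival hero"}
AGE_RESTRICTED_TERMS = {"gore", "gambling"}

def screen_asset(asset: GeneratedAsset) -> tuple[bool, list[str]]:
    """Return (approved, reasons) before the asset goes live in-game."""
    reasons = []
    lowered = asset.text.lower()
    if any(name in lowered for name in PROTECTED_NAMES):
        reasons.append("possible third-party IP reference")
    if any(term in lowered for term in AGE_RESTRICTED_TERMS):
        reasons.append("youth-protection flag")
    return (not reasons, reasons)

approved, why = screen_asset(GeneratedAsset("npc", "A merchant selling gambling tokens"))
print(approved, why)  # False ['youth-protection flag']
```

In practice, flagged assets would be routed to the human-supervised moderation layer mentioned above rather than silently dropped.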

e) Which safeguards are necessary when players integrate their AI generated content into games?

Developers should strategize for players integrating AI-generated content into their games, considering the upcoming Digital Services Act (DSA), effective February 17, 2024. The DSA regulates intermediary services that store third-party information, classifying them as hosting providers, and as online platforms if they disseminate that information to the public.

Under the DSA, if a developer allows “user-generated content” (UGC) and that function plays a significant role in the game, it must comply with the Notice-And-Action mechanism, comparable to the DMCA. This leads to two scenarios:

  • Players create AI-generated content outside the game and upload it. The developer must remove illegal content upon receiving a sufficiently substantiated notice, as with human-generated content.
  • Players use the developer’s or licensed partners’ AI tools in-game or in offline game editors. In both scenarios, prophylactic safeguards are advisable, such as blocking specific prompts (e.g., prompts for pornographic content or incitement to hatred). Furthermore, carefully supervised content moderation, combining AI and human review, forms part of an overall IP-AI strategy for in-game AI-generated content.
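The notice-and-action mechanism in the first scenario can be sketched as a simple workflow: accept only sufficiently substantiated notices, then act on them after review. This Python sketch is an assumption-laden illustration, not an implementation of the DSA's actual procedural requirements; the class and field names (`NoticeQueue`, `submit_notice`, `act_on_notice`) are invented, and the substantiation check here is deliberately minimal.

```python
from datetime import datetime, timezone

class NoticeQueue:
    """Toy DSA-style notice-and-action workflow for user-uploaded content."""

    def __init__(self):
        self.notices = []
        self.removed_content_ids = set()

    def submit_notice(self, content_id: str, reporter: str, explanation: str) -> bool:
        """Accept a notice only if it is substantiated (here: a non-empty
        explanation of why the content is allegedly illegal)."""
        if not explanation.strip():
            return False
        self.notices.append({
            "content_id": content_id,
            "reporter": reporter,
            "explanation": explanation,
            "received": datetime.now(timezone.utc),
        })
        return True

    def act_on_notice(self, content_id: str) -> None:
        """After review confirms illegality, take the content down."""
        self.removed_content_ids.add(content_id)

queue = NoticeQueue()
queue.submit_notice("asset-1", "rights-holder", "Copies our protected character design")
queue.act_on_notice("asset-1")
```

A real system would additionally log the statement of reasons sent to the uploading player, which the DSA also requires.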

How do GenAI Integrations in Gaming align with Current Platform and AI Regulatory Frameworks?

a) What role does platform law play in AI-generated content in games?

Platform law is increasingly relevant in gaming due to long-standing trends towards a strong integration of UGC into games (see above). Beyond the notice-and-action mechanism, the DSA introduces several regulations that could impact games as online platforms start using GenAI:

  • Developers allowing UGC but limiting AI-generated content in their games may face transparency obligations in their terms and conditions (Art. 14 DSA).
  • Developers must take measures against players who repeatedly generate infringing content with AI tools (Art. 23 DSA).
  • It remains to be seen if the EU Commission will issue guidelines on AI-based dark patterns, such as deep fakes or AI-generated ads that subtly influence player decisions (Art. 25 DSA).
  • AI-generated in-game advertising must also observe the transparency requirements under Art. 26 DSA (e.g., labelling the advertising and indicating the main parameters used to determine to whom it is displayed).
  • Effective AI safeguards that identify developmentally harmful content and keep it away from minors may become an important youth-protection measure for AI-generated content on platforms (Art. 28 DSA).
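The measures against repeat infringers mentioned in the list above (Art. 23 DSA) amount, in engineering terms, to a strike-tracking policy. The sketch below is a hypothetical illustration only: the threshold, the class name `InfringementTracker`, and the idea of suspending AI-generation privileges are assumptions for the example, not a statement of what Art. 23 DSA actually prescribes.

```python
from collections import Counter

class InfringementTracker:
    """Toy repeat-infringer policy: suspend a player's access to AI
    generation after a configurable number of confirmed infringements."""

    def __init__(self, strikes_before_suspension: int = 3):
        self.strikes = Counter()
        self.suspended = set()
        self.limit = strikes_before_suspension

    def record_infringement(self, player_id: str) -> None:
        if player_id in self.suspended:
            return
        self.strikes[player_id] += 1
        if self.strikes[player_id] >= self.limit:
            self.suspended.add(player_id)

    def can_generate(self, player_id: str) -> bool:
        return player_id not in self.suspended
```

Art. 23 DSA also requires a prior warning and proportionate, case-by-case assessment, which any production policy would need to layer on top of a mechanical counter like this.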

b) What effect will AI Regulation / the AI Act have on GenAI in games?

The forthcoming AI Act will apply to AI tools used in game development and those implemented in game mechanics.

Game developers will be classified as “deployers” – essentially commercial users of AI systems. While the non-generative AI systems used in games are unlikely to be classified as “high risk” or “prohibited” under the AI Act, they will not be completely exempt from regulation. Even the remaining “low-risk” systems will need to comply with the basic principles of the AI Act, such as human agency, transparency and fairness. Some GenAI tools will fall under the separate “foundation model” category, which carries its own set of rules (currently under discussion in the trilogue), even for deployers.

Another transparency provision that is potentially relevant to the industry concerns so-called “deep fakes”. If GenAI is used to create a new image based on a real person (e.g., a player or a hired model) in-game, the person’s explicit consent is required. While the AI Act is still in the legislative process, developers using AI should consider its potential impact now in order to adapt their processes early.

The final part in this series will explore consumer and advertising considerations, as well as data protection issues relating to the use of AI in Gaming. Please contact Simon Hembt or Oliver Belitz for further updates or advice in relation to this topic.
