Google has recently announced a commitment to support users of its Google Cloud and Workspace platforms who may face accusations of intellectual property infringement related to generative artificial intelligence (AI) systems. This move aligns Google with other companies, such as Microsoft and Adobe, that have made similar assurances.
In a blog post, Google clarified that legal protection would be extended to customers using products integrated with generative AI capabilities. This protection covers seven specific products:
- Duet AI in Workspace: encompassing text generation in Google Docs and Gmail, as well as image generation in Google Slides and Google Meet
- Duet AI in Google Cloud
- Vertex AI Search
- Vertex AI Conversation
- Vertex AI Text Embedding API
- Visual Captioning on Vertex AI
- Codey APIs
Notably, Google’s Bard search tool is excluded from this legal protection.
Google’s approach to intellectual property indemnification involves a two-pronged strategy. First, the company extends protection to both the training data and the output generated by its foundational AI models. This means that if legal action is taken against a customer because Google’s training data includes copyrighted material, Google will take responsibility for addressing the legal challenge.
The indemnity related to training data is not entirely new, but Google acknowledges that its customers desired clear and explicit confirmation that this protection also extends to scenarios involving copyrighted material in the training data.
Second, Google commits to protecting users who face legal action over the results they obtain while using Google’s foundational models, including situations where users generate content that resembles published works. However, this protection is contingent on users not intentionally using the AI to infringe upon the rights of others.
This move by Google mirrors similar statements and commitments made by other tech giants like Microsoft and Adobe, who have also pledged to assume legal responsibility for potential copyright issues related to their AI products. For instance, Microsoft committed to assuming responsibility for enterprise users of its Copilot products, and Adobe vowed to safeguard enterprise customers from copyright, privacy, and publicity rights claims when using Firefly.