Google has initiated legal proceedings against three unidentified individuals, referred to as “DOES 1-3,” for their involvement in a deceptive scheme targeting users with a fraudulent version of Google’s AI chatbot Bard.
The lawsuit, filed on November 13, alleges that these scammers exploited Google’s trademarks, including “Google,” “Google AI,” and “Bard,” to create misleading advertisements.
According to Google, the scammers crafted deceptive social media pages and content, mimicking Google’s branding to entice unsuspecting users into downloading what they believed to be updates for Bard and other AI products. In reality, these downloads delivered malware onto users’ computers.
Once installed, the malicious software reportedly harvests users’ social media login credentials, with a particular focus on businesses and advertisers. Google asserts that users were duped into downloading the malware through carefully crafted links on these deceptive pages.
In response, Google is seeking legal remedies including damages, reimbursement of attorney fees, recovery of all profits gained by the scammers, a comprehensive restraining order, and permanent injunctive relief to address the harm caused by the defendants. The company says its aim is to halt the deceptive practices and safeguard users from falling victim to such schemes.
The lawsuit comes as AI services, including chatbots, have seen a major increase in users worldwide. According to recent data, Google’s Bard receives 49.7 million unique visitors each month, while OpenAI’s popular chatbot ChatGPT has more than 100 million monthly users and nearly 1.5 billion monthly visits to its website.
The lawsuit against the scammers is part of a broader trend where major players in the AI sector, such as Google, OpenAI, and Meta (parent company of Facebook and Instagram), find themselves entangled in legal battles. These legal challenges underscore the complex landscape surrounding AI technologies and the need for vigilant measures to protect users from deceptive practices.