In a significant move to protect its users and combat fraudulent activity, tech giant Google is suing two groups accused of leveraging its brand and services for nefarious purposes: one for promoting malware-laden artificial intelligence (AI) chatbots, and another for abusing copyright claims to take down competitors’ websites. This legal offensive not only highlights Google’s commitment to safeguarding online integrity, but also underscores the substantial legal costs major tech firms must bear to keep users secure. Notably, Google reports scanning billions of apps for malware and blocking over 100 million phishing attempts every day. According to Halimah DeLaine Prado, Google’s general counsel, a successful crackdown on these scammers could also establish a precedent for other major tech companies to combat similar AI scams and allocate resources toward safeguarding their platforms, especially as the technology continues to evolve.
Cracking down on scammers: Google’s legal offensive against AI deception
The first lawsuit — filed in federal court in California — alleges that three individuals in Vietnam, masquerading as Google, created fake ads for Google’s new AI chatbot, Bard. Specifically, the scammers set up social media pages and online ads encouraging people to “download” Bard — which, in reality, is free and isn’t downloadable. The suit accuses the perpetrators of trademark infringement and breach of contract, and seeks an order halting the fraudsters from setting up fake profiles, along with the ability to have the fraudulent pages disabled with U.S. domain registrars. “As public excitement in new generative AI tools has increased, scammers are increasingly taking advantage of unsuspecting users,” comments DeLaine Prado.
When users clicked on these ads, instead of accessing Bard, they unwittingly downloaded malware designed to steal their social media account login details. Google is therefore seeking a court order to disable the fake pages and stop the scammers from creating more. The lawsuit specifically accuses the defendants of trademark infringement and breach of contract, noting that they particularly targeted small businesses and users with business and advertiser social media accounts. According to the lawsuit, Google “does not know the true names and capacities” of the defendants, who are being sued “under fictitious names” — Google will substitute the defendants’ true names as soon as they come to light. Notably, Google initially traced the defendants through Google Drive links involved in the fraudulent activity, which were generated by users who consented to Google’s Terms of Service for Vietnam.
Google’s legal strategy: tackling online scams and mounting legal expenses
This legal move is part of Google’s “ongoing legal strategy to protect consumers and small businesses, and establish needed legal precedents in emerging fields of innovation,” DeLaine Prado explains. Notably, Google has initiated around 300 takedown requests in response to these deceptive advertisements since April 2023. Internet and phone scams are steadily increasing: in 2021, online scams cost Americans roughly $6.9 billion, the Federal Bureau of Investigation’s Internet Crime Report reveals, and the following year, in 2022, that figure soared to more than $10.2 billion. AI-related scams in particular are increasingly prevalent — for example, scams using deepfake technology to impersonate individuals or manipulate content in ways that deceive victims into handing over money or valuable assets under false pretenses. Amidst this escalating threat landscape, it’s crucial that people remain vigilant and take proactive steps to protect themselves from tech scams.
Moreover, legal expenses are set to rise as a result of AI-related scams, since verifying the authenticity of forged content may require additional expertise, driving up the cost of legal proceedings. Businesses that routinely convert images to PDF format, for example, may unknowingly expose themselves to image fraud facilitated by AI-generated forgeries — a potential source of legal liability and remediation expense. Notably, while 82% of legal professionals think generative AI can potentially be used in legal work, only 51% endorse its use, with 24% opposed and 25% undecided, a Thomson Reuters survey found.
Combatting DMCA abuse
Google is also pursuing legal action against another group of scammers for abusing the Digital Millennium Copyright Act (DMCA). These scammers exploited the DMCA by submitting fraudulent takedown notices against competitors, resulting in the removal of legitimate content. Using dozens of Google accounts, they filed thousands of fake copyright claims against other companies, ultimately taking down over 100,000 business websites and “costing them millions of dollars and thousands of hours in lost employee time,” says DeLaine Prado. This devastating impact underscores the escalating legal costs companies like Google bear in combating online exploitation and safeguarding the integrity of their platforms. Digital rights advocates are likewise warning about the growing risk posed by DMCA abuse, further underscoring the importance of Google’s broader legal strategy in protecting against such exploitation.
“Just as AI fraudsters and copyright scammers hope to fly under the radar — we believe that appropriate legal action and working with government officials puts scammers squarely in the crosshairs of justice, promoting a safer internet for everyone,” DeLaine Prado notes. Indeed, as tech companies grapple with the mounting legal costs associated with addressing tech scams, Google’s initiatives underscore the pressing need for concerted action to protect users in the ever-evolving landscape of online security.