Google is facing a class-action lawsuit that alleges the company enabled its Gemini AI assistant inside key communication products without properly informing users.
The complaint, filed in the Northern District of California as Thele v. Google LLC, says the assistant gained access to Gmail, Google Chat and Google Meet data beginning in October.
According to the filing, the AI assistant became active by default, giving it the ability to process emails, attachments, chat messages and meeting information unless users manually disabled the setting.
The lawsuit argues that Google presented Gemini as an optional feature while burying the opt-out control in settings menus, leaving users unaware that their communication data was being analysed.
The plaintiffs say this violates the California Invasion of Privacy Act, which requires consent from all parties when a communication is recorded or intercepted. The suit describes Google’s approach as a form of unreasonable data access that users did not knowingly approve.
Google has not issued a detailed public response. The company typically states that privacy protection and user permission are central principles of its AI design, but it has not commented directly on the allegations in this case.
What The Complaint Claims
The lawsuit outlines how Gemini allegedly interacted with user data. It claims Gemini processed message histories, generated summaries and accessed content streams across Google’s communication services.
The filing states that Gemini performed these actions automatically and that Google did not provide a clear, upfront disclosure explaining how much data would be read or analysed.
The plaintiffs argue that users have long treated Gmail and Chat as private correspondence, and that embedding AI in those services without a transparent onboarding flow breaks that expectation.
They say the required disclosures were not shown in a clear location and that users had no meaningful way of understanding the scope of data processing.
Why The Case Matters
The dispute highlights a wider concern in the technology sector about the boundary between convenience-driven AI features and user expectations of privacy.
As companies embed large-language-model assistants more deeply into messaging, email and workflow tools, these systems can access information that people consider private. When activation occurs automatically, the question becomes whether users are giving real consent or simply being carried into new AI routines without understanding the implications.
For Google and its rivals, the outcome of this case could influence how AI features are introduced, how consent prompts are designed and how providers must explain data flows to users. Courts may examine whether AI-driven analysis within personal communication services should be treated differently from ordinary data processing.
What Happens Next
The next steps involve determining whether the case will proceed as a certified class action and whether Google chooses to fight the allegations or to seek an early settlement.
Observers will watch for any adjustments Google makes to Gemini’s onboarding process, permission screens or data-handling disclosures. Regulatory bodies may also look more closely at how AI assistants operate inside communication ecosystems, especially when the default settings introduce data access at scale.
The lawsuit arrives at a time when tech companies are rapidly expanding AI capabilities. How this case unfolds may influence how those tools are deployed and what standards apply to user consent in the age of embedded AI.
