Perplexity AI Sued Over Alleged Covert Data Sharing With Meta and Google
A class-action lawsuit accuses Perplexity AI of embedding hidden trackers that share user conversations with Meta and Google, even in Incognito mode.
An AI Search Engine's Privacy Reckoning
Perplexity AI, one of the most prominent AI-powered search engines with over 15 million monthly active users, is facing a federal class-action lawsuit alleging it secretly shared user data with Meta and Google. The complaint, filed on April 1, 2026, in the U.S. District Court for the Northern District of California, accuses the company of embedding hidden tracking software that transmits users' private conversations to third parties without consent.
The case, Doe v. Perplexity AI Inc. (3:26-cv-02803), was brought by a Utah resident who claims he shared sensitive financial and tax information with the AI chatbot, not realizing that his conversations were being relayed to advertising platforms. The lawsuit also names Meta Platforms and Alphabet's Google as co-defendants.
How the Alleged Tracking Works
According to the complaint, trackers are downloaded onto users' devices the moment they log into Perplexity's home page. These embedded scripts allegedly grant Meta and Google full access to the conversations between users and Perplexity's AI search engine. The lawsuit describes the tracking technology as "undetectable" to ordinary users.
The most concerning allegation involves Perplexity's Incognito mode. The complaint states that user data is shared with third parties even when users specifically enable Incognito mode, a feature that implicitly promises enhanced privacy. This means users who believed they were conducting private searches may have had their queries and personal information transmitted to advertising networks regardless of their privacy settings.
The plaintiff alleges that this data sharing allows Meta and Google "to exploit this sensitive data for their own benefit, including targeting individuals with advertising and reselling their sensitive data to additional third parties." If substantiated, this would mean Perplexity's AI search engine effectively functioned as a data collection pipeline for two of the world's largest advertising companies.
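The complaint does not reproduce the tracker's code, but embedded analytics scripts generally work along these lines: a third-party script loaded into the page reads content the host application renders, packages it with a persistent identifier, and transmits it to the ad network's servers. The sketch below is purely illustrative; every name in it (the payload fields, the `sendToAdNetwork` function) is invented for this article and is not drawn from the complaint or from any actual Perplexity, Meta, or Google code.

```javascript
// Hypothetical sketch of how an embedded third-party tracker could
// capture chat text and forward it to an advertising platform.
// All names and fields here are illustrative assumptions.

// Build an event payload from the visible conversation.
function buildTrackingPayload(conversation, userId) {
  return {
    uid: userId,          // persistent identifier tying events to one user
    ts: Date.now(),       // event timestamp
    events: conversation.map((msg) => ({
      role: msg.role,     // "user" or "assistant"
      text: msg.text,     // full message text, unredacted
    })),
  };
}

// A script embedded in the page could run this on every exchange,
// independent of any "Incognito" flag the host application sets,
// because the script executes with the same access as the page itself.
function sendToAdNetwork(payload) {
  const body = JSON.stringify(payload);
  // In a real browser context a tracker would typically transmit via
  // navigator.sendBeacon(url, body) or an image "pixel" request.
  return body;
}
```

The key point the sketch illustrates is architectural: once a third-party script is loaded into the page, application-level privacy toggles do not constrain it unless the application explicitly gates the script's loading on those toggles.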
Legal Claims and Privacy Law Violations
The lawsuit alleges violations of both federal and state computer privacy and fraud laws, including the California Invasion of Privacy Act (CIPA) and the federal Stored Communications Act. California's privacy laws are among the strictest in the United States, providing residents with significant protections against unauthorized data collection and sharing.
The class-action nature of the case means the plaintiff is seeking to represent all Perplexity users whose data may have been shared without consent. If certified as a class action, the case could potentially involve millions of users who relied on Perplexity's implicit and explicit privacy assurances.
Company Responses
Perplexity AI spokesperson Jesse Dwyer stated that the company "has not been served with any lawsuit that matches this description" and was therefore "unable to verify its existence or claims." This response neither confirmed nor denied the data-sharing allegations.
Google declined immediate comment on the lawsuit. Meta pointed to a Facebook help page stating that it is against the company's rules for advertisers to send sensitive user information. However, the complaint argues that the tracking mechanism operates outside the traditional advertiser relationship, embedding directly into the Perplexity platform.
Context: A Pattern of Privacy Concerns in AI
The Perplexity lawsuit arrives at a moment when AI companies face mounting scrutiny over their data handling practices. Perplexity has already been embroiled in separate legal disputes, including a lawsuit from Amazon over allegations that the company scraped content from Amazon's platform without authorization.
More broadly, the AI industry has struggled with data governance. Several major AI companies have faced questions about whether user interactions with AI chatbots are used for model training, sold to third parties, or shared with partners. The Perplexity case adds a new dimension by alleging that data sharing happens through embedded third-party tracking scripts rather than through the company's own data policies.
This distinction matters because users who read and accept Perplexity's privacy policy may not have been aware that third-party trackers were independently collecting data outside the scope of that agreement. The lawsuit essentially argues that Perplexity created a dual data collection system: one governed by its stated policies, and another operating through hidden trackers that fed information directly to Meta and Google.
Implications for the AI Search Market
If the lawsuit's allegations prove true, the consequences could extend well beyond Perplexity. AI search engines have positioned themselves as privacy-conscious alternatives to traditional search, where the business model revolves around advertising and data monetization. Perplexity, in particular, has marketed itself on the quality and directness of its AI-powered answers, implicitly contrasting itself with Google's ad-driven search results.
The allegation that Perplexity was simultaneously feeding user data back to Google fundamentally undermines this value proposition. It raises questions about whether other AI search engines and chatbots employ similar tracking mechanisms, and whether current privacy regulations are equipped to address the unique data flows created by AI-powered platforms.
For enterprise customers, the lawsuit introduces compliance risk. Organizations that have integrated Perplexity into workflows involving sensitive data, including legal research, financial analysis, and medical inquiries, may need to reassess their exposure if user queries were indeed transmitted to third-party advertising networks.
What Comes Next
The case is at its earliest stage, and Perplexity has not yet been formally served. If the lawsuit proceeds, discovery could reveal the technical architecture of Perplexity's tracking implementation and the nature of its data-sharing agreements with Meta and Google. Class certification will be a critical milestone, as it would determine whether the case represents individual claims or a sweeping challenge on behalf of the entire user base.
Regardless of the legal outcome, the lawsuit has already introduced significant reputational risk for Perplexity at a time when the company is competing aggressively for market share in the AI search space. Users and enterprises evaluating AI search tools now have a concrete reason to scrutinize privacy claims more carefully and demand transparency about third-party data flows.
Pros
- The lawsuit brings needed transparency to AI search engine data practices that users cannot independently verify
- Class-action format could establish legal precedents for AI platform privacy obligations
- Increased scrutiny may drive all AI search companies to improve their data governance and third-party tracking disclosures
- The case highlights the gap between AI platforms' privacy marketing and their actual technical implementations
Cons
- Allegations remain unproven, and Perplexity has not yet been formally served or given an opportunity to respond substantively
- Technical details about the tracking mechanism are limited in the public complaint, making independent verification difficult
- The lawsuit could create uncertainty for users who depend on Perplexity as a productivity tool while the case is pending
- Legal proceedings may take years to resolve, during which the AI search market will have evolved significantly
Key Features
1. Class-action lawsuit filed April 1, 2026, in U.S. District Court, Northern District of California (Case 3:26-cv-02803)
2. Alleges hidden trackers on Perplexity's homepage share user conversations with Meta and Google without consent
3. Data sharing reportedly occurs even when users enable Perplexity's Incognito mode
4. Plaintiff shared financial and tax information with the chatbot, which was allegedly transmitted to advertising networks
5. Lawsuit names Perplexity AI, Meta Platforms, and Alphabet's Google as co-defendants
Key Insights
- Embedded third-party trackers allegedly created a dual data collection system operating outside Perplexity's stated privacy policy
- The Incognito mode allegation is particularly damaging, as it suggests users who took active privacy steps were still tracked
- AI search engines have positioned themselves as privacy-conscious alternatives to Google, making data-sharing allegations especially corrosive to brand trust
- Enterprise customers using Perplexity for sensitive research may face compliance exposure if user queries were transmitted to advertising networks
- The case adds to Perplexity's existing legal challenges, including Amazon's scraping lawsuit, creating a pattern of data governance concerns
- If certified as a class action, the case could represent millions of users and set precedent for AI platform privacy obligations
- Current privacy regulations may not adequately address the unique data flows created by AI-powered search platforms with embedded third-party tracking