What Happened
A class-action lawsuit was filed last week in California against Meta [1]. The suit claims Meta can access WhatsApp messages despite end-to-end encryption. It includes international plaintiffs from five countries and cites anonymous whistleblowers [2]. It describes internal systems that allegedly let employees request access to messages through a “task” mechanism, with no decryption required [3].
The lawsuit also references claims from Ataullah Beg, described as WhatsApp’s former security head [4]. According to the suit, Beg discovered during internal testing that roughly 1,500 engineers had unrestricted access to user data, including sensitive personal information, with no audit trail [4].
Separately, Bloomberg reported that U.S. law enforcement has been investigating similar allegations. Special agents with the Department of Commerce were examining claims by former Meta contractors that they and some Meta staff had “unfettered” access to WhatsApp messages. A related whistleblower complaint was also filed with the SEC in 2024 [5]. Neither the investigation nor the SEC complaint had been previously reported.
Those are the claims. I have no way to verify them.
What Was Said, and What Wasn’t
Meta called the allegations “categorically false and absurd” and a “frivolous work of fiction” [1]. WhatsApp chief Will Cathcart called the lawsuit “a no-merit, headline-seeking lawsuit” and stated that WhatsApp can’t read messages because the encryption keys are stored on your phone [6].
That is a factual statement about how the Signal protocol works. The keys are on your device, and messages are encrypted with them both in transit and at rest on Meta’s servers. WhatsApp’s implementation of the Signal protocol is well-regarded, and independent security researchers have audited it [7][8].
This is also not the first time questions have been raised about what happens beyond the encryption layer. In 2021, a ProPublica investigation found that WhatsApp employed over 1,000 contract workers who reviewed millions of user messages each week, in unencrypted form, after users filed reports [9]. The encryption worked as designed. The access happened anyway, through a different path.
Here is what I noticed. The responses from Meta focused entirely on defending the encryption mechanism itself. No one from Meta or WhatsApp said anything along the lines of:
- We do not analyze your messages on your device.
- We do not extract inferences from your conversations.
- We do not use any information derived from your message content for profiling or advertising.
- We do not run machine learning models against your decrypted data.
The defense was about the encryption protocol. Not about what happens to data after it’s decrypted on your phone so you can read it.
I’m not saying these omissions are evidence of wrongdoing. Companies respond to specific allegations, and these specific points were not what the lawsuit alleged. But I think it’s worth noticing what’s being defended and what isn’t, because it tells you something about where the conversation is and isn’t happening.
A Jumping-Off Point
The lawsuit will play out in court. I have no idea what it will prove or disprove, and the claims about backdoors and employee access systems will be tested on their own terms.
I want to use it as a jumping-off point to revisit a different technical capability, one I’ve been talking about for years and have written about in two previous articles. This capability has nothing to do with the lawsuit’s specific claims, and it exists regardless of whether the lawsuit succeeds or fails. Even if Meta’s denial is entirely truthful, even if there is no encrypted backdoor of any kind, what I’m about to describe is still possible.
I am describing a capability, not accusing anyone of using it.
The Capability: On-Device Analysis of Encrypted Data
In Part 1 of this series (June 2025), I demonstrated how on-device machine learning models can analyze data that is protected by end-to-end encryption. The key insight is simple: your phone has to decrypt messages locally so you can read them. Once decrypted on your device, any app with access to that data can run ML models against it. The encryption is never broken. The data never leaves your device unencrypted. But the models extract insights, metadata, and inferences locally, and those results can be sent to a server.
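To make that concrete, here is a minimal sketch of the pattern using Apple’s NaturalLanguage framework, which ships with every iPhone. The messages and the upload step are hypothetical stand-ins; the point is that once the app can read the text, scoring it on-device takes a few lines of code and no network access.

```swift
import NaturalLanguage

// Hypothetical: messages the app has already decrypted for display.
let decryptedMessages = [
    "Can't wait for the trip next week!",
    "I'm really worried about the test results."
]

// NLTagger ships with the OS: no model download, no network access needed.
let tagger = NLTagger(tagSchemes: [.sentimentScore])

var inferences: [[String: String]] = []
for message in decryptedMessages {
    tagger.string = message
    let (tag, _) = tagger.tag(at: message.startIndex,
                              unit: .paragraph,
                              scheme: .sentimentScore)
    // The sentiment score ranges from -1.0 (negative) to 1.0 (positive).
    inferences.append([
        "text_hash": String(message.hashValue),
        "sentiment": tag?.rawValue ?? "0"
    ])
}

// Only the derived inferences would ever need to leave the device.
// uploadInferences(inferences)   // hypothetical upload step
```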
I ran experiments on my own iPhone. A 17MB object detection model analyzed 1,000 photos in under 6 seconds. An image classifier processed 3,000 photos in 12 seconds. Both ran silently in the background. Neither required network access during analysis. Both could build a comprehensive profile.
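For illustration, a rough sketch of that photo-analysis loop is below. It uses Vision’s built-in classifier rather than the custom 17MB model from my experiments, and `photoURLs` is assumed to be whatever images the app can already read, but the shape of the loop is the same.

```swift
import Vision

// Sketch: classify every readable photo on-device and keep only the labels.
func classify(photoURLs: [URL]) -> [String: Set<String>] {
    var labelsPerPhoto: [String: Set<String>] = [:]
    for url in photoURLs {
        let request = VNClassifyImageRequest()
        let handler = VNImageRequestHandler(url: url)
        do {
            try handler.perform([request])   // runs entirely on-device
        } catch {
            continue                         // skip unreadable images
        }
        let labels = (request.results ?? [])
            .filter { $0.confidence > 0.8 }  // keep confident labels only
            .map { $0.identifier }           // e.g. "beach", "document", "pet"
        labelsPerPhoto[url.lastPathComponent] = Set(labels)
    }
    return labelsPerPhoto
}
```

The timings from my experiments work out to a few milliseconds per photo, which is why a library of thousands of images can be covered in seconds.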
In Part 2 (January 2026), I looked at what happens when those simple classifiers are replaced with full language models. Apple’s Foundation Models framework and Google’s Gemini Nano are roughly 3-billion-parameter LLMs that run entirely on your device and are available to any app developer. Apple’s framework is text-only for third-party apps, which makes it directly relevant to messaging. These models don’t just tag content. They can extract structured profiles from conversations, understand emotional context, and infer intent.
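As a sketch of what that looks like in practice: Apple’s framework supports guided generation, where the model fills in a typed struct instead of returning free text. The `ConversationProfile` type and its field descriptions below are my own invention, and the exact API details may differ slightly from what I show here.

```swift
import FoundationModels

// Hypothetical output type: guided generation fills in a typed struct
// instead of returning free-form text.
@Generable
struct ConversationProfile {
    @Guide(description: "Topics the person appears interested in")
    var interests: [String]
    @Guide(description: "Overall emotional tone of the conversation")
    var mood: String
    @Guide(description: "Upcoming plans or intentions mentioned")
    var plans: [String]
}

func profile(conversation: String) async throws -> ConversationProfile {
    // Runs against the ~3B-parameter on-device model; no network involved.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize this conversation into a profile:\n\(conversation)",
        generating: ConversationProfile.self
    )
    return response.content
}
```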
Part 2 also covered a detail that I think is underappreciated: with LLMs, a company doesn’t need to ship a new model to change what it extracts from your data. It just changes the prompt. A few kilobytes of text, delivered as a server-side configuration update. No app update needed. No App Store review. A cloud-based LLM could even generate follow-up prompts based on initial results, creating an iterative pipeline that goes deeper over time.
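A minimal sketch of that pattern, assuming a hypothetical config endpoint and the same on-device session API as in the previous sketch:

```swift
import Foundation
import FoundationModels

// Hypothetical endpoint: the prompt is just configuration data.
let configURL = URL(string: "https://config.example.com/extraction-prompt.txt")!

func fetchAndRun(conversation: String) async throws -> String {
    // Download today's prompt: a few kilobytes of text, no app update,
    // no store review, no change to the binary running on the device.
    let (data, _) = try await URLSession.shared.data(from: configURL)
    let prompt = String(decoding: data, as: UTF8.self)

    // Feed the server-chosen prompt to the on-device model.
    let session = LanguageModelSession()
    let response = try await session.respond(to: prompt + "\n\n" + conversation)
    return response.content
}
```

The binary never changes; only the text of the prompt does, which is exactly why no review process sees it.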
The conclusion from both articles: encryption protects data in transit and at rest. But your phone must decrypt that data locally to display it. Once decrypted, on-device models can analyze everything. This is not speculative. The frameworks exist. The hardware supports it. I demonstrated it works.
The Broader Point
This goes beyond WhatsApp and beyond this lawsuit. Every app that handles end-to-end encrypted data decrypts it on your device for display. That includes Signal, iMessage, Telegram, and others. Apple and Google ship ML frameworks with their operating systems. Any app can use them. The models run quietly and efficiently.
I have no evidence that any of these apps are doing on-device profiling. What I’m saying is that the capability exists, it is accessible to any developer, and there is currently no way for a user to know whether an app is running models against their decrypted data. There are no mandatory disclosures, no OS-level indicators, and no practical way to audit it.
The privacy labels on the App Store and Google Play are self-reported by developers. Apple and Google can’t practically verify what every app does on millions of devices. GDPR considers inferences about individuals to be personal data, but enforcement in the context of on-device processing is untested.
Where I Think the Conversation Should Be
I think the conversation about privacy in messaging is stuck at the encryption layer. People argue about whether encryption is broken. Companies defend encryption when challenged. But the more interesting question is what can happen to data that is properly encrypted in transit but decrypted on your device. That question gets almost no attention.
The capability to analyze your decrypted data on your own device, silently and efficiently, exists today. The hardware improves every year. The models get smaller and more capable. The frameworks become more accessible. That part isn’t alleged by any lawsuit. It’s engineering, and I’ve already demonstrated it.
There is a fundamental information asymmetry. These companies know exactly what their apps do on your device. You don’t. You can’t audit the code. You can’t inspect the models. You can’t see what inferences are being made. The rules are set up so that you never have all the information to make a fully informed decision about your own privacy. Being aware of what is technically possible is the most practical thing any of us can do. It won’t give you certainty. But it gives you a better basis for deciding where you put your data and how much trust you place in the apps that handle it.
The lawsuit will play out in court. Maybe discovery forces disclosure about how WhatsApp handles data on-device. Maybe the suit is dismissed. Either way, I hope the conversation moves past the encryption layer. Whether anyone is using this capability is a question of fact. That anyone could is not.
This is Part 3 of a series. Part 1 examined how on-device ML could profile users despite E2EE. Part 2 covered how foundation language models amplify those risks. This installment discusses the recent WhatsApp lawsuit and revisits why these technical capabilities matter.
This article was written by me, a human. I used an LLM-powered grammar checker for final review.
Sources
- Lawsuit Claims Meta Can See WhatsApp Chats in Breach of Privacy - Bloomberg
- Dawson et al v. Meta Platforms, Inc. et al, Case No. 3:26-cv-00751 - Justia
- Breaking Down the WhatsApp Whistleblower Lawsuit - Tech Policy Press
- US Investigated Claims WhatsApp Chats Not Private - Bloomberg
- Lawsuit claims WhatsApp has a gaping security hole. Experts doubt it. - The Washington Post
- A Formal Security Analysis of the Signal Messaging Protocol - Cohn-Gordon et al., 2016 (IACR ePrint)
- Researchers find Signal approach is cryptographically sound - CyberScoop
- How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users - ProPublica, 2021