
- Meta AI had a critical privacy flaw affecting user-submitted prompts.
- Sandeep Hodkasia discovered and ethically reported the issue.
- Meta patched the bug quickly and awarded a $10,000 bounty.
- No evidence of misuse was found, but the incident raises serious concerns about AI data security.
Meta has resolved a serious security flaw in its generative AI platform, Meta AI, that could have allowed logged-in users to access private prompts and responses submitted by others — without their knowledge or consent.
The vulnerability, discovered by cybersecurity expert Sandeep Hodkasia, founder of AppSecure, triggered concern within the tech and privacy communities. Meta quickly patched the bug and awarded Hodkasia a $10,000 bug bounty for responsible disclosure.
How the Vulnerability Was Discovered
On December 26, 2024, Hodkasia was conducting a routine security test on the Meta AI chatbot, part of Meta's broader push into generative AI. While investigating how Meta lets users edit and regenerate prompts, he uncovered a flaw in how the backend handled those requests.
According to Hodkasia, each AI prompt submitted by a user is assigned a unique identifier, or ID number. While monitoring browser network traffic during the prompt-editing process, he discovered that by manually changing that ID in the request, he could retrieve prompts and AI-generated responses belonging to other users.
“The prompt IDs were easily guessable. This could have been automated to access private conversations at scale.”
— Sandeep Hodkasia, AppSecure
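To make the class of flaw concrete, here is a minimal Python sketch of the pattern Hodkasia describes: sequential, guessable IDs and a lookup that never checks ownership. This is a toy model, not Meta's actual code; every name in it is hypothetical.

```python
# Toy model of the vulnerable pattern: sequential IDs, no ownership check.
# All names are hypothetical; this is not Meta's actual implementation.
from itertools import count

_next_id = count(1)   # 1, 2, 3, ... : trivially guessable
_prompts: dict[int, dict] = {}

def save_prompt(owner: str, prompt: str, response: str) -> int:
    """Store a prompt/response pair and return its sequential ID."""
    prompt_id = next(_next_id)
    _prompts[prompt_id] = {"owner": owner, "prompt": prompt, "response": response}
    return prompt_id

def get_prompt_vulnerable(requesting_user: str, prompt_id: int) -> dict:
    """VULNERABLE: returns the record by ID alone.

    requesting_user is accepted but never checked, so any logged-in
    user who guesses an ID can read someone else's conversation.
    """
    return _prompts[prompt_id]

# Alice saves a private conversation; Bob retrieves it by guessing ID 1.
alice_id = save_prompt("alice", "Draft my medical appeal", "...private response...")
print(get_prompt_vulnerable("bob", alice_id))
```

Looping a call like `get_prompt_vulnerable` over a range of integers is exactly the "automated at scale" risk Hodkasia flagged.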
Why This Bug Was Dangerous
The core issue was that Meta's backend failed to verify whether the logged-in user making a request was actually authorized to access the prompt ID it referenced. Without this server-side check, any logged-in user could manipulate request parameters and view another user's confidential content.
In technical terms, the vulnerability was classified as an Insecure Direct Object Reference (IDOR) — a common yet dangerous access control flaw. If weaponized, it could have enabled mass scraping of user conversations, including sensitive or personally identifiable information (PII).
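The textbook fix for an IDOR is a server-side authorization check on every object lookup. Below is a hedged sketch of what such a check looks like in the same toy model; Meta has not published its actual patch, so the details here are illustrative only.

```python
# Patched version of the hypothetical lookup: verify ownership before
# returning anything. Meta's real fix has not been published.

_prompts = {1: {"owner": "alice", "prompt": "...", "response": "..."}}  # toy store

class Forbidden(Exception):
    """Signals an HTTP 403: the user is authenticated but not authorized."""

def get_prompt_secure(requesting_user: str, prompt_id: int) -> dict:
    record = _prompts.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        # Use one error for both "missing" and "not yours" so attackers
        # cannot use the response to probe which IDs exist.
        raise Forbidden(f"prompt {prompt_id} is not accessible to this user")
    return record

print(get_prompt_secure("alice", 1))  # owner: allowed
# get_prompt_secure("bob", 1)         # would raise Forbidden
```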
Fortunately, Hodkasia reported the bug directly to Meta through its bug bounty program before any known exploitation occurred.
Meta Responds and Patches the Bug
Meta confirmed the vulnerability in early January 2025 and deployed a fix by January 24, 2025. A company spokesperson, Ryan Daniels, shared the following statement:
“We appreciate the researcher’s responsible disclosure. We found no evidence of abuse and patched the issue quickly to protect our users.”
— Ryan Daniels, Meta spokesperson
As part of its bug bounty program, Meta awarded Hodkasia $10,000, a typical reward for high-severity privacy flaws that are reported ethically.
Why This Matters for AI Privacy
This incident comes at a time when AI tools like Meta AI, ChatGPT, and Google Gemini are seeing explosive global adoption. These platforms handle everything from casual chats to enterprise data — raising major concerns around privacy, consent, and ethical use.
Though Meta stated there was no known abuse, the fact that such a bug existed at all reinforces the need for robust access controls and independent security testing of AI platforms.
AI Privacy Challenges Are Growing
In the past six months alone:
- OpenAI has faced scrutiny over prompt data handling.
- Google’s AI search faced backlash for retaining user interactions.
- Meta AI's earlier releases were criticized for hallucinations and data misuse.
The potential for leaked or exposed prompts isn't just a security issue — it's a trust issue. And in the competitive race to dominate AI, trust could make or break a platform.
Expert View: Lessons from the Meta AI Bug
Cybersecurity experts say this incident should serve as a wake-up call for AI developers:
“AI systems are only as private as the architecture behind them. Every prompt, every interaction needs to be treated like personal data.”
— Rhea Talwar, Privacy Researcher, Global AI Ethics Forum
They argue that real-time privacy audits, token-based access validation, and end-to-end encryption should be baseline features — not afterthoughts.
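As one concrete reading of "token-based access validation," here is a hedged standard-library sketch: give each prompt an unguessable random ID, and bind access tokens to a specific user and object with an HMAC. The helper names are assumptions for illustration, and unguessable IDs are defense in depth, not a substitute for the authorization check shown earlier.

```python
# Hedged sketch of two complementary safeguards, using only the standard
# library. Names are illustrative, not any platform's real API.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)   # kept server-side, never sent to clients

def new_prompt_id() -> str:
    """128-bit random ID: unlike a sequential counter, not enumerable."""
    return secrets.token_urlsafe(16)

def access_token(user: str, prompt_id: str) -> str:
    """Bind a user to an object with an HMAC the client cannot forge."""
    msg = f"{user}:{prompt_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def validate(user: str, prompt_id: str, token: str) -> bool:
    """Constant-time check that the token matches this user and object."""
    return hmac.compare_digest(token, access_token(user, prompt_id))

pid = new_prompt_id()
tok = access_token("alice", pid)
print(validate("alice", pid, tok))   # True: Alice presents her own token
print(validate("bob", pid, tok))     # False: the token is bound to Alice
```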
Background: What Is Meta AI?
Meta AI, launched in September 2023, is Meta Platforms’ flagship AI assistant powered by the company’s Llama language models. It’s deeply integrated across Facebook, Instagram, WhatsApp, and Messenger. In April 2025, Meta released a standalone Meta AI app offering advanced conversational tools, voice commands, image generation, and a personalized, context-aware experience.
This rapid rollout hasn’t been without controversy. Users and security experts have raised red flags about Meta AI’s Discover feed, which at one point exposed snippets of personal conversations, some including legal, medical, or otherwise sensitive prompts. Though Meta later introduced pop-up warnings and revised its privacy notices, the delay raised questions about the platform’s readiness for large-scale public use.
Under the hood, Meta AI relies on the Llama family of large language models. Llama 4, released in April 2025, powers most of the app’s generative capabilities. With investments in supercomputing clusters like Prometheus and Hyperion, Meta has signaled that generative AI is now central to its long-term strategy and infrastructure growth.
Want to see how Meta is bringing AI to everyday communication? Don’t miss our feature on WhatsApp’s latest AI integration: WhatsApp Just Got Smarter: Introducing AI Assistants.