Fake Claude AI Ads Exploit “Claudy Day” Flaws to Steal Data from Enterprise Users
What Happened — Researchers uncovered a set of vulnerabilities in Anthropic’s Claude AI, dubbed “Claudy Day,” that allow threat actors to embed malicious payloads in seemingly legitimate Google‑style ads displayed within the Claude interface. When users click these ads, hidden scripts can exfiltrate session tokens, API keys, and other sensitive data to attacker‑controlled servers.
Why It Matters for TPRM —
- The flaw targets SaaS‑based LLM platforms that many third‑party vendors embed in their products, expanding the attack surface beyond the AI provider.
- Data exfiltration can compromise downstream customers, leading to regulatory exposure and loss of intellectual property.
- The attack leverages trusted ad delivery channels, making detection difficult for traditional security controls.
Who Is Affected — Technology and SaaS firms that integrate Claude AI via API, vendors of cloud‑based productivity tools that embed Claude, and any organization whose employees interact with Claude in a web or embedded environment.
Recommended Actions —
- Review contracts and security questionnaires for Claude AI usage; confirm that ad‑rendering is disabled or sandboxed.
- Enforce strict content‑security policies (CSP) on pages embedding Claude to block unauthorized script execution.
- Conduct penetration testing focused on third‑party UI components and ad injection vectors.
- Monitor network traffic for anomalous outbound connections to unknown domains after Claude interactions.
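The CSP recommendation above can be sketched as a small helper that assembles a restrictive policy header for any page embedding a chat widget. The directive values and the `claude.ai` embed origin are illustrative assumptions, not vendor guidance; tune them to your actual integration.

```python
# Sketch: build a Content-Security-Policy header value for a page embedding an
# LLM chat widget. Directive values are illustrative assumptions.

def build_csp(embed_origin: str = "https://claude.ai") -> str:
    """Return a restrictive CSP that blocks inline and unknown scripts."""
    directives = {
        # Only first-party scripts; no inline <script> or eval().
        "script-src": ["'self'"],
        # Allow the widget to load only in frames from the embed origin.
        "frame-src": [embed_origin],
        # Limit outbound fetch/XHR targets, cutting off exfiltration endpoints.
        "connect-src": ["'self'", embed_origin],
        # Disallow plugins and embedded objects entirely.
        "object-src": ["'none'"],
        "default-src": ["'self'"],
    }
    return "; ".join(
        f"{name} {' '.join(values)}" for name, values in directives.items()
    )
```

The returned string would be served as the `Content-Security-Policy` response header on every page that renders the embedded interface.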
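The traffic-monitoring recommendation amounts to an allowlist check over outbound connection records. A minimal sketch follows; the record shape and the domain names in the allowlist are hypothetical examples, not a definitive detection rule.

```python
# Sketch: flag outbound connections to destinations outside an approved
# allowlist. Record format and allowlist entries are hypothetical.

APPROVED_DOMAINS = {"claude.ai", "anthropic.com"}

def flag_unknown_destinations(connections: list[dict]) -> list[dict]:
    """Return connection records whose destination is not allowlisted.

    Each record is expected to look like {"src": ..., "dest_domain": ...}.
    Subdomains of approved domains are treated as approved.
    """
    def approved(domain: str) -> bool:
        return any(
            domain == d or domain.endswith("." + d) for d in APPROVED_DOMAINS
        )

    return [c for c in connections if not approved(c["dest_domain"])]
```

Records flagged by a check like this, occurring shortly after Claude interactions, would be candidates for the anomalous exfiltration traffic described above.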
Technical Notes — The vulnerability stems from improper validation of ad markup returned by Claude’s ad‑service endpoint, allowing HTML/JavaScript injection. No public CVE has been assigned yet; the issue is classified as a “logic flaw” rather than a code‑level bug. Exfiltrated data includes API keys, session cookies, and any text entered into the Claude chat window. Source: HackRead
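The root cause described above, third‑party markup rendered without validation, is the classic HTML injection pattern. An allowlist sanitizer built on Python's standard `html.parser` illustrates the general mitigation; the tag and attribute allowlists here are assumptions for the sketch, and a production system should use a vetted sanitization library rather than this code.

```python
from html import escape
from html.parser import HTMLParser

# Sketch: allowlist-based sanitizer for untrusted ad markup. Allowlists are
# illustrative assumptions; use a vetted sanitization library in production.

ALLOWED_TAGS = {"a", "b", "i", "p", "span", "img"}
ALLOWED_ATTRS = {"href", "src", "alt", "title"}

class AdSanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self._skip = None  # set while inside a dropped <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in {"script", "style"}:
            self._skip = tag  # drop the tag and its contents entirely
            return
        if tag not in ALLOWED_TAGS:
            return  # drop <iframe>, <form>, etc., keep their inner text
        safe = " ".join(
            f'{k}="{escape(v or "", quote=True)}"'
            for k, v in attrs
            # drop event handlers (onclick, onerror, ...) and javascript: URLs
            if k in ALLOWED_ATTRS and not (v or "").lower().startswith("javascript:")
        )
        self.out.append(f"<{tag} {safe}>" if safe else f"<{tag}>")

    def handle_endtag(self, tag):
        if self._skip == tag:
            self._skip = None
            return
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self._skip is None:
            self.out.append(escape(data))

def sanitize_ad_markup(markup: str) -> str:
    s = AdSanitizer()
    s.feed(markup)
    return "".join(s.out)
```

Applied to the attack described here, a check like this on the ad‑service response would strip the injected script payload while keeping benign ad copy intact.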