Prompt Injection Threats Target Amazon Bedrock Multi‑Agent Applications
What Happened – Unit 42 demonstrated that adversaries can inject malicious prompts into Amazon Bedrock’s multi‑agent framework, allowing them to discover collaborator agents, deliver payloads, and execute unauthorized actions. The attacks succeed only when Bedrock’s Guardrails are mis‑configured; no underlying vulnerability in the service itself was found.
Why It Matters for TPRM –
- Multi‑agent AI services are increasingly adopted by SaaS vendors and internal development teams, expanding the third‑party attack surface.
- Prompt‑injection can lead to data leakage, unauthorized tool use, and downstream compromise of customer workloads.
- Proper configuration of built‑in Guardrails is essential; mis‑configurations are a common governance gap in third‑party risk programs.
Who Is Affected – Cloud‑based AI platform providers, SaaS vendors integrating Bedrock, enterprises using LLM‑driven workflows, and any third‑party that consumes Amazon Bedrock APIs.
Recommended Actions –
- Verify that all Amazon Bedrock deployments enforce the latest Guardrail policies.
- Incorporate prompt‑injection testing into third‑party security assessments and continuous monitoring.
- Require vendors to provide evidence of AI‑specific security controls (e.g., input sanitization, runtime monitoring).
Technical Notes – The attack chain leverages malicious prompt injection to manipulate agent instructions and tool schemas, ultimately invoking tools with attacker‑supplied inputs. No CVE was identified; the risk is mitigated by Bedrock’s Guardrail feature when correctly enabled. Source: Palo Alto Unit 42 – Amazon Bedrock Multi‑Agent Applications