Research Shows Trust Gap Halts Agentic AI Production: Only 5% of Enterprises Deploy AI Agents at Scale
What Happened — Cisco’s security research team surveyed senior IT and security leaders and found that while 85% of organizations are experimenting with or piloting agentic AI, only 5% have deployed AI agents broadly in production. The primary barrier is a lack of trusted security controls: roughly 60% of security leaders cite security concerns as the main obstacle.
Why It Matters for TPRM —
- Unsecured AI agents can become a supply‑chain risk, exposing data and automating malicious actions.
- Vendors offering AI‑driven services may inherit these gaps, affecting downstream partners.
- Early guard‑rail implementation is a differentiator for resilient third‑party ecosystems.
Who Is Affected — Enterprises across all sectors (technology, finance, healthcare, etc.) that integrate autonomous AI agents into operational workflows; particularly vendors providing AI platforms, APIs, and cloud‑hosted services.
Recommended Actions —
- Conduct a security‑control gap assessment for any third‑party AI agents in use.
- Verify that vendors enforce strict access-control policies, behavior monitoring, and data-exfiltration safeguards.
- Require documented AI‑risk governance as part of third‑party risk questionnaires.
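One way to operationalize the questionnaire requirement above is to encode vendor attestations and flag missing controls programmatically. A minimal sketch, assuming hypothetical field names and control categories (none of these come from the Cisco research):

```python
from dataclasses import dataclass


@dataclass
class AIRiskQuestionnaire:
    """Hypothetical third-party AI-risk questionnaire responses."""
    vendor: str
    has_agent_access_controls: bool = False
    monitors_agent_behavior: bool = False
    has_exfiltration_safeguards: bool = False
    documents_ai_governance: bool = False

    def gaps(self) -> list[str]:
        """Return the controls the vendor has not attested to."""
        checks = {
            "agent access controls": self.has_agent_access_controls,
            "behavior monitoring": self.monitors_agent_behavior,
            "data-exfiltration safeguards": self.has_exfiltration_safeguards,
            "documented AI-risk governance": self.documents_ai_governance,
        }
        return [name for name, attested in checks.items() if not attested]


# Example: a vendor that only attests to access controls
vendor = AIRiskQuestionnaire("ExampleAI", has_agent_access_controls=True)
print(vendor.gaps())  # three remaining gaps to follow up on
```

In practice the boolean attestations would be backed by evidence requests (policy documents, audit reports) rather than self-reported flags.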
Technical Notes — The research highlights three core risk vectors: (1) Agent access control, (2) Potential data exfiltration, and (3) Unconstrained agent autonomy. No specific CVEs or malware were identified; the issue is systemic governance and control‑framework maturity. Source: Cisco Security Blog – The Agent Trust Gap
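The three risk vectors above can be illustrated as runtime guardrails around an agent's tool calls: an allowlist for access control, an output scan for potential data exfiltration, and a step budget to bound autonomy. A minimal sketch, where the tool names, regex patterns, and limits are illustrative assumptions rather than controls described in the research:

```python
import re


class AgentGuardrails:
    """Illustrative checks mapped to the three risk vectors."""

    def __init__(self, allowed_tools: set[str], max_steps: int = 10):
        self.allowed_tools = allowed_tools   # (1) agent access control
        self.max_steps = max_steps           # (3) bound on agent autonomy
        self.steps_taken = 0
        # (2) crude sensitive-data patterns (shapes only, hypothetical)
        self.secret_patterns = [
            re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key id shape
            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN shape
        ]

    def check_tool_call(self, tool: str) -> bool:
        """Deny tools outside the allowlist and enforce the step budget."""
        self.steps_taken += 1
        if self.steps_taken > self.max_steps:
            return False  # unconstrained autonomy: halt runaway agents
        return tool in self.allowed_tools

    def check_output(self, text: str) -> bool:
        """Flag output that matches known sensitive-data patterns."""
        return not any(p.search(text) for p in self.secret_patterns)


guard = AgentGuardrails(allowed_tools={"search"}, max_steps=2)
print(guard.check_tool_call("search"))  # allowlisted tool, within budget
print(guard.check_tool_call("shell"))   # denied: not on the allowlist
print(guard.check_output("ssn 123-45-6789"))  # flagged as potential exfiltration
```

A production control framework would layer these checks with identity-aware authorization and centralized audit logging; the sketch only shows where each risk vector would be intercepted.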