Study Finds CISOs Securing AI with Outdated Tools, Exposing Critical Risk Gaps
What Happened — A new Pentera "AI and Adversarial Testing Benchmark Report 2026" surveyed 300 U.S. CISOs and senior security leaders and found that most organizations defend AI workloads with legacy security tools and staff whose skills are ill‑suited to modern AI threats. The report highlights pervasive skill shortages, insufficient AI‑specific controls, and a lack of dedicated governance frameworks.
Why It Matters for TPRM —
- Third‑party AI service providers may inherit the same skill gaps, increasing supply‑chain exposure.
- Inadequate AI security can lead to data poisoning, model theft, or covert inference attacks, exposures that propagate to downstream customers.
- Vendors lacking AI‑focused controls may fail to meet contractual security clauses, raising compliance risk.
Who Is Affected — Enterprises across technology/SaaS, financial services, healthcare, retail, and government that integrate AI/ML models from external vendors.
Recommended Actions —
- Review all AI‑related third‑party contracts for explicit AI security requirements.
- Validate that vendors employ AI‑specific security controls (e.g., model integrity checks, adversarial testing).
- Augment internal teams with AI‑security expertise or engage specialized MSSPs.
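One of the vendor controls above, model integrity checking, can start as simply as pinning the cryptographic digest a vendor publishes for each model artifact and refusing to load anything that does not match. A minimal sketch, assuming the vendor publishes SHA‑256 digests out of band (the function names and workflow are illustrative, not from the report):

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model artifacts fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, expected_digest: str) -> bool:
    """Accept a model artifact only if it matches the vendor-published digest."""
    return sha256_digest(path) == expected_digest
```

This catches silent artifact tampering in transit or at rest; it does not, of course, detect a model that was poisoned before the vendor signed it, which is why the report pairs integrity checks with adversarial testing.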
Technical Notes — The study points to reliance on traditional firewalls, endpoint AV, and generic vulnerability scanners—tools that do not detect model‑level attacks such as data poisoning, model inversion, or adversarial examples. No specific CVEs are cited; the risk stems from a systemic skills and tooling gap. Source: The Hacker News
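To illustrate why firewall- and signature-centric tools miss model-level attacks, here is a minimal sketch of one adversarial-example technique, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression scorer. The weights and inputs are invented for illustration; no real model or product is attacked:

```python
import math

def sigmoid(z: float) -> float:
    """Logistic function; maps a raw score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def sign(v: float) -> int:
    return (v > 0) - (v < 0)

def fgsm_perturb(x, y, w, eps):
    """Fast Gradient Sign Method against a linear (logistic) scorer.

    For logistic loss, d(loss)/dx_i = (p - y) * w_i with
    p = sigmoid(w . x); FGSM steps each feature by eps in the
    sign of that gradient to push the input across the boundary.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Hypothetical weights and input: a confidently class-1 point is
# flipped to class 0 by a small, bounded per-feature perturbation.
w = [2.0, -1.0]
x = [1.0, 0.5]
x_adv = fgsm_perturb(x, y=1.0, w=w, eps=0.9)
```

The perturbed input is perfectly well-formed data, so nothing network- or signature-based flags it; detecting this class of attack requires model-aware controls such as the adversarial testing the report recommends.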