Tenzai Warns AI‑Generated Code Increases Security Gaps; Agentic AI Testing Offers Scalable Protection
What Happened — At RSAC 2026, Tenzai co‑founder and CEO Pavel Gurvich warned that AI‑driven code generation is accelerating application delivery faster than traditional security testing can keep pace, creating new, often‑undetected vulnerabilities. He promoted autonomous "agentic" AI testers that evaluate source code, deployment configurations, and integration points at machine speed, providing continuous, parallel coverage.
Why It Matters for TPRM —
- AI‑generated code can introduce novel attack surfaces that legacy testing tools miss.
- Third‑party software suppliers adopting rapid AI‑assisted development may expose their customers to hidden risks.
- Scalable, AI‑native testing can become a required control for vendors handling critical applications.
Who Is Affected — Technology and SaaS providers, cloud‑native development platforms, and any organization that outsources software development to AI‑augmented teams.
Recommended Actions —
- Review your vendors’ secure‑coding and testing practices for AI‑generated artifacts.
- Require evidence of continuous, automated security testing (e.g., agentic AI tools) as part of third‑party risk assessments.
- Update your security policies to include AI‑specific code review and configuration checks.
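To make the "AI‑specific code review" action concrete, a minimal static check can flag constructs that commonly slip into AI‑generated code before it merges. The sketch below is purely illustrative (it is not Tenzai's product or any vendor's tool) and uses Python's standard `ast` module to detect `eval()`/`exec()` calls and subprocess invocations with `shell=True`; a real control would layer a full SAST tool on top of checks like this.

```python
# Illustrative sketch only: a tiny static check for risky constructs
# that often appear in AI-generated Python code. Not a substitute for
# a full SAST pipeline.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, description) findings for risky constructs."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Flag direct eval()/exec() calls.
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            findings.append((node.lineno, f"use of {node.func.id}()"))
        # Flag any call passing shell=True (e.g. subprocess.run).
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                findings.append((node.lineno, "call with shell=True"))
    return findings

if __name__ == "__main__":
    sample = (
        "import subprocess\n"
        "subprocess.run(cmd, shell=True)\n"
        "result = eval(user_input)\n"
    )
    for line, desc in find_risky_calls(sample):
        print(f"line {line}: {desc}")
```

A check like this can run in CI on every pull request, giving reviewers a fast signal on machine‑authored changes even before deeper dynamic or configuration testing runs.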
Technical Notes — The discussion focuses on the security implications of AI‑generated code and the use of autonomous AI agents for static, dynamic, and configuration testing. No specific CVEs or vulnerabilities were disclosed. Source: DataBreachToday