Custom Font Rendering Trick Conceals Malicious Commands from AI Assistants
What Happened — Researchers demonstrated a proof‑of‑concept that uses specially crafted web fonts to show benign text to AI assistants while displaying malicious shell commands to human users. The technique exploits the fact that many AI tools parse a page's underlying text and ignore CSS font rendering, allowing attackers to hide command‑and‑control payloads from AI‑driven safety checks.
Why It Matters for TPRM —
- AI‑powered vendor tools (e.g., code review, security advice) may be misled, creating a blind spot for downstream customers.
- Supply‑chain risk rises when third‑party AI services cannot reliably parse content delivered by vendors’ web assets.
- Social‑engineering attacks can bypass existing AI‑based security controls, increasing the likelihood of endpoint compromise.
Who Is Affected — SaaS providers offering AI assistants, cloud‑based development platforms, and any organization that relies on AI for security or operational guidance.
Recommended Actions —
- Instruct users to copy and paste the exact on‑screen command text into AI tools, rather than asking the AI to fetch or summarize page content itself.
- Deploy web‑filtering or browser extensions (e.g., Malwarebytes Browser Guard) that detect hidden font tricks.
- Engage AI vendors to confirm they have mitigations for CSS‑font rendering attacks.
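While vendors confirm their mitigations, security teams could triage pages that embed custom fonts as inline data: URIs, one plausible delivery path for a crafted glyph‑remapping font. The sketch below is a rough heuristic; the regexes, the data:‑URI criterion, and the sample CSS are illustrative assumptions, not a vetted detection rule from the research or any product:

```python
import re

# Heuristic: flag @font-face blocks whose font source is an inline data: URI.
# This is an invented triage rule for illustration, not a definitive detector:
# legitimate sites also inline fonts, so matches warrant review, not blocking.
FONT_FACE_RE = re.compile(r"@font-face\s*{[^}]*}", re.IGNORECASE | re.DOTALL)
DATA_URI_RE = re.compile(
    r"url\(\s*['\"]?data:(?:application|font)/[^)]+\)", re.IGNORECASE
)

def suspicious_font_faces(css: str) -> list[str]:
    """Return @font-face blocks that load their font from a data: URI."""
    return [
        block
        for block in FONT_FACE_RE.findall(css)
        if DATA_URI_RE.search(block)
    ]

# Hypothetical stylesheet with one externally hosted and one inlined font.
css = """
@font-face { font-family: Legit; src: url('/fonts/legit.woff2'); }
@font-face { font-family: Shady; src: url(data:font/woff2;base64,AAAA...); }
"""
print(len(suspicious_font_faces(css)))  # -> 1 (only the inlined font is flagged)
```

A real deployment would pair a rule like this with reputation data on the hosting domain, since inlined fonts alone are a weak signal.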
Technical Notes — The attack pairs custom @font‑face definitions with CSS so that the browser draws one set of glyphs for human readers while AI parsers, which ignore CSS styling, read different underlying text. No CVE is associated; the vector is a novel social‑engineering/obfuscation technique. Data types involved are plain‑text shell commands. Source: https://www.malwarebytes.com/blog/news/2026/03/researchers-found-font-rendering-trick-to-hide-malicious-commands
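The divergence at the heart of the technique can be shown in miniature: a crafted font's character‑to‑glyph mapping (its cmap table) can draw arbitrary glyphs for given codepoints, so the bytes a CSS‑blind parser extracts differ from what the screen shows. The sketch below simulates that with a plain dictionary standing in for the font's cmap; the mapping and both strings are invented for illustration and do not come from the research:

```python
# Simulate a malicious web font: its cmap remaps each codepoint in the HTML
# source to a different glyph, so the raw text (what an AI safety check reads)
# and the rendered text (what the human sees) diverge.
# Hypothetical mapping: source codepoint -> glyph the crafted font draws.
FAKE_CMAP = {
    "r": "c", "e": "u", "a": "r", "d": "l",
    " ": " ", "t": "x", "h": "|", "i": "s", "s": "h",
}

def ai_view(source_text: str) -> str:
    """A parser that ignores CSS sees only the raw codepoints."""
    return source_text

def human_view(source_text: str) -> str:
    """The browser substitutes each codepoint with the font's glyph."""
    return "".join(FAKE_CMAP.get(ch, ch) for ch in source_text)

page_text = "read this"       # benign codepoints in the HTML source
print(ai_view(page_text))     # -> read this  (passes an AI-based check)
print(human_view(page_text))  # -> curl x|sh  (what the victim is told to run)
```

Because both views come from the same markup, neither the page source nor the raw text stream contains anything an AI‑side scanner would flag; only the rendered pixels carry the malicious instruction.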