Meta Found Liable for Child Harm: $375M Verdict Over Instagram & Facebook, $6M LA Verdict Over Platform Addiction
What Happened — A New Mexico jury ordered Meta to pay $375 million for misleading parents about safety on Instagram and Facebook, finding the platforms deliberately pushed sexual content to minors. One day later, a Los Angeles jury held Meta (and Google) liable for designing “addiction machines,” awarding $6 million in damages to a plaintiff who alleged childhood addiction to the services.
Why It Matters for TPRM —
- Court judgments of this size expose vendors to significant financial liability and reputational damage, risks that can cascade into downstream contracts and pricing.
- Demonstrates that courts can find algorithmic recommendation engines unsafe for vulnerable users, a precedent likely to prompt stricter oversight requirements.
- Highlights the need for third‑party risk programs to assess child‑safety, content‑moderation, and design‑ethics controls in SaaS/social‑media providers.
Who Is Affected — Social‑media SaaS platforms, digital‑advertising agencies, brands that run campaigns on Instagram/Facebook, and any organization that relies on Meta’s APIs for user engagement.
Recommended Actions —
- Review contracts with Meta‑related services for liability clauses, indemnification, and audit rights.
- Verify that your organization’s child‑safety and content‑moderation policies align with emerging regulatory expectations.
- Conduct a risk assessment of algorithmic recommendation exposure and consider alternative, lower‑risk channels for youth‑focused outreach.
Technical Notes — The cases focus on algorithmic content recommendation and platform design that amplified sexual imagery and fostered addictive usage patterns. No specific CVEs or malware were involved, but internal memos and engineering testimony revealed deliberate steering of minors toward explicit material. Source: Malwarebytes Labs