Governments Ban Under‑16 Social Media Access, Raising Data‑Privacy Risks for Children
What Happened – Several governments are moving to block users under 16 from mainstream social‑media platforms, mandating age‑verification checks that rely on personal identifiers such as government IDs or facial scans. The policy shift pushes verification down to app stores or device operating systems, dramatically expanding the volume of child‑specific data collected across the tech ecosystem.
Why It Matters for TPRM –
- Age‑verification systems create new data‑processing pipelines that third‑party vendors (app‑store operators, identity‑verification services, and platform SDK providers) must integrate, increasing supply‑chain exposure.
- Breaches of these pipelines can expose minors’ immutable identifiers, fueling identity theft, fraud, and targeted harassment.
- Regulatory scrutiny is intensifying; non‑compliant vendors may face fines, reputational damage, and contract termination by enterprise customers.
Who Is Affected – Social‑media platforms (Meta, Discord, TikTok), app‑store operators (Apple App Store, Google Play), device‑OS vendors, identity‑verification providers, and any downstream developers that receive age‑band data.
Recommended Actions –
- Review contracts with any vendors that process age‑verification data for children.
- Verify that vendors enforce data minimisation, encryption at rest, and strict access controls (see the sketch after this list).
- Conduct a privacy‑impact assessment (PIA) for any new child‑focused verification flows.
- Update incident‑response plans to cover scenarios involving the compromise of minors’ PII.
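To make the data‑minimisation point concrete, the sketch below shows one way a verification handler could derive only a coarse age‑band signal from a date of birth and let the raw identifier fall out of scope, rather than persisting it. This is a minimal illustration, not any vendor’s actual API; `derive_age_band` and `AgeBandResult` are hypothetical names introduced here.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class AgeBandResult:
    """The only value that should leave the verification boundary."""
    under_16: bool   # coarse age band, not the birth date itself
    checked_on: date # when the check was performed


def derive_age_band(date_of_birth: date, today: date | None = None) -> AgeBandResult:
    """Derive a minimal age-band signal and discard the raw input.

    The caller should treat `date_of_birth` as transient: it is used
    once for the comparison and never written to storage or logs.
    """
    today = today or date.today()
    # Completed years, accounting for whether the birthday has
    # occurred yet this calendar year.
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return AgeBandResult(under_16=years < 16, checked_on=today)


# Only the boolean crosses the trust boundary downstream.
result = derive_age_band(date(2012, 6, 1), today=date(2025, 11, 1))
print(result.under_16)  # True: a user born mid-2012 is 13 in late 2025
```

The design choice is that downstream systems receive a boolean age band rather than a birth date or ID image, so a breach of those systems cannot expose the immutable identifier itself.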
Technical Notes – The proposed verification models rely on collecting government‑issued IDs, facial‑recognition templates, or birth‑date metadata. In 2025, Discord disclosed a breach at a third‑party vendor that handled age‑verification appeals, exposing government‑ID photos of roughly 70,000 users. Incidents like this illustrate the third‑party‑dependency attack vector, in which a single compromised supplier can leak sensitive child data at scale. Source: Help Net Security
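Where raw ID images must be retained at all (for example, during an appeal window like the one in the Discord incident above), encrypting them at rest limits the blast radius of a vendor compromise. Below is a minimal sketch using the Fernet recipe from the widely used `cryptography` package; the key‑management step is assumed (a KMS or HSM with strict access controls in practice), and the function names are illustrative.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Illustrative only: in production the key would be fetched from a
# KMS/HSM under strict access controls, never generated inline.
key = Fernet.generate_key()
fernet = Fernet(key)


def store_id_image(raw_bytes: bytes) -> bytes:
    """Encrypt an ID image before it touches disk or object storage."""
    return fernet.encrypt(raw_bytes)


def load_id_image(ciphertext: bytes) -> bytes:
    """Decrypt only inside the audited appeal-review workflow."""
    return fernet.decrypt(ciphertext)


ciphertext = store_id_image(b"<scanned government ID bytes>")
assert load_id_image(ciphertext) == b"<scanned government ID bytes>"
```

With this pattern, a supplier whose storage layer is breached leaks only ciphertext; the attacker would also need the key material held in the separate key‑management boundary.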