ZDNet Highlights 7 Default‑Off ChatGPT Settings That Enhance Privacy and Security
What Happened — OpenAI’s ChatGPT interface ships with several privacy‑ and usability‑related settings disabled by default. A ZDNet Security article (Mar 21 2026) outlines seven of these options—appearance tweaks, model selection, ad controls, memory/history toggles, and more—explaining how to enable them for a more secure, personalized experience.
Why It Matters for TPRM —
- Default‑off privacy controls can expose conversational data to unnecessary retention or profiling.
- Unrestricted model selection may lead to higher costs or inadvertent use of older, less secure model versions.
- Enabling ad‑control and history settings reduces attack surface for data leakage and improves compliance with data‑handling policies.
Who Is Affected — SaaS providers, enterprise users of generative AI, and any third‑party risk program that relies on OpenAI’s APIs (technology, finance, healthcare, education, etc.).
Recommended Actions —
- Review your organization’s OpenAI account settings; enable memory/history limits and ad‑personalization controls.
- Document the chosen model version and ensure it aligns with your security and cost policies.
- Incorporate these configuration checks into your vendor risk assessment checklist for AI services.
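The configuration checks above can be folded into an automated checklist. A minimal sketch follows; the setting names are illustrative placeholders (OpenAI does not expose these UI toggles through a public API), so observed values would come from a manual review or internal inventory:

```python
# Hypothetical sketch: encode the recommended ChatGPT settings as a
# vendor-risk checklist and flag deviations. Setting names below are
# illustrative placeholders, not an official OpenAI API.

RECOMMENDED_SETTINGS = {
    "memory_enabled_with_limits": True,   # memory/history limits applied
    "chat_history_retention": True,       # retention controls reviewed
    "ad_personalization_disabled": True,  # ad-personalization turned off
    "model_version_documented": True,     # chosen model recorded in policy
}

def audit_settings(observed: dict) -> list:
    """Return checklist items where the observed config deviates."""
    findings = []
    for setting, expected in RECOMMENDED_SETTINGS.items():
        actual = observed.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected}, found {actual}")
    return findings

# Example: an account that left ad-personalization controls at default
observed = {
    "memory_enabled_with_limits": True,
    "chat_history_retention": True,
    "ad_personalization_disabled": False,
    "model_version_documented": True,
}
print(audit_settings(observed))
```

Each finding maps back to a checklist item, which makes the output easy to attach to a vendor risk assessment record.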
Technical Notes — The settings are accessed via the “Personalization” or “Settings” panels in the web or mobile UI. No CVEs are involved; the risk is operational: excessive data retention, inadvertent model downgrade, and exposure to targeted ads. Source: ZDNet Security article.