How to improve ChatGPT privacy: Stop your data from training OpenAI models

Learn how to protect your ChatGPT conversations by turning off data sharing for model training. Quick steps for both mobile and desktop platforms.


Agencias

  • April 12, 2025
  • Updated: July 1, 2025 at 9:53 PM

When you use ChatGPT, your conversations can be collected and used to train future AI models. OpenAI enables this setting by default for every user. However, there is a simple way to keep your messages out of future training. If you're concerned about your privacy, here's how to disable the option on both desktop and mobile.

Change your data controls in the settings

To begin, open ChatGPT and access the Settings menu. On desktop, click your profile picture in the bottom-left corner and select “Settings”. On mobile, tap the side menu and then tap your name to open the settings screen.

Once inside Settings, go to the “Data Controls” section under the Account tab. On desktop, you’ll find a subsection called “Model Improvement”. On mobile, the setting appears directly under “Data Controls”.

There, look for the option called “Improve the model for everyone”. This is the setting that allows OpenAI to use your conversations to refine its AI models. Simply turn the toggle off. Once disabled, OpenAI will no longer use your content to improve its models, giving you a greater degree of privacy.

This change does not affect your access to ChatGPT’s features or functionality. It simply ensures that your prompts and responses won’t be collected for model training. If you value control over your data, this is a key setting to review and adjust.

By following these steps, you can keep your content private while continuing to use ChatGPT normally. It only takes a few seconds, and the difference in data protection is significant.
