
Many people use ChatGPT as a security advisor: the worst part is the risk they take doing so

Many users rely on ChatGPT for cybersecurity advice, but often share personal data in the process—unwittingly putting themselves at risk of fraud and identity theft.


Agencias

  • June 26, 2025
  • Updated: July 1, 2025 at 9:17 PM

As ChatGPT becomes more integrated into our daily routines, users are turning to it not only for productivity help, but also for cybersecurity advice. But while asking about phishing scams or the best VPN might seem harmless, there’s growing concern over how people are interacting with AI on sensitive topics. The problem? Many are inadvertently exposing their personal data in the process.

People ask ChatGPT about cybersecurity — but also share too much

According to research by NordVPN, users frequently ask ChatGPT about threats like phishing, personal data protection, or even smart device vulnerabilities. In theory, that’s positive: it shows people are trying to stay informed. However, the way users phrase their questions often includes sensitive information, such as email addresses, passwords, or even banking details.

This raises a serious issue. Services like ChatGPT can store conversations and, depending on the user's settings, use them to improve future models, which means there is a non-negligible risk that personal data ends up in training material. While safeguards exist, researchers warn that well-crafted prompts from malicious actors might extract such information, exposing users to fraud, identity theft, or phishing.
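The practical takeaway is to scrub sensitive details before they ever reach a chatbot. As a rough illustration (not a real PII detector, and not affiliated with NordVPN or OpenAI), a couple of regular expressions can catch the most obvious leaks, such as email addresses and card-like numbers:

```python
import re

def redact_prompt(prompt: str) -> str:
    """Strip common sensitive patterns from a prompt before sending it to a chatbot.

    Illustrative sketch only: robust PII detection needs far more than two regexes.
    """
    # Replace anything shaped like an email address.
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", prompt)
    # Replace long digit runs (card or account numbers), allowing spaces or dashes.
    prompt = re.sub(r"\b(?:\d[ -]?){13,19}\b", "[NUMBER]", prompt)
    return prompt

print(redact_prompt(
    "Is this phishing? It was sent to jane.doe@example.com "
    "and asks for my card 4111 1111 1111 1111."
))
```

The question itself ("is this phishing?") survives intact, so the model can still give useful advice without ever seeing the user's real address or card number.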

Misunderstanding cybersecurity leaves users vulnerable

Beyond the risk of oversharing, some questions reveal a significant gap in basic cybersecurity understanding. Users ask whether whispering their password out loud or deleting a virus with a keypress is effective protection. This lack of digital literacy underscores a worrying truth: many users still don’t grasp the fundamentals of staying safe online.

As AI grows more influential, we need to rethink how we use it for sensitive topics. Until that happens, the line between seeking help and exposing ourselves will remain dangerously thin.
