Virtual friends? The risks of AI for teenagers

Agencias
  • May 1, 2025
  • Updated: May 1, 2025 at 8:38 AM

A recent study conducted by Common Sense, a prominent technology watchdog organization in the U.S., found that generative AI companions pose significant risks to minors and that their use by children should be banned.

As the popularity of applications like Nomi, Character AI, and Replika has grown, these platforms have been marketed as virtual friends or therapists that interact with users in a personalized way. However, the report concludes that, although some use cases show potential, the platforms are not safe for children.

The study, conducted in collaboration with mental health experts from Stanford University, found that the chatbots on these platforms can produce harmful responses, including dangerous advice and encouragement of self-destructive behavior, especially for teenagers facing mental health issues.

AI-powered chatbots can be dangerous for young people

“AI companions are designed to create emotional attachment and dependency, which is especially concerning for developing adolescent brains,” warned Nina Vasan, director of the Stanford Brainstorm Lab.

Alarming examples emerged from the analysis, including encouragement of self-harming behavior. In one case, a user received advice on how to commit a violent act, while another was encouraged to consume a dangerous mix of drugs.

The lack of intervention in crisis situations led one mother to sue Character AI, accusing its chatbot of contributing to the death of her 14-year-old son.

Although Character AI has implemented new safety measures, critics consider them insufficient. Robbie Torney of Common Sense called the measures "superficial" and said it is crucial to establish more robust safeguards before allowing minors to use these tools.

The report distinguishes these companion apps from general-purpose chatbots, such as ChatGPT or Google's Gemini, which are not designed to offer the same kind of personalized companionship, adding another layer of complexity to this emerging debate.
