ChatGPT is becoming easier to deceive, according to experts

Daniel García
OpenAI has faced an increasingly significant problem with ChatGPT since the arrival of GPT-4: although the model improves its capabilities across the board and is smarter than ever, it is also easier to deceive and manipulate. Ultimately, the more “human” the AI becomes, the fewer defenses it has against deception and manipulation through carefully crafted prompts.

It seems that Artificial Intelligence will continue to be a hot topic in the short and medium term, given the massive investments that companies are making to develop these systems. However, progress is not always linear, and sometimes a step forward can mean several steps back.


More reliable, but more manipulable

Over the months, ChatGPT has managed to improve in another heavily criticized area: producing responses containing false information. However, as revealed by research from experts at the University of Illinois Urbana-Champaign, Stanford University, the University of California, Berkeley, Microsoft Research, and the Center for AI Safety, this advance has become a double-edged sword, making the AI more susceptible to user manipulation.

While it is now less likely to promote toxic opinions or hate speech, it is more prone to leaking users’ personal data and the private information it has collected. It is worth noting that OpenAI trained its software on all kinds of content found on the internet, regardless of copyright permissions, including users’ private data.

Challenges and Dangers of AI

On a social level, Artificial Intelligence itself can also be a double-edged sword. On one hand, there are uses pursued by Google through new devices like the Pixel 8, which aim to frame this technology as a tool that improves users’ quality of life. On the other hand, big-money companies, in an effort to cut costs, seek to replace human workers with Artificial Intelligence, while the technology is also exploited for impersonation (the well-known “deepfakes”) and cybercrime.

This has led to both scenarios playing out across various fields in 2023: significant advances in the quality of services offered to users, and layoffs at many companies that have decided to turn Artificial Intelligence into a tool for maximum profitability. That is why, looking ahead to the coming months and years, one of the biggest challenges in this area falls to institutions: deciding how much freedom AI should have and how far its impact on the job market should reach.

Daniel García
A journalism graduate, Daniel specializes in video games and technology. He currently writes for Andro4all and NaviGames, and has written for other Difoosion portals such as Alfa Beta Juega and Urban Tecno. He enjoys keeping up with current affairs, as well as reading, video games, and any other form of cultural expression.
