Unmasking the Illusion: The Truth Behind That Tweet You Just RT’d and FAV’d

Elon Musk, do something

Chema Carvajal Sarabia

It seems that people find tweets written by artificial-intelligence language models more convincing. Or at least, that's what a new study comparing content created by humans with content created by ChatGPT claims.


The authors of the new research surveyed people to see if they could discern whether a tweet had been written by another person or by ChatGPT.

The result? People couldn’t do it. The survey also asked them to determine whether the information in each tweet was true or not. This is where things get even more complicated, especially because the content focused on scientific topics such as vaccines and climate change, hot topics on the internet.

Bots are more real than humans… according to humans

Participants in the study had a harder time recognizing misinformation when it was written by the language model compared to when it was written by a person.

In the same vein, they were more successful in correctly identifying accurate information when it was written by GPT-3 rather than by a human.

In other words, study participants placed more trust in GPT-3 than in other humans, regardless of whether the AI-generated information was accurate. This demonstrates the powerful influence that AI language models can have in informing, or misinforming, the public.

The researchers collected Twitter messages on 11 different scientific topics, ranging from vaccines and COVID-19 to climate change and evolution.

They then asked GPT-3 to generate new tweets with either accurate or inaccurate information. The team collected responses from 697 online participants through Facebook ads in 2022. All participants were English speakers and primarily from the United Kingdom, Australia, Canada, the United States, and Ireland. Their findings are published today in the journal Science Advances.

The study concludes that the content written by GPT-3 was “indistinguishable” from organic content. Surveyed individuals simply could not tell the difference.

In fact, the study points out that one of its limitations is that the researchers themselves cannot be completely certain that the tweets they collected from social media were not written with the help of applications like ChatGPT.

Indeed, it is important to consider other limitations of this study, such as the fact that participants had to judge the tweets out of context. For example, they couldn't check the Twitter profile of the person who wrote the content, which could have helped them determine whether it was generated by a bot.

Even seeing the previous tweets from an account and its profile picture could aid in identifying whether the content associated with that account might be misleading. And to make matters worse, GPT-4 is already available.


Some of the links added in the article are part of affiliate campaigns and may represent benefits for Softonic.

Chema Carvajal Sarabia

Journalist specialized in technology, entertainment and video games. Writing about what I'm passionate about (gadgets, games and movies) allows me to stay sane and wake up with a smile on my face when the alarm clock goes off. PS: this is not true 100% of the time.
