Scientists warn that AI threatens science itself

Chema Carvajal Sarabia

Scientists warn that AI threatens science itself in a profound and highly dangerous way. This isn’t a desperate outcry from people afraid of losing their jobs, but a pointed warning aimed at today’s AI developers.


In a new essay, researchers from the Oxford Internet Institute argue that scientists should refrain from using tools powered by large language models (LLMs), such as chatbots, to assist in scientific research. This is because the AI’s inclination to invent facts, combined with the human tendency to anthropomorphize these human-sounding word engines, could lead to a surge of information failures, an outcome that could ultimately threaten the very fabric of science.

“Our tendency to anthropomorphize machines and to trust models as human-like tellers of truth, consuming and disseminating the misinformation they produce in the process,” the researchers write in the essay, published this week in the journal Nature Human Behaviour, “is uniquely concerning for the future of science.”

The scientists’ argument is based on the reality that LLMs and the numerous bots powered by this technology are not primarily designed to be truthful. As they write in the essay, appearing truthful is merely “one aspect by which the utility of these systems is measured.” Characteristics such as “utility, harmlessness, technical efficiency, cost-effectiveness [and] customer adoption” also matter.

“LLMs are designed to produce useful and persuasive responses,” they continue, “without any overriding guarantee regarding their accuracy or alignment with facts.”

In short, if a large language model, trained above all to be persuasive, provides a convincing yet not necessarily accurate answer, the persuasiveness of the result will outweigh its inaccuracy. In the proverbial brain of an AI, simply saying “I don’t know” is less useful than offering an incorrect answer.

The Eliza effect, the great danger

But, as the Oxford researchers explain, AI hallucination is only half of the problem. The other half is the Eliza effect: the well-documented human tendency to read too much into AI outputs that sound human, a consequence of our deeply ingrained propensity to anthropomorphize everything around us.

Because of this effect, we’re already inclined to place too much trust in AI; when coupled with the often confident tone adopted by these chatbots, we have the perfect recipe for misinformation.

Ultimately, when a chatbot provides us with a perfectly packaged, expert-sounding paraphrase in response to a query, we’re likely to apply less critical thinking and fact-checking than we would when conducting the research ourselves.

It’s worth highlighting that the scientists point to “zero-shot translation” as a scenario in which AI results can be somewhat more reliable. In the words of Brent Mittelstadt, an Oxford professor and AI ethics specialist, it involves “a set of inputs that contain reliable information or data, along with a request to do something with that data.”

“It’s called zero-shot translation because the model hasn’t been specifically trained to deal with that type of request,” Mittelstadt added. In other words, the model is more or less rearranging and analyzing a very limited and reliable set of data and is not used as a vast knowledge center akin to the internet. But that would certainly limit its use cases and require a more specialized knowledge of AI technology, something very different from loading up ChatGPT and firing off some research questions.
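To make that distinction concrete, here is a minimal Python sketch of the pattern Mittelstadt describes, in which trusted data travels inside the prompt and the model is only asked to rearrange it. The ask_llm helper and the sample data are hypothetical stand-ins for whatever chat API you use; nothing here comes from the essay itself.

```python
# Minimal sketch of the "zero-shot translation" pattern: the prompt carries
# its own trusted data, and the model is asked only to restructure it,
# not to recall facts from its training set.

def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM chat API and return its reply."""
    raise NotImplementedError("wire this up to the chat API of your choice")

# Risky pattern: the model is treated as a knowledge base and may hallucinate.
open_ended = "What binding affinities were reported in the 2023 study?"

# Safer pattern: reliable data is supplied with the request, and the model
# is constrained to work only with what it has been given.
trusted_csv = (
    "compound_id,binding_affinity_nM\n"
    "A1,12.4\n"
    "A2,87.9\n"
)
constrained = (
    "Using ONLY the CSV data below, state which compound binds more strongly "
    "and by roughly what factor. Do not add information that is not in the data.\n\n"
    + trusted_csv
)

# answer = ask_llm(constrained)  # the model rearranges given data instead of inventing it
```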

An ideological battle

Furthermore, according to the researchers, there’s an ideological battle at the heart of this debate about automation. After all, science is a profoundly human activity.

Outsourcing too much of the scientific process to the automated work of AI, the researchers say, could undermine that ingrained humanity. And is that something we can truly afford to lose?

“Do we really want to diminish the opportunities for writing, critical thinking, creating new ideas and hypotheses, grappling with the complexities of theory, and combining knowledge in creative and unprecedented ways?” write the researchers. “These are intrinsically valuable features of curiosity-driven science.”

“They are not something to be cheaply delegated to incredibly impressive machines,” they continue, “which still remain unable to distinguish fact from fiction.”

Chema Carvajal Sarabia
Journalist specializing in technology, entertainment and video games. Writing about what I'm passionate about (gadgets, games and movies) allows me to stay sane and wake up with a smile on my face when the alarm clock goes off. PS: this is not true 100% of the time.
