Fake scientific abstracts produced by OpenAI’s sophisticated chatbot fooled scientists into believing they were genuine nearly one-third of the time, according to a new study, raising questions about the future of artificial intelligence.
Researchers at Northwestern University and the University of Chicago instructed ChatGPT to generate false research abstracts based on 10 authentic ones published in medical journals and then fed the fakes to two detection programs that attempted to distinguish them from authentic reports.
In a blinded study, they also provided examiners with both fake and authentic papers and asked them to distinguish between the two.
These assessors correctly identified ChatGPT-created abstracts 68% of the time but misidentified 14% of authentic abstracts as fake.
Even though the phony papers were 100% original according to a program that detects plagiarism, only 8% adhered to the formatting standards required by scientific journals.
The reviewers noted that the fake papers were “vague and had a formulaic writing style,” yet still found it “surprisingly difficult” to distinguish the real papers from the false ones, the study reports.
The researchers therefore urge scientific journals and medical conferences to uphold “rigorous scientific standards” and to run submitted papers through AI output detectors during review.
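The article does not describe how the detection programs in the study actually work (real detectors rely on trained language models). As a purely hypothetical illustration of one crude signal such a tool might weigh, the toy Python heuristic below (all names are my own invention) scores sentence-length uniformity, a rough proxy for the “formulaic writing style” the reviewers described:

```python
import re
import statistics

def uniformity_score(text: str) -> float:
    """Return the standard deviation of sentence lengths in words.

    Lower values mean more uniform sentences, which this toy
    heuristic treats as weak evidence of machine generation.
    Real AI-output detectors are far more sophisticated.
    """
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

formulaic = "The method works. The data is good. The result is clear."
varied = ("We tried. Against every expectation, the second experiment "
          "produced a far larger effect than anything reported before.")
# Formulaic text scores lower (more uniform) than varied text.
print(uniformity_score(formulaic) < uniformity_score(varied))
```

This is only a sketch of the general idea; production detectors combine many learned features rather than a single statistic.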
Catherine Gao, a researcher at Northwestern University, stated, “This is not someone reading an abstract in the wild,” adding, “The fact that our reviewers still missed the AI-generated ones 32% of the time means these [generated] abstracts are really good.”
OpenAI’s ChatGPT, an AI-powered chatbot — software designed to mimic human conversation — has been hailed as the most advanced chatbot ever and has raised eyebrows since its release in November, writing poems, telling stories that are “eerily good,” generating Python computer code, and writing college-level essays. The program, which is free to use, comes after companies such as Microsoft and Meta experimented with their own chatbots but failed to prevent the spread of bigoted and misogynistic language. Since its release, however, some critics have cautioned that ChatGPT’s output is too similar to human writing and could be used by students to cheat on coursework and by businesses to replace programmers and journalists with artificial intelligence. The New York City public school system prohibited it last week due to concerns about cheating.
Microsoft is reportedly nearing a $10 billion investment agreement that would value OpenAI at $29 billion and give Microsoft a 49% stake in the company as well as 75% of its profits until the massive investment is recovered. Two additional venture capital firms, Founders Fund and Thrive Capital, have reportedly been in discussions to acquire $300 million worth of OpenAI shares from existing shareholders (OpenAI is not publicly traded).