Bot or scientist? The controversial use of ChatGPT in science

For some, they are a threat. For others, an opportunity. Chatbots based on artificial intelligence hold centre stage in the international debate. Meanwhile, top scientific journals have announced new editorial policies that ban or restrict their use in writing scientific papers.

They compose songs, paint pictures and write poems. In the last few years, artworks produced by artificial intelligence have made giant strides, getting closer and closer to the mastery and sensitivity of human beings. Notably, neural nets have achieved the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by humans. Large language models (LLMs) such as the popular ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. The big worry in the research community – according to an editorial published in Nature – is that students and scientists could deceitfully pass off AI-written text as their own, or use LLMs simplistically (for example, to conduct an incomplete literature review) and produce work that is unreliable.

How does ChatGPT work?

ChatGPT (Generative Pre-trained Transformer) is a chatbot launched in November 2022 by OpenAI, the same Californian company behind the popular DALL-E software, which generates digital images from natural-language descriptions. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. This complex neural net has been trained on a titanic data set of text: in GPT-3’s case, The New York Times Magazine reports, “roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books”. GPT-3 is the most celebrated of the LLMs, but it is neither the first nor the only one. Its success owes much to the fact that OpenAI has made the chatbot free to use and easily accessible to people without technical expertise. Millions are using it, most often for fun, fuelling excitement but also worries about these tools.
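To give a concrete sense of how low that barrier is, a few lines of Python are enough to query the model programmatically. The sketch below assumes OpenAI’s Python client as it existed around the time of writing (the pre-1.0 interface) and an API key stored in the environment; the model name and prompt are purely illustrative.

# Minimal sketch: querying a GPT-style model through OpenAI's Python client.
# Assumes the pre-1.0 `openai` package and OPENAI_API_KEY set in the
# environment; the model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this abstract in two sentences: ..."},
    ],
    temperature=0.7,  # higher values make the output more varied
)

print(response["choices"][0]["message"]["content"])

The same handful of lines, pointed at a different prompt, can draft an essay, a literature summary or a block of code – which is exactly what makes the tool both exciting and, for editors, worrying.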

Editors’ position

Several preprints and published articles have already credited ChatGPT with formal authorship. In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 63% of these fakes. “That’s a lot of AI-generated text that could find its way into the literature soon”, commented Holden Thorp, editor-in-chief of the Science family of journals, underlining that “our authors certify that they themselves are accountable for the research in the paper. Still, to make matters explicit, we are now updating our license and Editorial Policies to specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author”. In recent weeks, the publishers of thousands of scientific journals – including Science itself – have banned or restricted contributors’ use of advanced AI-driven chatbots amid concerns that they could pepper the academic literature with flawed and even fabricated research.

“An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs”, said Magdalena Skipper, editor-in-chief of Nature. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, where appropriate, she added. “From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner”, the journal’s editorial warned.

Elsevier, which publishes about 2,800 journals, including Cell and The Lancet, has taken a similar stance to Nature’s, according to Ian Sample, science editor of the Guardian. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions”, said Elsevier’s Andrew Davis, who added that authors must declare whether and how they have used AI tools.

Distinguishing humans and AI

In response to the concerns of the scientific community, OpenAI announced on its blog that it had trained a new classifier to distinguish between text written by a human and text written by AIs from a variety of providers: “While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that a human wrote AI-generated text: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”. By the company’s own admission, however, the classifier is not fully reliable. In internal evaluations on a “challenge set” of English texts, it correctly identified 26% of AI-written text as “likely AI-written” (true positives), while incorrectly labelling human-written text as AI-written 9% of the time (false positives).
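Those two percentages are easy to misread, because how much a “likely AI-written” flag really tells you depends on how common AI-written text is in the first place. The back-of-the-envelope Python sketch below applies Bayes’ rule to OpenAI’s reported rates; the assumed base rates (the share of texts that really are AI-written) are illustrative, not figures from the company.

# Back-of-the-envelope: how informative is a "likely AI-written" flag?
# tpr and fpr are the rates OpenAI reported; the base rates are
# illustrative assumptions, not figures from the company.
tpr = 0.26  # share of AI-written texts correctly flagged (true positives)
fpr = 0.09  # share of human-written texts wrongly flagged (false positives)

for base_rate in (0.05, 0.25, 0.50):
    flagged = tpr * base_rate + fpr * (1 - base_rate)
    precision = tpr * base_rate / flagged  # P(AI-written | flagged), Bayes' rule
    print(f"AI share {base_rate:.0%}: {flagged:.1%} of texts flagged, "
          f"{precision:.1%} of flags correct")

Even in the extreme scenario where half of all texts were machine-generated, roughly one flag in four would still point at a human author, which helps explain why OpenAI itself cautions against relying on the classifier alone.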
