An online chatbot has been taking the scientific community by storm. However, some are concerned about the consequences of technology emerging before legislation can regulate it.
ChatGPT is an online chatbot that uses artificial intelligence (AI) to answer user questions or summarize articles. ChatGPT is a large language model that mimics human speech patterns, producing responses that read like natural English. The technology comes from natural language processing, an area of artificial intelligence focused on making computers understand and produce readable text.
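At its core, a language model works by predicting which word is likely to come next. The toy sketch below is nothing like ChatGPT's actual architecture, but it illustrates the basic idea with a tiny "bigram" model that simply counts which word tends to follow which in a training text:

```python
from collections import defaultdict, Counter

# Toy illustration (not ChatGPT's real architecture): count which word
# tends to follow which word in a tiny training text, then use those
# counts to "predict" the next word.
text = "the cat sat on the mat . the cat ate . the dog slept .".split()

following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the word seen most often after `prev` in the training text."""
    return following[prev].most_common(1)[0][0]

print(next_word("the"))  # prints "cat" -- it follows "the" most often here
```

Real large language models replace these simple counts with neural networks trained on vast amounts of text, and condition on much longer contexts than a single previous word.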
Google and Meta also have AI-based chatbots, but neither is as well known as ChatGPT, which was released to the public late last year.
The chatbot was created by the company OpenAI, which also made the AI art generator DALL-E. ChatGPT was released on November 30th, and its popularity has skyrocketed ever since. In fact, the chatbot is so popular that when I tried to use it before writing this article, I was greeted with this message:

The chatbot gained over 1 million users in its first week and has been used for homework assignments, software coding, and even Hinge dates.
Scientists have also been using ChatGPT to help them write. Some authors use the chatbot to help make titles for their papers. Others input entire paragraphs and let the AI rewrite sections that were difficult to read.
Surprisingly, ChatGPT has been listed as an author on several scientific papers.
In fact, the technology is so good that in several trials, scientists could not tell the difference between abstracts written by people and abstracts written by the AI.
However, the use of ChatGPT to author scientific papers has been a topic of debate amongst scientists. Here are some of the main arguments for and against the use of ChatGPT:
The Good
ChatGPT can help level the playing field for authors who don’t speak English as their first language. Authors can show ChatGPT a paragraph they wrote and ask it to rewrite any parts that are unclear. ChatGPT can also catch grammatical errors or be used to add length to a paper.
The Bad
Students can use ChatGPT to write essays and pass the work off as their own, raising plagiarism concerns among teachers. Even more concerning, ChatGPT can provide incorrect answers. Users have reported multiple instances of the chatbot confidently giving wrong answers and even failing to complete basic math problems. ChatGPT itself offers a disclaimer: “I am not perfect and may not always have the correct answer to every question. Additionally, the information I provide is only as accurate as the data I have been trained on, which has a fixed cutoff date. This means that I may not be able to provide information on recent events or developments that have occurred since the training data was collected.”
What happens now?
The Fortieth International Conference on Machine Learning (ICML 2023) has announced a ban on AI language tools, including ChatGPT, as authors of submitted papers:
“Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”
Despite this announcement, some readers may still wonder whether we can tell the difference between text written by people and text written by AI. Earlier this month, Edward Tian, a 22-year-old student at Princeton University, offered a solution: GPTZero, software that helps detect material written by artificial intelligence.
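One signal that detectors of this kind have been reported to use is "burstiness": human writing tends to mix long and short sentences, while model output is often more uniform. The sketch below is a toy illustration of that idea only, not GPTZero's actual algorithm, using the spread of sentence lengths as a crude proxy:

```python
import statistics

# Toy sketch of the "burstiness" idea behind some AI-text detectors
# (not GPTZero's actual method): human prose mixes long and short
# sentences, so the spread of sentence lengths can serve as a crude signal.
def burstiness(text):
    """Population standard deviation of sentence lengths (in words)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

human = ("I ran. The experiment failed spectacularly after three long "
         "weeks of careful setup. Oh well.")
uniform = ("The model writes a sentence. The model writes a sentence. "
           "The model writes a sentence.")

print(burstiness(human) > burstiness(uniform))  # prints True
```

A real detector combines many such statistical signals, and none of them are foolproof, which is part of why this debate is ongoing.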
Scott Aaronson, a researcher working on AI safety at OpenAI, the company behind ChatGPT, stated that the company has been working on adding watermarks to GPT-written text. The goal is to embed an imperceptible signal showing that a piece of text was written by ChatGPT.
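To get a feel for how a text watermark could work, here is a toy version of one idea from the research literature (not necessarily OpenAI's scheme): use a hash of the previous word to pseudorandomly split the vocabulary into "green" and "red" words. A watermarking generator quietly prefers green words, so a detector can flag text whose green fraction is far above the roughly 50% expected by chance:

```python
import hashlib

# Toy watermark sketch (an illustration, not OpenAI's actual scheme).
def is_green(prev_word, word):
    """Deterministically color `word` green or red, seeded by `prev_word`."""
    digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words are green

def green_fraction(text):
    """Fraction of words that are 'green' given the word before them."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    greens = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return greens / (len(words) - 1)

# Ordinary text should score near 0.5; a watermarking generator that
# steers toward green words would score much higher, which a detector
# could flag as machine-written.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

Because the split looks random to anyone without the scheme's details, readers would not notice anything unusual about the word choices themselves.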
Sources
- Bowman, E. (2023, January 9). A college student created an app that can tell whether AI wrote an essay. NPR. Retrieved January 25, 2023, from https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism
- ChatGPT Generative Pre-trained Transformer; Zhavoronkov A. Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience. 2022 Dec 21;9:82-84. doi: 10.18632/oncoscience.571. PMID: 36589923; PMCID: PMC9796173.
- Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023 Jan;613(7944):423. doi: 10.1038/d41586-023-00056-7. PMID: 36635510.
- Gpt Generative Pretrained Transformer, Almira Osmanovic Thunström, Steinn Steingrimsson. Can GPT-3 write an academic paper on itself, with minimal human input?. 2022. ⟨hal-03701250⟩
- Hutson M. Could AI help you to write your next paper? Nature. 2022 Nov;611(7934):192-193. doi: 10.1038/d41586-022-03479-w. PMID: 36316468.
- ICML. (n.d.). Clarification on Large Language Model Policy LLM. ICML 2023. Retrieved January 25, 2023, from https://icml.cc/Conferences/2023/llm-policy
- Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models. medRxiv; 2022. DOI: 10.1101/2022.12.19.22283643.
- O’Connor S, ChatGPT. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract. 2023 Jan;66:103537. doi: 10.1016/j.nepr.2022.103537. Epub 2022 Dec 16. PMID: 36549229.
- Stokel-Walker C. AI bot ChatGPT writes smart essays – should professors worry? Nature. 2022 Dec 9. doi: 10.1038/d41586-022-04397-7. Epub ahead of print. PMID: 36494443.