I plugged the following prompt into ChatGPT: “write a four-sentence introduction to a blog post weighing the pros and cons of chat gpt and its implications for academia, including the impacts on academic integrity, how professors can detect plagiarism, and the potential for inaccurate results.”
Here is ChatGPT’s result: “In the rapidly evolving landscape of academic discourse, the integration of ChatGPT into educational settings has sparked both excitement and concern. This blog post delves into the dichotomy of the pros and cons surrounding the use of ChatGPT in academia, scrutinizing its implications for academic integrity. As educators increasingly grapple with the challenge of detecting plagiarism in a world where advanced language models can generate content, we explore the tools and methods professors can employ to uphold academic honesty. Additionally, we scrutinize the potential for inaccuracies in results generated by ChatGPT, shedding light on the delicate balance between technological advancement and the reliability of academic inquiry.”
It doesn’t say exactly what I’d like it to, but I think that may be a good thing. We can see the almost confusing style of writing that ChatGPT uses, with long, wordy sentences and 25-cent words where 5-cent words would be more accurate. We can also see how my voice does not come through in the writing, and that ChatGPT does not churn out an entirely accurate interpretation of prompts. I plugged in “how professors can detect plagiarism,” which ChatGPT interpreted as “the tools and methods professors can employ to uphold academic honesty”: a similar concept, but more clinical than the tone I’m aiming for in this blog post. Still, it’s a fairly decent piece of writing for a computer and a huge technological feat.
Certainly, as ChatGPT itself put it, there is “both excitement and concern” regarding AI, especially in academia. In my experience as a student, I have used ChatGPT to summarize difficult articles to make sure I caught all the major points, only to find that ChatGPT summarized the wrong article or just made something up entirely. I have also asked ChatGPT to generate a list of relevant sources to look into, and then, after a very frustrating half hour searching for those sources, I realized ChatGPT had literally generated them: they did not actually exist. However, ChatGPT has been exceptionally helpful in generating discussion questions to make sure I comprehended difficult articles, and it has also been helpful in my language learning endeavors as a kind of digital pen pal to write to in French. (I also think it generated a pretty good introduction paragraph for this blog post!)
In my experience as a professor, ChatGPT has been incredibly terrifying. As I’ve only been teaching for a few years, I don’t have the experience that other professors do to catch the uncanny style of writing that ChatGPT uses, and there have been times when the phrasing in student essays was cyclical, repetitive, and strange, yet I could not prove that ChatGPT (or another AI program) generated the essay. In “ChatGPT and the Rise of Generative AI: Threat to Academic Integrity?,” Damian Okaibedi Eke notes that “for universities to preserve the current assessment methods based on written essays, there is a need to create a reliable tool that can detect AI generated texts. However, designing such a tool and incorporating it into effective or reliable assessment approaches will require a lot of funding and the support or buy-in of OpenAI or other creators of these language models. It may also take time to develop.” These are clearly major constraints on us as educators and as humans. Eke suggests rethinking essays as a form of assessment anyway, but in freshman writing courses, it seems difficult to do anything at all. Setting the learning agenda based on a student’s sample essays is useless if the essays reflect not the student’s abilities but AI’s.
Nina Begus works at the intersection of emerging technology and the humanities, and she said once (or probably many times, but I heard her say it once) that technology is developing rapidly whether the humanities are involved or not, so the humanities had better get involved. That stuck with me. We can talk about how these forms of technology shouldn’t exist, but they do, so we had better find a way to face them that maintains integrity. I’m not sure how we can stress the importance of academic integrity to our students in a way that feels real when there is still no highly accurate way of detecting AI-written texts. Perhaps we could do what Val O’Bryan suggests: “instructors can have [students] use the tool at different stages of the writing process, stopping them along the way to discuss and think critically about the generated content and searching through databases to try to find credible sources that support the generated content.” This is an interesting idea, but I’m still not sure it would prevent undergraduate students with essays due in five courses from turning in an AI-generated essay when deadlines get tight. Still, speaking openly about AI in the classroom seems like a good way to help students understand its limitations.
Several tools have been created to try to catch AI-generated plagiarism, so it is, hopefully, only a matter of time before these tools grow more accurate. In the meantime, it seems to me that the best we can do is aim for open communication about AI in our classrooms.