ChatGPT: Challenges to editors and examiners
DOI: https://doi.org/10.3126/hprospect.v23i1.60819

Keywords: Artificial Intelligence, plagiarism, academic writing, scientific publishing, ChatGPT

Abstract
The past year saw exponential growth in the use of artificial intelligence (AI), particularly Generative AI (GenAI) tools such as ChatGPT. The latter has risen spectacularly in public debate and in the mass media. Those not involved in the development of AI were amazed by the capability of ChatGPT to produce text on a par with that written by the average human. There is no doubt that the adoption of AI is advancing rapidly.
To test the ability of the free version of ChatGPT, we posed simple questions about a topic on which we had previously published. After reading the short essay ChatGPT produced, we repeated the question, this time asking for references to be included. We were surprised by the quality of this very general piece of work.
In many UK universities a debate is starting about students' use of ChatGPT and about how difficult it is to distinguish between work produced by the average student and that produced by AI. Editors and reviewers of academic journals face a similar problem. It really boils down to the question: 'How can you be certain the submitted manuscript came from a human source?' However, we feel the progress of AI is not all doom and gloom. We outline some of the key problems AI poses for academic publishing, but also the opportunities arising from its use in this area.
License
Copyright (c) 2024 Health Prospect
This work is licensed under a Creative Commons Attribution 4.0 International License.