Scientists’ use of AI, specifically ChatGPT, to write academic publications has sparked outrage and concern in academic circles, according to Futurism.
The episode raises questions about AI’s entry into academia, pointing to problems such as dishonest publishing, corrupted review procedures, and an economic model that rewards inauthentic work.
AI-generated papers have turned up not only in obscure journals but also in reputable publications. Scholars have demonstrated this on X and other social media platforms by searching Google Scholar for telltale AI-generated phrases such as “According to the information I had at the time” and “I do not have the most recent data.”
Some of the journals flagged by concerned researchers may be predatory, while others, such as the reputable journal Surfaces and Interfaces, may have published AI-generated content inadvertently.
For example, one publication highlighted in Bellingcat researcher Koltai’s work included telltale signs of AI in its introduction, suggesting that editorial oversight was weak during the peer-review process.