A picture of a rat, another showing a human leg with too many bones, suspicious wording… Recent examples point to the growing use of artificial intelligence (AI) in scientific publications, to the detriment of their quality. While specialists acknowledge the value of tools like ChatGPT for assisting with writing, particularly translation, a series of recent retractions by journals has exposed dishonest practices.
Earlier this year, an illustration showing a rat with oversized genitals led to the retraction of a study published in a journal of the academic publisher Frontiers, a heavyweight in the sector. Last month, another study was retracted after presenting an image of a human leg with… too many bones. Beyond these erroneous illustrations, the biggest upheaval in the sector seems to come from ChatGPT.
1% of production
A study published by the British scientific publishing group Elsevier went viral in March: its introduction began with "certainly, here is a possible introduction for your subject", a formula typical of ChatGPT responses. These embarrassing slips, which escaped the vigilance of expert reviewers, remain rare and would likely not have passed the quality controls of the most prestigious journals. The use of AI is often difficult to detect, but it appears to be clearly on the rise in the scientific literature.
Andrew Gray, a librarian at University College London, has combed through millions of articles looking for words such as "meticulous", "complex", or "commendable", which AI tends to overuse. By his estimate, at least 60,000 articles were produced with the help of AI in 2023, or 1% of total output, and 2024 is expected to bring a "significant increase". According to the American organization Retraction Watch, AI now enables the "industrialization" of "fake" papers by article "factories": operations that churn out large numbers of low-quality, plagiarized, or outright false articles.