Of editorial processes, AI models, and medical literature: the Magnetic Resonance Audiometry experiment.
Keywords
Artificial Intelligence
Bibliometrics
Magnetic Resonance Imaging
Peer-review
Journal
European radiology
ISSN: 1432-1084
Abbreviated title: Eur Radiol
Country: Germany
ID NLM: 9114774
Publication information
Publication date: 07 Mar 2024
History:
received: 24 Nov 2023
accepted: 01 Feb 2024
revised: 24 Jan 2024
medline: 07 Mar 2024
pubmed: 07 Mar 2024
entrez: 07 Mar 2024
Status:
aheadofprint
Abstract
The potential of artificial intelligence (AI) in medical research is unquestionable. Nevertheless, the scientific community has raised several concerns about the possible fraudulent use of these tools, which could generate inaccurate or, in extreme cases, erroneous messages that find their way into the literature. In this experiment, we asked a generative AI program to write a technical report on a non-existent Magnetic Resonance Imaging technique called Magnetic Resonance Audiometry, receiving in return a full, seemingly technically sound report substantiated by equations and references. We submitted this report to an international peer-reviewed indexed journal, and it passed the first round of review with only minor changes requested. With this experiment, we showed that the current peer-review system, already burdened by the overwhelming increase in the number of publications, might not be ready to also handle the explosion of these techniques. This highlights the urgent need for the entire community to address the issue of generative AI in scientific literature and, probably, to hold a more profound discussion on the entire peer-review process.
CLINICAL RELEVANCE STATEMENT: Generative AI models can create, without any human intervention, a full manuscript capable of surviving peer review. Given the explosion of these techniques, a profound discussion of the entire peer-review process by the scientific community is mandatory.
KEY POINTS:
• The scientific community has raised several concerns about the possible fraudulent use of AI in scientific literature.
• We asked a generative AI program to write a technical report on a non-existent technique, receiving in return a full, technically sound report, substantiated by equations and references, that passed peer review.
• This experiment showed that the current peer-review system might not be ready to handle the explosion of generative AI techniques, calling for a profound discussion on the entire peer-review process.
Identifiers
pubmed: 38451324
doi: 10.1007/s00330-024-10668-w
pii: 10.1007/s00330-024-10668-w
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Copyright information
© 2024. The Author(s), under exclusive licence to European Society of Radiology.
References
Biswas S (2023) ChatGPT and the future of medical writing. Radiology 307:e223312. https://doi.org/10.1148/radiol.223312
doi: 10.1148/radiol.223312
Conroy G (2023) Scientists used ChatGPT to generate an entire paper from scratch — but is it any good? Nature 619:443–444. https://doi.org/10.1038/d41586-023-02218-z
doi: 10.1038/d41586-023-02218-z
Biswas SS (2023) Role of Chat GPT in public health. Ann Biomed Eng 51:868–869. https://doi.org/10.1007/s10439-023-03172-7
doi: 10.1007/s10439-023-03172-7
Biswas SS (2023) Potential use of Chat GPT in global warming. Ann Biomed Eng 51:1126–1127. https://doi.org/10.1007/s10439-023-03171-8
doi: 10.1007/s10439-023-03171-8
Stokel-Walker C, Van Noorden R (2023) What ChatGPT and generative AI mean for science. Nature 614:214–216. https://doi.org/10.1038/d41586-023-00340-6
doi: 10.1038/d41586-023-00340-6
US Census Bureau. Census Bureau estimates show average one-way travel time to work rises to all-time high. In: Census.gov. https://www.census.gov/newsroom/press-releases/2021/one-way-travel-time-to-work-rises.html. Accessed 30 Jun 2023
Easter SS (1981) Alternative to peer review? Science 212:1337. https://doi.org/10.1126/science.212.4501.1337
doi: 10.1126/science.212.4501.1337
el-Guebaly N, Foster J, Bahji A, Hellman M (2023) The critical role of peer reviewers: challenges and future steps. Nordisk Alkohol Nark 40:14–21. https://doi.org/10.1177/14550725221092862
doi: 10.1177/14550725221092862
Hanson MA, Barreiro PG, Crosetto P, Brockington D (2023) The strain on scientific publishing. https://doi.org/10.48550/arXiv.2309.15884
Candal-Pedreira C, Ross JS, Ruano-Ravina A et al (2022) Retracted papers originating from paper mills: cross sectional study. BMJ e071517. https://doi.org/10.1136/bmj-2022-071517
Grudniewicz A, Moher D, Cobey KD et al (2019) Predatory journals: no definition, no defence. Nature 576:210–212. https://doi.org/10.1038/d41586-019-03759-y
doi: 10.1038/d41586-019-03759-y
Gargaro S, Cigola M, Gallozzi A, Catuogno R (2019) Relationships between paper mills and technological evolution of paper production. In: Zhang B, Ceccarelli M (eds) Explorations in the history and heritage of machines and mechanisms. Springer International Publishing, Cham, pp 144–159
Aczel B, Szaszi B, Holcombe AO (2021) A billion-dollar donation: estimating the cost of researchers’ time spent on peer review. Res Integr Peer Rev 6:14. https://doi.org/10.1186/s41073-021-00118-2
doi: 10.1186/s41073-021-00118-2
pmcid: 8591820
LeBlanc AG, Barnes JD, Saunders TJ et al (2023) Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling. Res Integr Peer Rev 8:3. https://doi.org/10.1186/s41073-023-00128-2
doi: 10.1186/s41073-023-00128-2
pmcid: 10122980
Haustein S, Larivière V (2015) The use of bibliometrics for assessing research: possibilities, limitations and adverse effects. In: Welpe IM, Wollersheim J, Ringelhan S, Osterloh M (eds) Incentives and performance. Springer International Publishing, Cham, pp 121–139
doi: 10.1007/978-3-319-09785-5_8
Dadkhah M, Oermann MH, Hegedüs M et al (2023) Detection of fake papers in the era of artificial intelligence. Diagnosis 10:390–397. https://doi.org/10.1515/dx-2023-0090
doi: 10.1515/dx-2023-0090
Koller D, Beam A, Manrai A et al (2024) Why we support and encourage the use of large language models in NEJM AI submissions. NEJM AI 1. https://doi.org/10.1056/AIe2300128
Science Journals: Editorial Policies. In: Science. https://www.science.org/content/page/science-journals-editorial-policies. Accessed 9 Jan 2024