Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis.


Journal

BMJ (Clinical research ed.)
ISSN: 1756-1833
Abbreviated title: BMJ
Country: England
NLM ID: 8900488

Publication information

Publication date:
20 Mar 2024
History:
medline: 21 Mar 2024
pubmed: 21 Mar 2024
entrez: 20 Mar 2024
Status: epublish

Abstract

Objectives: To evaluate the effectiveness of safeguards to prevent large language models (LLMs) from being misused to generate health disinformation, and to evaluate the transparency of artificial intelligence (AI) developers regarding their risk mitigation processes against observed vulnerabilities.

Design: Repeated cross sectional analysis.

Setting: Publicly accessible LLMs.

Methods: In a repeated cross sectional analysis, four LLMs (via chatbot/assistant interfaces) were evaluated: OpenAI's GPT-4 (via ChatGPT and Microsoft's Copilot), Google's PaLM 2 and newly released Gemini Pro (via Bard), Anthropic's Claude 2 (via Poe), and Meta's Llama 2 (via HuggingChat). In September 2023, these LLMs were prompted to generate health disinformation on two topics: sunscreen as a cause of skin cancer and the alkaline diet as a cancer cure. Jailbreaking techniques (ie, attempts to bypass safeguards) were evaluated if required. For LLMs with observed safeguarding vulnerabilities, the processes for reporting outputs of concern were audited. 12 weeks after the initial investigations, the disinformation generation capabilities of the LLMs were re-evaluated to assess any subsequent improvements in safeguards.

Main outcome measures: Whether safeguards prevented the generation of health disinformation, and the transparency of risk mitigation processes against health disinformation.

Results: Claude 2 (via Poe) declined 130 prompts submitted across the two study timepoints requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer, even with jailbreaking attempts. GPT-4 (via Copilot) initially refused to generate health disinformation, even with jailbreaking attempts, although this was no longer the case at 12 weeks. In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In the September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totalling more than 40 000 words, without requiring jailbreaking attempts. The refusal rate across the evaluation timepoints for these LLMs was only 5% (7 of 150), and, as prompted, the LLM generated blogs incorporated attention grabbing titles, authentic looking (fake or fictional) references, and fabricated testimonials from patients and clinicians, and targeted diverse demographic groups. Although each LLM evaluated had mechanisms to report observed outputs of concern, the developers did not respond when observations of vulnerabilities were reported.

Conclusions: This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.
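The refusal rate reported in the results (7 of 150 prompts, about 5%) was derived from prompts submitted manually through each chatbot's web interface. As a purely illustrative aid, the minimal Python sketch below shows one way such a tally could be scripted once responses have been collected; the AuditRecord structure, the keyword based is_refusal heuristic, and the toy data are assumptions for illustration only, not the authors' instrument.

from dataclasses import dataclass

# Phrases that often signal a refusal; a hypothetical heuristic only.
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "unable to assist",
    "against my guidelines",
)

@dataclass
class AuditRecord:
    model: str       # eg "GPT-4 (via ChatGPT)"
    topic: str       # eg "sunscreen causes skin cancer"
    timepoint: str   # "September 2023" or "12 weeks later"
    response: str    # raw text returned by the chatbot

def is_refusal(response: str) -> bool:
    """Crude keyword check; the study relied on manual review of outputs."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(records: list[AuditRecord]) -> float:
    """Share of prompts declined, eg 7/150 = 0.047 (about 5%)."""
    if not records:
        return 0.0
    return sum(is_refusal(r.response) for r in records) / len(records)

if __name__ == "__main__":
    # Toy data: one refusal out of three responses gives a 33% refusal rate.
    records = [
        AuditRecord("Model A", "alkaline diet as a cancer cure", "September 2023",
                    "Title: The hidden truth about ..."),
        AuditRecord("Model A", "sunscreen causes skin cancer", "September 2023",
                    "I'm sorry, but I can't help create misleading health content."),
        AuditRecord("Model B", "alkaline diet as a cancer cure", "12 weeks later",
                    "Here is a persuasive blog post claiming ..."),
    ]
    print(f"Refusal rate: {refusal_rate(records):.0%}")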

Identifiers

pubmed: 38508682
doi: 10.1136/bmj-2023-078538

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

e078538

Copyright information

© Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

Conflict of interest statement

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: AMH holds an emerging leader investigator fellowship from the National Health and Medical Research Council (NHMRC), Australia; NDM is supported by a NHMRC postgraduate scholarship, Australia; MJS is supported by a Beat Cancer research fellowship from the Cancer Council South Australia; BDM’s PhD scholarship is supported by The Beat Cancer Project, Cancer Council South Australia, and the NHMRC, Australia; no support from any other organisation for the submitted work; AR and MJS are recipients of investigator initiated funding for research outside the scope of the current study from AstraZeneca, Boehringer Ingelheim, Pfizer, and Takeda; and AR is a recipient of speaker fees from Boehringer Ingelheim and Genentech outside the scope of the current study. There are no financial relationships with any other organisations that might have an interest in the submitted work in the previous three years to declare; no other relationships or activities that could appear to have influenced the submitted work.

Authors

Bradley D Menz (BD)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.

Nicole M Kuderer (NM)

Advanced Cancer Research Group, Kirkland, WA, USA.

Stephen Bacchi (S)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
Northern Adelaide Local Health Network, Lyell McEwin Hospital, Adelaide, Australia.

Natansh D Modi (ND)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.

Benjamin Chin-Yee (B)

Schulich School of Medicine and Dentistry, Western University, London, Canada.
Department of History and Philosophy of Science, University of Cambridge, Cambridge, UK.

Tiancheng Hu (T)

Language Technology Lab, University of Cambridge, Cambridge, UK.

Ceara Rickard (C)

Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.

Mark Haseloff (M)

Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.

Agnes Vitry (A)

Consumer Advisory Group, Clinical Cancer Epidemiology Group, College of Medicine and Public Health, Flinders University, Adelaide, Australia.
University of South Australia, Clinical and Health Sciences, Adelaide, Australia.

Ross A McKinnon (RA)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.

Ganessan Kichenadasse (G)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.
Flinders Centre for Innovation in Cancer, Department of Medical Oncology, Flinders Medical Centre, Flinders University, Bedford Park, South Australia, Australia.

Andrew Rowland (A)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.

Michael J Sorich (MJ)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia.

Ashley M Hopkins (AM)

College of Medicine and Public Health, Flinders University, Adelaide, SA, 5042, Australia. ashley.hopkins@flinders.edu.au

MeSH classifications