MONAI Label: A framework for AI-assisted interactive labeling of 3D medical images.

Keywords: 3D medical imaging; Active learning; Deep learning; Interactive 3D image segmentation

Journal

Medical image analysis
ISSN: 1361-8423
Abbreviated title: Med Image Anal
Country: Netherlands
NLM ID: 9713490

Publication information

Date of publication:
15 May 2024
History:
received: 12 January 2022
revised: 16 April 2024
accepted: 13 May 2024
medline: 23 May 2024
pubmed: 23 May 2024
entrez: 22 May 2024
Status: ahead of print

Abstract

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, given that manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models that aim to reduce the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used off the shelf, plug-and-play, on any given dataset. Significantly reduced annotation times with the interactive model are observed on two public datasets.
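To illustrate the service-based design described in the abstract, the sketch below queries a locally running MONAI Label server over its REST API using plain Python. The server address, the endpoint paths (/info/, /activelearning/{strategy}, /infer/{model}), the strategy name "random", and the model name "segmentation" are assumptions drawn from the project's public documentation and sample apps, and may differ across versions.

```python
# Minimal sketch (assumptions flagged below): talk to a locally running
# MONAI Label server over its REST API with only the `requests` library.
import requests

# Default address used in MONAI Label's documentation (assumption).
SERVER = "http://127.0.0.1:8000"

# Ask the server which models and active learning strategies its app exposes.
info = requests.get(f"{SERVER}/info/").json()
print("Models:", list(info.get("models", {})))
print("Active learning strategies:", list(info.get("strategies", {})))

# Ask an active learning strategy for the next image to annotate.
# The strategy name "random" is a placeholder; use one reported by /info/.
sample = requests.post(f"{SERVER}/activelearning/random").json()
image_id = sample.get("id")
print("Next image to annotate:", image_id)

# Request an automatic segmentation of that image from one of the server's
# models. The model name "segmentation" is a placeholder as well.
result = requests.post(f"{SERVER}/infer/segmentation", params={"image": image_id})
result.raise_for_status()

# Save the returned label; MONAI Label typically serves labels as NIfTI
# volumes (an assumption about the response payload).
with open("label.nii.gz", "wb") as f:
    f.write(result.content)
```

In practice, frontends such as 3D Slicer and OHIF issue equivalent requests through their MONAI Label plugins, so the same server can back either interface without modification.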

Identifiers

pubmed: 38776843
pii: S1361-8415(24)00132-4
doi: 10.1016/j.media.2024.103207

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

103207

Copyright information

Copyright © 2024. Published by Elsevier B.V.

Conflict of interest statement

Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Authors

Andres Diaz-Pinto (A)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; NVIDIA, Santa Clara, CA, USA. Electronic address: andres.diaz-pinto@kcl.ac.uk.

Sachidanand Alle (S)

NVIDIA, Santa Clara, CA, USA.

Vishwesh Nath (V)

NVIDIA, Santa Clara, CA, USA.

Yucheng Tang (Y)

NVIDIA, Santa Clara, CA, USA.

Alvin Ihsani (A)

NVIDIA, Santa Clara, CA, USA.

Muhammad Asad (M)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.

Fernando Pérez-García (F)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK.

Pritesh Mehta (P)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK; Department of Medical Physics and Biomedical Engineering, University College London, London, UK.

Wenqi Li (W)

NVIDIA, Santa Clara, CA, USA.

Mona Flores (M)

NVIDIA, Santa Clara, CA, USA.

Holger R Roth (HR)

NVIDIA, Santa Clara, CA, USA.

Tom Vercauteren (T)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.

Daguang Xu (D)

NVIDIA, Santa Clara, CA, USA.

Prerna Dogra (P)

NVIDIA, Santa Clara, CA, USA.

Sebastien Ourselin (S)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.

Andrew Feng (A)

NVIDIA, Santa Clara, CA, USA.

M Jorge Cardoso (MJ)

School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.

MeSH classifications