Accepted for/Published in: JMIR Medical Informatics

Date Submitted: Jun 7, 2023
Open Peer Review Period: Jun 7, 2023 - Jun 22, 2023
Date Accepted: Oct 21, 2024

The final, peer-reviewed published version of this preprint can be found here:

Cho HN, Jun TJ, Kim YH, Kang HJ, Ahn I, Gwon H, Kim Y, Seo H, Choi H, Kim M, Han J, Kee G, Park S, Ko S

Task-Specific Transformer-Based Language Models in Health Care: Scoping Review

JMIR Med Inform 2024;12:e49724

DOI: 10.2196/49724

PMID: 39556827

PMCID: 11612605

Task-Specific Transformer-Based Language Models in Health Care: A Scoping Review

  • Ha Na Cho
  • Tae Joon Jun
  • Young-Hak Kim
  • Hee Jun Kang
  • Imjin Ahn
  • Hansle Gwon
  • Yunha Kim
  • Hyeram Seo
  • Heejung Choi
  • Minkyoung Kim
  • JiYe Han
  • Gaeun Kee
  • Seohyun Park
  • Soyoung Ko

ABSTRACT

Background:

In the field of artificial intelligence, the number of language models used to convey knowledge in the medical domain has grown rapidly. However, no comprehensive review is available to guide researchers in constructing and applying language models for medical applications.

Objective:

We aim to leverage these language models to improve health care by addressing the challenges identified in the six tasks we reviewed.

Methods:

We surveyed studies on medical transformer-based language models and categorized them into six tasks: dialogue generation, question answering, summarization, text classification, sentiment analysis, and named entity recognition.

Results:

We present potential solutions to the limitations identified in each task, providing useful insights for future research in natural language processing and the development of language models for medical applications.

Conclusions:

By proposing potential solutions, we hope to facilitate the creation of more effective and accurate language models that can be used to enhance health care delivery and improve patient outcomes.
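
To make the task framing concrete, below is a minimal sketch (ours, not drawn from the review itself) of how a task-specific transformer is typically applied, using the Hugging Face transformers pipeline for named entity recognition on a clinical note. The checkpoint name is a hypothetical placeholder; substitute any token-classification model fine-tuned on clinical or biomedical text.

    # Minimal sketch: clinical named entity recognition with a task-specific
    # transformer via the Hugging Face transformers pipeline API.
    from transformers import pipeline

    ner = pipeline(
        "token-classification",
        model="your-org/clinical-ner-model",  # hypothetical placeholder checkpoint
        aggregation_strategy="simple",  # merge subword tokens into entity spans
    )

    note = "Patient was started on 5 mg of lisinopril for hypertension."
    for entity in ner(note):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))

The same pipeline interface covers the other reviewed tasks (for example, "summarization", "text-classification", and "question-answering") by swapping the task string and the checkpoint.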


Citation

Please cite as:

Cho HN, Jun TJ, Kim YH, Kang HJ, Ahn I, Gwon H, Kim Y, Seo H, Choi H, Kim M, Han J, Kee G, Park S, Ko S

Task-Specific Transformer-Based Language Models in Health Care: Scoping Review

JMIR Med Inform 2024;12:e49724

DOI: 10.2196/49724

PMID: 39556827

PMCID: 11612605


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.