Currently submitted to: JMIR Formative Research

Date Submitted: Mar 2, 2026
Open Peer Review Period: Mar 2, 2026 - Mar 2, 2026

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint. Readers are cautioned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, is likely to undergo changes before final publication if accepted, and may have been rejected or withdrawn (in which case a note "no longer under consideration" will appear above).


Utilizing Large-Language Model for the Automatic Extraction of Clinical Course Information in Psychiatric Disorders

  • Chien-Hung Chen; 
  • Hong-Jie Dai; 
  • Chu-Hsien Su; 
  • Shi-Heng Wang; 
  • Yi-Ling Chien; 
  • Wei-Lieh Huang; 
  • Chi-Shin Wu; 
  • Hsin-Hsi Chen

ABSTRACT

Background:

Understanding the clinical course of psychiatric disorders is vital for informed decision-making. Extracting details such as onset time, episode count, and hospitalization history from unstructured psychiatric notes remains challenging.

Objective:

This study evaluates the performance of large language models (LLMs)—LLaMA, MentaLLaMA, OpenBioLLM, and Mistral—in extracting temporal and event-based clinical information from psychiatric discharge summaries.

Methods:

We used 500 annotated discharge summaries from the NTUH-iMD, covering psychiatric diagnoses (ICD-9-CM: 290–319, ICD-10-CM: F00–F99). Key temporal and clinical course features were annotated by a psychiatrist and an NLP researcher. A two-stage extraction process was implemented: first, sentence-level models identified clinical events and temporal cues; then, a chart-level model predicted four clinical course features: onset time, episode count, number of hospitalizations, and most recent hospitalization time. Performance was evaluated using F1-scores.
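The two-stage design described above can be sketched as follows. This is a minimal illustrative skeleton only: the function names, label vocabulary, and model interfaces are assumptions for exposition, not the authors' actual implementation, which fine-tunes LLMs for both stages.

```python
# Illustrative sketch of the two-stage extraction pipeline (hypothetical
# names and interfaces; the paper's actual models are fine-tuned LLMs).

def extract_sentence_level(sentences, event_model, temporal_model):
    """Stage 1: tag each sentence with clinical events and temporal cues."""
    tagged = []
    for sent in sentences:
        tagged.append({
            "text": sent,
            # e.g. ["episode", "hospitalization", "remission"]
            "events": event_model(sent),
            # e.g. ["age", "time_expression", "duration"]
            "temporal": temporal_model(sent),
        })
    return tagged


def extract_chart_level(tagged_sentences, chart_model):
    """Stage 2: aggregate sentence-level tags into the four chart-level
    features (onset time, episode count, number of hospitalizations,
    most recent hospitalization time)."""
    return chart_model(tagged_sentences)
```

In this sketch, `event_model`, `temporal_model`, and `chart_model` stand in for the fine-tuned sentence-level and chart-level models; any callable with those shapes would slot in.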

Results:

Among 12,947 analyzed sentences, 7,177 included clinical events and 4,842 contained temporal information. Mistral achieved the best performance in event extraction (episodes: 0.968; hospitalizations: 0.933; remission/response: 0.901) and temporal information (age: 0.968; time expressions: 0.967; duration: 0.901). In chart-level extraction, F1-scores were highest for Mistral in onset time (0.714), episode count (0.624), number of hospitalizations (0.676), and last hospitalization time (0.842).
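For reference, the F1-scores reported above combine precision and recall in the standard way. A minimal sketch of the computation from raw counts (true positives, false positives, false negatives), not tied to the authors' evaluation code:

```python
def f1_score(tp, fp, fn):
    """Standard F1 from true-positive, false-positive, and
    false-negative counts: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a model that finds 9 of 10 true events while raising 1 false alarm has precision 0.9 and recall 0.9, giving F1 = 0.9.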

Conclusions:

Fine-tuned LLMs, especially Mistral, can accurately extract structured clinical course information from psychiatric notes, offering a scalable alternative to manual review. Future work should address vague temporal expressions, expand feature sets, and validate generalizability across diverse settings.

Clinical Trial: None


Citation

Please cite as:

Chen CH, Dai HJ, Su CH, Wang SH, Chien YL, Huang WL, Wu CS, Chen HH

Utilizing Large-Language Model for the Automatic Extraction of Clinical Course Information in Psychiatric Disorders

JMIR Preprints. 02/03/2026:94454

DOI: 10.2196/preprints.94454

URL: https://preprints.jmir.org/preprint/94454


© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.