
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Mar 8, 2025
Date Accepted: Jun 17, 2025

The final, peer-reviewed published version of this preprint can be found here:

Using a Diverse Test Suite to Assess Large Language Models on Fast Health Care Interoperability Resources Knowledge: Comparative Analysis

Idrissi-Yaghir A, Arzideh K, Schäfer H, Eryilmaz B, Bahn M, Wen Y, Borys K, Hartmann E, Schmidt C, Pelka O, Haubold J, Friedrich CM, Nensa F, Hosch R


J Med Internet Res 2025;27:e73540

DOI: 10.2196/73540

PMID: 40795315

PMCID: 12360669

Assessing Large Language Models on Fast Healthcare Interoperability Resources Knowledge: A Diverse Test Suite for Comparative Analysis

  • Ahmad Idrissi-Yaghir; 
  • Kamyar Arzideh; 
  • Henning Schäfer; 
  • Bahadir Eryilmaz; 
  • Mikel Bahn; 
  • Yutong Wen; 
  • Katarzyna Borys; 
  • Eva Hartmann; 
  • Cynthia Schmidt; 
  • Obioma Pelka; 
  • Johannes Haubold; 
  • Christoph M. Friedrich; 
  • Felix Nensa; 
  • René Hosch

ABSTRACT

Background:

Recent natural language processing (NLP) breakthroughs, particularly the emergence of large language models (LLMs), have demonstrated remarkable capabilities on general knowledge benchmarks. However, there are limited data on how well these models perform on and understand the Fast Healthcare Interoperability Resources (FHIR) standard. Improving health data interoperability would greatly benefit the use of clinical data and interaction with electronic health records.

Objective:

This study aims to evaluate the capabilities of LLMs in understanding and applying the FHIR standard.

Methods:

In this work, we introduce FHIR Workbench, a suite of data sets designed to assess the knowledge and reasoning capabilities of LLMs on various FHIR-related tasks. These tasks range from multiple-choice questions on general FHIR concepts to the generation of FHIR resources from unstructured patient clinical notes. We evaluate both open-source and commercial LLMs on these tasks in a zero-shot setting.
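The zero-shot multiple-choice setting described above can be sketched as follows. Note that the sample question, answer options, and the `query_model` stub are hypothetical illustrations, not items from the actual FHIR Workbench data sets; a real run would replace the stub with an actual LLM API call.

```python
# Minimal sketch of a zero-shot multiple-choice evaluation on a
# FHIR knowledge question. The item and model stub are hypothetical.

def build_prompt(question, options):
    """Format one multiple-choice item as a zero-shot prompt."""
    lines = [question] + [f"{label}) {text}" for label, text in options]
    lines.append("Answer with the letter of the correct option only.")
    return "\n".join(lines)

def query_model(prompt):
    """Stand-in for an LLM call; always returns option A here."""
    return "A"

item = {
    "question": "Which FHIR resource type represents a person receiving care?",
    "options": [("A", "Patient"), ("B", "Practitioner"),
                ("C", "Encounter"), ("D", "Observation")],
    "answer": "A",
}

prompt = build_prompt(item["question"], item["options"])
prediction = query_model(prompt).strip()
correct = prediction == item["answer"]
print(correct)  # True for this stub
```

In a zero-shot setting, the prompt contains only the question and instructions, with no worked examples, so the model must rely entirely on knowledge acquired during training.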

Results:

Our evaluation across multiple FHIR tasks showed that commercial models, including GPT-4o and GPT-4.5-preview, delivered some of the highest F1 scores, often exceeding 0.90 on FHIR-QA, FHIR-RESTQA, and FHIR-ResourceID. Notably, GPT-4.5-preview reached 0.9067 on FHIR-QA but still trailed earlier GPT-4 versions on certain question-answering benchmarks. Open-source systems such as Qwen 2.5 Coder and Deepseek-v3 also remained highly competitive, surpassing 0.90 on multiple tasks. However, all models displayed lower performance on Note2FHIR, generally below 0.40, underscoring the difficulty of converting unstructured clinical text into FHIR-compliant resources.
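For context, the F1 scores cited above are the harmonic mean of precision and recall. The task-specific matching criteria are defined in the full paper; the example counts below are illustrative only.

```python
def f1_score(true_positives, false_positives, false_negatives):
    """F1 = harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 90 correct predictions, 10 spurious, 10 missed.
# Precision = 90/100 = 0.90, recall = 90/100 = 0.90, so F1 = 0.90.
print(f1_score(90, 10, 10))  # 0.9
```

Because F1 penalizes both spurious and missing predictions, a score near 0.90 indicates consistently accurate answers, while the sub-0.40 Note2FHIR scores reflect frequent structural errors in generated resources.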

Conclusions:

This study highlights the competitive performance of both open-source models, such as Qwen and Deepseek, and commercial models, such as GPT-4o and Gemini, on FHIR-related tasks. While open-source models are advancing rapidly, commercial models retain an advantage on certain complex tasks. The FHIR Workbench provides a valuable platform for evaluating the capabilities of these models and promoting improvements in health data interoperability.




© The authors. All rights reserved. This is a privileged document currently under peer review/community review (or an accepted/rejected manuscript). The authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft other than for review purposes.