
Accepted for/Published in: JMIR Cancer

Date Submitted: Dec 8, 2024
Date Accepted: Apr 29, 2025

The final, peer-reviewed published version of this preprint can be found here:

Assessing ChatGPT’s Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis

Richlitzki C, Mansoorian S, Käsmann L, Stoleriu MG, Kovacs J, Sienel W, Kauffmann-Guerrero D, Duell T, Schmidt-Hegemann NS, Belka C, Corradini S, Eze C

Assessing ChatGPT’s Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis

JMIR Cancer 2025;11:e69783

DOI: 10.2196/69783

PMID: 40802978

PMCID: 12349734

Assessing ChatGPT's educational potential in lung cancer radiotherapy: A readability, clinician, and patient evaluation

  • Cedric Richlitzki; 
  • Sina Mansoorian; 
  • Lukas Käsmann; 
  • Mircea Gabriel Stoleriu; 
  • Julia Kovacs; 
  • Wulf Sienel; 
  • Diego Kauffmann-Guerrero; 
  • Thomas Duell; 
  • Nina Sophie Schmidt-Hegemann; 
  • Claus Belka; 
  • Stefanie Corradini; 
  • Chukwuka Eze

ABSTRACT

Background:

Artificial intelligence models like ChatGPT have advanced significantly, with GPT-4o offering improved accuracy and contextual understanding. In healthcare, ChatGPT provides accessible explanations of complex medical concepts, aiding patient education and reducing clinician workload. It is particularly effective in simplifying medical jargon, addressing patient questions, and fostering engagement. However, limitations include misinformation risks, outdated data, and potential biases. For lung cancer, the leading cause of cancer-related deaths globally, patients require reliable, comprehensive, and accessible educational tools, particularly for complex treatments like radiotherapy. ChatGPT offers potential as a supplementary resource to meet these needs, though careful oversight is required to address its shortcomings.

Objective:

This study aims to evaluate the educational capabilities and limitations of GPT-4 for patients undergoing radiotherapy for lung cancer, focusing on clinician-led assessments of relevance, accuracy, and completeness; patient-led evaluations of educational content; and a readability analysis to assess response accessibility.

Methods:

Eight questions related to lung cancer radiotherapy were posed to GPT-4 (July 2024) via OpenAI’s web interface. Responses were assessed for readability using the Modified Flesch Reading Ease (FRE) Formula and the 4th Vienna Formula (WSTF). Six clinicians (two radiation oncologists, two medical oncologists, and two thoracic surgeons) experienced in the treatment of lung cancer rated relevance, correctness, and completeness on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Patients evaluated comprehensibility, accuracy, relevance, trustworthiness, and willingness to use ChatGPT for future medical questions during post-radiotherapy follow-up. Data were analyzed using descriptive statistics (median, mean, standard deviation) in Microsoft Excel (version 2410). Figures were created in Python (version 3.8) using Matplotlib, with data structured in Pandas DataFrames for analysis and visualization.
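The two readability measures named above can be sketched in a few lines. This is a minimal illustration, assuming the "Modified Flesch Reading Ease" is Amstad's German adaptation and the "4th Vienna Formula" is the fourth Wiener Sachtextformel; syllable and word counts are supplied by the caller rather than computed from raw text, since German syllabification is itself nontrivial.

```python
def fre_amstad(n_words: int, n_sentences: int, n_syllables: int) -> float:
    """Amstad's German Flesch Reading Ease: 180 - ASL - 58.5 * ASW.
    Higher scores mean easier text (roughly 0-30 = 'very difficult')."""
    asl = n_words / n_sentences   # average sentence length (words per sentence)
    asw = n_syllables / n_words   # average syllables per word
    return 180.0 - asl - 58.5 * asw

def wstf4(n_words: int, n_sentences: int, n_polysyllabic: int) -> float:
    """4th Wiener Sachtextformel: 0.2656 * SL + 0.2744 * MS - 1.693.
    Result approximates a German school-grade level (higher = harder)."""
    sl = n_words / n_sentences                # mean sentence length in words
    ms = 100.0 * n_polysyllabic / n_words     # % of words with >= 3 syllables
    return 0.2656 * sl + 0.2744 * ms - 1.693
```

On this scale, the reported scores (FRE ≈ 23, WSTF ≈ 14) correspond to text at the hard end of both formulas, consistent with the "very difficult" classification.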

Results:

ChatGPT's responses were classified as "very difficult" or "difficult to read" in the readability analysis using the Modified Flesch Reading Ease (FRE) and the 4th Vienna Formula (WSTF) (FRE: 23.36 ± 11.16; WSTF: 13.81 ± 2.01). Clinicians rated relevance (3.7–4.3), correctness (3.5–4.3), and completeness (3.5–4.2), with ChatGPT's response to the question "What follow-up care is required after radiotherapy for lung cancer?" scoring highest across all dimensions. Thirty consecutive patients (48–87 years; median: 66 years) who received radiotherapy for lung cancer rated clarity highly ("easy to understand": 4.4 ± 0.61), but trustworthiness and usability scored lower ("confidence in information": 4.0 ± 0.84). These results highlight ChatGPT's strengths in accessibility and relevance, with room for improvement in trustworthiness and usability.
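The descriptive statistics reported above (median, mean, standard deviation) can be reproduced with Python's standard library. The ratings below are hypothetical 5-point Likert scores for illustration only, not data from the study, which used Microsoft Excel and pandas.

```python
from statistics import mean, median, stdev

# Hypothetical 5-point Likert ratings (1 = strongly disagree, 5 = strongly agree)
ratings = [4, 5, 4, 3, 5, 4, 4, 5]

summary = {
    "median": median(ratings),
    "mean": round(mean(ratings), 2),
    "sd": round(stdev(ratings), 2),  # sample standard deviation (n - 1)
}
print(summary)  # e.g. {'median': 4.0, 'mean': 4.25, 'sd': 0.71}
```

Note that `stdev` uses the sample (n − 1) denominator; spreadsheet functions such as Excel's STDEV.S behave the same way, while STDEV.P would give the population value.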

Conclusions:

ChatGPT shows promise as a supplementary tool for patient education in radiation oncology, offering clear and relevant information. However, limitations in completeness and trustworthiness necessitate careful review and supplementation by healthcare professionals. Further advancements and standardized evaluation criteria are essential for its effective integration into clinical practice.




© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.