
Currently submitted to: JMIR Medical Informatics

Date Submitted: Apr 16, 2026
Open Peer Review Period: Apr 27, 2026 - Apr 27, 2026

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint. Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (a note "no longer under consideration" will appear above).

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

A Recursive Learning Architecture for Zero-Shot Automated Clinical Coding: A Methodological Study

  • Natalia Castaño Villegas; 
  • Raul Escandon; 
  • Katherine Monsalve; 
  • Jose Zea; 
  • Laura Velasquez

ABSTRACT

Background:

Automated clinical coding with large language models has shown promise, but most approaches depend on supervised fine-tuning, static label spaces, or opaque prediction mechanisms that are difficult to audit and update. These limitations are particularly relevant in ICD-10-CM coding, where models must navigate complex documentation patterns, ambiguity, and evolving coding rules. Recursive learning architectures may offer an alternative by enabling systems to improve through explicit natural-language memory rather than parameter updates.

Objective:

This study evaluated whether a recursive learning architecture with an externalized Learning File could improve zero-shot ICD-10-CM coding performance on discharge summaries, while preserving interpretability and enabling analysis of longitudinal learning dynamics.

Methods:

We developed PANDORA, a zero-shot coding system composed of a Coder, a Reviewer, and a persistent natural-language Learning File derived from prior coding errors. Using discharge summaries from MIMIC-IV and a Top-50 ICD-10-CM benchmark, we compared a no-memory baseline (Phase 1) against a memory-augmented configuration (Phase 4). Performance was assessed across 20 recursive training iterations and on a held-out testing set of 500 cases, using micro-F1, macro-F1, precision, and recall at both exact-code and ICD-3 levels. Error composition, representative memory-guided decisions, and temporal degradation associated with memory growth were also analyzed.
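The Coder/Reviewer loop with a persistent natural-language Learning File can be sketched as follows. This is a minimal illustration only: the class and function names, the placeholder coding logic, and the rule templates are all hypothetical, standing in for LLM calls the paper does not specify.

```python
"""Sketch of a recursive Coder/Reviewer loop with a natural-language
Learning File, in the spirit of the PANDORA design. All names and the
placeholder logic below are illustrative assumptions, not the authors'
implementation."""

from dataclasses import dataclass, field


@dataclass
class LearningFile:
    """Persistent natural-language memory of learned coding rules."""
    rules: list[str] = field(default_factory=list)

    def as_prompt(self) -> str:
        # Rules would be injected into the Coder's prompt as plain text.
        return "\n".join(f"- {r}" for r in self.rules)


def code_note(note: str, memory: LearningFile) -> set[str]:
    """Stand-in for the LLM Coder: in the real system this would prompt
    a model with the note plus the accumulated rules and parse the
    predicted ICD-10-CM codes. Placeholder keyword logic only."""
    return {"I10"} if "hypertension" in note.lower() else set()


def review(predicted: set[str], gold: set[str], memory: LearningFile) -> None:
    """Stand-in for the Reviewer: converts coding errors into new
    natural-language rules appended to the Learning File."""
    for fp in sorted(predicted - gold):  # false positives
        memory.rules.append(f"Do not assign {fp} without explicit documentation.")
    for fn in sorted(gold - predicted):  # false negatives
        memory.rules.append(f"Consider {fn} when supporting context is present.")


def train(cases: list[tuple[str, set[str]]], iterations: int = 20) -> LearningFile:
    """Recursive training: repeatedly code the cases and fold the
    resulting errors back into the shared memory."""
    memory = LearningFile()
    for _ in range(iterations):
        for note, gold in cases:
            predicted = code_note(note, memory)
            review(predicted, gold, memory)
    return memory
```

Note that memory grows monotonically in this sketch; as the Results section describes, such unchecked growth is itself a failure mode.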

Results:

In the held-out testing set, the memory-augmented system improved exact-code micro-F1 from 0.307 to 0.527 and precision from 0.203 to 0.515, while recall decreased from 0.630 to 0.540. At the ICD-3 level, micro-F1 improved from 0.372 to 0.560. Across training iterations, the memory-augmented condition achieved an exact-code micro-F1 of 0.605 versus 0.318 in the no-memory baseline. Gains were driven primarily by large reductions in false positives, indicating that the Learning File improved precision more than recall. A qualitative review showed that the system used accumulated rules to suppress unsupported codes and to recover context-sensitive diagnoses. However, performance declined after iteration 10 as the Learning File grew larger and less discriminative, suggesting that memory bloat is an important failure mode of recursive learning.
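The reported metrics at the two granularities could be computed as below. The micro-averaging is standard; the `to_icd3` truncation to the three-character category is our reading of the "ICD-3 level" and is an assumption about the exact mapping used.

```python
def micro_prf(preds: list[set[str]], golds: list[set[str]]) -> tuple[float, float, float]:
    """Micro-averaged precision, recall, and F1: pool true/false
    positives and false negatives across all cases before averaging."""
    tp = sum(len(p & g) for p, g in zip(preds, golds))
    fp = sum(len(p - g) for p, g in zip(preds, golds))
    fn = sum(len(g - p) for p, g in zip(preds, golds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def to_icd3(codes: set[str]) -> set[str]:
    """Collapse full ICD-10-CM codes to their three-character category
    (e.g. 'E11.9' -> 'E11') -- an assumed reading of the ICD-3 level."""
    return {c.replace(".", "")[:3] for c in codes}
```

Because false positives enter only the precision denominator, suppressing unsupported codes (as the Learning File does here) raises precision and micro-F1 even when recall dips, matching the reported pattern.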

Conclusions:

A recursive learning architecture with explicit natural-language memory substantially improved zero-shot ICD-10-CM coding performance, primarily through better precision and more controlled code assignment. The approach offers transparency benefits because improvements can be traced to human-readable learned rules rather than hidden parameter changes. However, recursive systems require active memory governance, as unchecked rule accumulation may degrade performance over time. These findings support memory-based adaptation as a promising direction for interpretable clinical coding systems and other high-stakes clinical NLP tasks.
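One simple form the "active memory governance" called for above could take is deduplicating the Learning File and capping it at its most recent entries. This policy is our own illustrative sketch, not a mechanism described in the paper.

```python
def prune_memory(rules: list[str], max_rules: int = 200) -> list[str]:
    """Naive memory-governance sketch: drop duplicate rules while
    preserving first-seen order, then keep only the most recent
    `max_rules` entries to limit prompt growth."""
    seen: set[str] = set()
    deduped: list[str] = []
    for rule in rules:
        if rule not in seen:
            seen.add(rule)
            deduped.append(rule)
    return deduped[-max_rules:]
```

More discriminative policies (e.g. scoring rules by how often they prevented errors) would likely be needed to counter the post-iteration-10 degradation the Results describe.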


Citation

Please cite as:

Castaño Villegas N, Escandon R, Monsalve K, Zea J, Velasquez L

A Recursive Learning Architecture for Zero-Shot Automated Clinical Coding: A Methodological Study

JMIR Preprints. 16/04/2026:98279

DOI: 10.2196/preprints.98279

URL: https://preprints.jmir.org/preprint/98279


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC-BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.