Currently submitted to: Journal of Medical Internet Research

Date Submitted: Jun 21, 2024
Open Peer Review Period: Jun 28, 2024 - Aug 23, 2024
(closed for review but you can still tweet)

NOTE: This is an unreviewed Preprint

Warning: This is an unreviewed preprint (What is a preprint?). Readers are warned that the document has not been peer-reviewed by expert/patient reviewers or an academic editor, may contain misleading claims, and is likely to undergo changes before final publication, if accepted, or may have been rejected/withdrawn (in which case a note "no longer under consideration" will appear above).

Peer-review me: Readers with interest and expertise are encouraged to sign up as peer reviewers if the paper is within an open peer-review period (in this case, a "Peer-Review Me" button to sign up as a reviewer is displayed above). All preprints currently open for review are listed here. Outside of the formal open peer-review period, we encourage you to tweet about the preprint.

Citation: Please cite this preprint only for review purposes or for grant applications and CVs (if you are the author).

Final version: If our system detects a final peer-reviewed "version of record" (VoR) published in any journal, a link to that VoR will appear below. Readers are then encouraged to cite the VoR instead of this preprint.

Settings: If you are the author, you can log in and change the preprint display settings, but the preprint URL/DOI is intended to be stable and citable, so it should not be removed once posted.

Submit: To post your own preprint, simply submit to any JMIR journal and choose the appropriate settings to expose your submitted version as a preprint.

Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.

ReproSchema: Enhancing Research Reproducibility through Standardized Survey Data Collection

  • Yibei Chen; 
  • Dorota Jarecka; 
  • Sanu Ann Abraham; 
  • Remi Gau; 
  • Evan Ng; 
  • Daniel M. Low; 
  • Isaac Bevers; 
  • Alistair Johnson; 
  • Anisha Keshavan; 
  • Arno Klein; 
  • Jon Clucas; 
  • Zaliqa Rosli; 
  • Steven M. Hodge; 
  • Janosch Linkersdörfer; 
  • Hauke Bartsch; 
  • Samir Das; 
  • Damien Fair; 
  • David Kennedy; 
  • Satrajit S. Ghosh

ABSTRACT

Background:

Inconsistencies in the collection of data from survey-type assessments (e.g., questionnaires) across the biomedical, clinical, behavioral, and social sciences pose challenges to research reproducibility. ReproSchema offers a schema-centric framework and comprehensive tools to standardize survey (i.e., assessment) design and facilitate reproducible data collection across multiple scenarios.

Objective:

This study illustrates ReproSchema’s impact on enhancing research reproducibility and reliability. We first introduce ReproSchema’s conceptual and practical foundations, then compare it against twelve platforms, assessing its contribution to resolving inconsistencies in data collection. Three use cases detail ReproSchema’s application in standardizing required mental health common data elements, tracking changes in longitudinal data collection, and creating interactive checklists for neuroimaging research.

Methods:

We describe ReproSchema’s foundation and practical implementation before selecting twelve platforms for comparison: CEDAR, formr, Kobo Toolbox, LORIS, MindLogger, OpenClinica, Pavlovia.org, PsyToolkit, Qualtrics, REDCap, SurveyCTO, and SurveyMonkey. Our comparison focuses on adapted FAIR principles (i.e., Findability, Accessibility, Interoperability, and Reusability) and survey-platform-generic functions (i.e., shared assessment, multilingual, multimedia, validation, branch logic, scoring logic, adaptability, and non-code). We then present three use cases of survey design to demonstrate ReproSchema’s versatile applications: NIMH-Minimal; the Adolescent Brain Cognitive Development (ABCD) Study and the HEALthy Brain and Child Development (HBCD) Study; and the Committee on Best Practices in Data Analysis and Sharing Checklist (eCOBIDAS).

Results:

ReproSchema standardizes survey-based data collection through a central schema and other synergistic components (e.g., a library of assessments, a toolkit for format conversion and schema validation, a user interface for data collection, and a template for multi-assessment research protocol creation). In the platform comparisons, ReproSchema is one of the few platforms that meet all criteria related to the adapted FAIR principles and six out of eight functionalities. Additionally, three use cases highlight ReproSchema’s effectiveness in streamlining data collection, enforcing version control, and facilitating data harmonization post-collection.
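To make the schema-centric design described above concrete, the sketch below shows what a single survey item might look like when expressed as a standalone JSON-LD document and given a basic structural check. This is a minimal, illustrative example under assumed conventions, not the authors' implementation: the field names, the context URL, the item identifier, and the check_item helper are hypothetical approximations of the schema described in this paper, and the actual vocabulary, validation, and format conversion are defined by the ReproSchema specification and its toolkit.

import json

# Hypothetical example of a single ReproSchema-style item; field names and the
# context URL are illustrative assumptions, not the canonical vocabulary.
item = {
    "@context": "https://raw.githubusercontent.com/ReproNim/reproschema/main/contexts/reproschema",
    "@type": "reproschema:Item",
    "@id": "mood_item_1",
    "prefLabel": {"en": "Mood rating"},
    "question": {"en": "Over the last week, how would you rate your mood?"},
    "ui": {"inputType": "radio"},
    "responseOptions": {
        "valueType": "xsd:integer",
        "choices": [
            {"name": {"en": "Poor"}, "value": 0},
            {"name": {"en": "Fair"}, "value": 1},
            {"name": {"en": "Good"}, "value": 2},
        ],
    },
}

def check_item(doc: dict) -> list[str]:
    """Return any keys missing from the minimal set this sketch treats as required."""
    required = ["@context", "@type", "@id", "question", "ui", "responseOptions"]
    return [key for key in required if key not in doc]

if __name__ == "__main__":
    missing = check_item(item)
    print("missing keys:", missing or "none")
    # Saving the item as a standalone JSON-LD file mirrors how assessments can be
    # stored as versioned, shareable documents in a library of assessments.
    with open("mood_item_1.jsonld", "w") as fh:
        json.dump(item, fh, indent=2)

In practice, schema validation and format conversion are handled by the project's toolkit rather than the ad hoc key check shown here; the point of the sketch is simply that each assessment item is a small, versionable, machine-readable document that can be shared, translated, and reused across protocols.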

Conclusions:

ReproSchema contributes to reproducible data collection through the standardized creation and use of assessments in diverse research settings while offering the general functions of other survey platforms. ReproSchema’s existing limitations and planned enhancements, including ontology mappings and semantic search capabilities, reflect its ongoing refinement as a utility for the research community.


 Citation

Please cite as:

Chen Y, Jarecka D, Abraham SA, Gau R, Ng E, Low DM, Bevers I, Johnson A, Keshavan A, Klein A, Clucas J, Rosli Z, Hodge SM, Linkersdörfer J, Bartsch H, Das S, Fair D, Kennedy D, Ghosh SS

ReproSchema: Enhancing Research Reproducibility through Standardized Survey Data Collection

JMIR Preprints. 21/06/2024:63343

DOI: 10.2196/preprints.63343

URL: https://preprints.jmir.org/preprint/63343


© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.