
Accepted for/Published in: Journal of Medical Internet Research

Date Submitted: Oct 22, 2020
Date Accepted: Apr 14, 2021

The final, peer-reviewed published version of this preprint can be found here:

Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation


Computational Ethnography in Clinics: Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions

  • Samar Helou; 
  • Victoria Abou-Khalil; 
  • Riccardo Iacobucci; 
  • Elie El Helou; 
  • Ken Kiyono

ABSTRACT

Background:

The study of doctor-patient-computer interactions is a key research area for examining doctor-patient relationships. However, studying these interactions is costly and obtrusive: researchers typically set up complex recording mechanisms or intrude into consultations, then collect and manually analyze the data.

Objective:

We aim to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of basic doctor-patient-computer interactions.

Methods:

The classifier's input consists of videos taken by doctors using their computers' internal cameras and microphones. By estimating the key points of the doctor's face and detecting the presence of voice activity, the classifier infers the type of interaction taking place. Its output is a classification of video segments into four interaction classes: doctor-patient-computer, doctor-patient, doctor-computer, and other.
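
As an illustration of how such a pipeline could be assembled, the following Python sketch classifies a video segment from two binary signals: whether the doctor faces the screen and whether speech is present. The tool choices (MediaPipe Face Mesh for facial key points, py-webrtcvad for voice activity), the nose-offset gaze heuristic, and all thresholds are assumptions made for illustration; the abstract does not specify the authors' implementation.

import cv2                 # pip install opencv-python
import mediapipe as mp     # pip install mediapipe
import webrtcvad           # pip install webrtcvad

# The four interaction classes, keyed by (screen_gaze, voice_activity).
CLASSES = {
    (True, True): "doctor-patient-computer",
    (False, True): "doctor-patient",
    (True, False): "doctor-computer",
    (False, False): "other",
}

def screen_gaze_ratio(video_path: str, yaw_threshold: float = 0.04) -> float:
    """Fraction of frames in which the doctor appears to face the screen."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    facing, total = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            # Crude yaw proxy (an assumed heuristic): horizontal offset of the
            # nose tip (landmark 1) from the midpoint of the outer eye
            # corners (landmarks 33 and 263), in normalized coordinates.
            mid_x = (lm[33].x + lm[263].x) / 2
            if abs(lm[1].x - mid_x) < yaw_threshold:
                facing += 1
    cap.release()
    return facing / max(total, 1)

def voice_activity_ratio(pcm: bytes, sample_rate: int = 16000) -> float:
    """Fraction of 30 ms audio frames containing speech (16-bit mono PCM)."""
    vad = webrtcvad.Vad(2)  # aggressiveness 0-3
    frame_bytes = int(sample_rate * 0.03) * 2  # 30 ms of 16-bit samples
    frames = [pcm[i:i + frame_bytes]
              for i in range(0, len(pcm) - frame_bytes, frame_bytes)]
    speech = sum(vad.is_speech(f, sample_rate) for f in frames)
    return speech / max(len(frames), 1)

def classify_segment(video_path: str, pcm: bytes) -> str:
    """Label one video segment with one of the four interaction classes."""
    gaze = screen_gaze_ratio(video_path) > 0.5      # threshold is an assumption
    speech = voice_activity_ratio(pcm) > 0.5        # threshold is an assumption
    return CLASSES[(gaze, speech)]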

Results:

We evaluated the classifier using 30 minutes of video provided by five doctors who simulated consultations in their clinics in both semi-inclusive and fully inclusive room layouts. The classifier achieved an overall accuracy of 0.83, a performance similar to that of a human coder. In addition, like the human coder, the classifier was more accurate in the fully inclusive layout than in the semi-inclusive layout.
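
For illustration only, the snippet below shows one way such an evaluation could be computed with scikit-learn: overall accuracy, accuracy per room layout, and, as one common measure of agreement with a human coder, Cohen's kappa. The label lists are hypothetical placeholders, not the study's data, and the kappa statistic is an assumption rather than the study's stated metric.

from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-segment labels: ground truth, classifier output, and
# the room layout in which each segment was recorded.
truth     = ["doctor-patient", "doctor-computer", "doctor-patient-computer", "other"]
predicted = ["doctor-patient", "doctor-computer", "doctor-patient",          "other"]
layouts   = ["semi-inclusive", "fully inclusive", "semi-inclusive", "fully inclusive"]

print("overall accuracy:", accuracy_score(truth, predicted))

# Accuracy per layout, mirroring the semi-inclusive vs fully inclusive comparison.
for layout in ("semi-inclusive", "fully inclusive"):
    idx = [i for i, l in enumerate(layouts) if l == layout]
    acc = accuracy_score([truth[i] for i in idx], [predicted[i] for i in idx])
    print(layout, "accuracy:", acc)

# Agreement between the classifier and a human coder's labels.
human = ["doctor-patient", "doctor-computer", "doctor-patient-computer", "other"]
print("classifier vs human coder kappa:", cohen_kappa_score(predicted, human))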

Conclusions:

The proposed classifier can be used by researchers, care providers, designers, medical educators, and others who are interested in exploring and answering questions related to doctor-patient-computer interactions during consultations.


Citation

Please cite as:

Helou S, Abou-Khalil V, Iacobucci R, El Helou E, Kiyono K

Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation

J Med Internet Res 2021;23(5):e25218

DOI: 10.2196/25218

PMID: 33970117

PMCID: 8145082
