Automatic detection of cardiac conditions from photos of electrocardiograms captured by smartphones

A novel approach was devised that uses object detection and image segmentation models to automatically extract ECG waveforms from photos taken by clinicians. Modular machine learning models were developed to sequentially perform waveform identification, gridline removal, and scale calibration. The extracted data were then analysed by a machine learning-based cardiac rhythm classifier.
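
No source code accompanies this abstract, so the sketch below is purely illustrative: a minimal Python/OpenCV rendering of the gridline-removal and signal-extraction stages, assuming the object detection step has already cropped a single lead and that scale calibration has recovered the pixel-per-millimetre scale from the grid spacing. The colour-threshold grid removal stands in for the paper's learned segmentation model; all function names and threshold values are hypothetical. Only the 25 mm/s and 10 mm/mV constants are standard ECG paper calibration.

```python
import cv2
import numpy as np

# Standard ECG paper calibration: 25 mm/s, 10 mm/mV
# (1 mm small square = 0.04 s horizontally, 0.1 mV vertically).
MM_PER_SECOND = 25.0
MM_PER_MILLIVOLT = 10.0

def remove_gridlines(lead_bgr):
    """Suppress the red/pink grid, keeping the dark trace.

    Hypothetical stand-in for the learned segmentation model:
    a plain HSV colour threshold over both red hue bands."""
    hsv = cv2.cvtColor(lead_bgr, cv2.COLOR_BGR2HSV)
    red_lo = cv2.inRange(hsv, (0, 40, 40), (10, 255, 255))
    red_hi = cv2.inRange(hsv, (170, 40, 40), (180, 255, 255))
    grid = cv2.bitwise_or(red_lo, red_hi)
    cleaned = lead_bgr.copy()
    cleaned[grid > 0] = (255, 255, 255)   # paint gridline pixels white
    return cleaned

def extract_signal(cleaned_bgr, px_per_mm, baseline_row=None):
    """Digitise the trace into a voltage-time series.

    Takes the intensity centroid of dark pixels in each column;
    baseline_row defaults to the image midline (an assumption)."""
    gray = cv2.cvtColor(cleaned_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    h, w = binary.shape
    baseline = h / 2 if baseline_row is None else baseline_row
    rows = np.arange(h, dtype=float)
    times, volts = [], []
    for x in range(w):
        col = binary[:, x].astype(float)
        if col.sum() == 0:
            continue                      # no trace pixels in this column
        y = (rows * col).sum() / col.sum()
        times.append(x / (px_per_mm * MM_PER_SECOND))
        volts.append((baseline - y) / (px_per_mm * MM_PER_MILLIVOLT))
    return np.asarray(times), np.asarray(volts)
```

In the published pipeline, lead cropping would come from the object detection model and the grid mask from the segmentation model; the column-centroid digitisation above is a common baseline for trace extraction rather than the authors' method.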

Waveforms from 40 516 scanned and 444 photographed ECGs were automatically extracted. Among these, 12 828 of 13 258 (96.8%) scanned and 5399 of 5743 (94.0%) photographed waveforms were correctly cropped and labelled. Voltage-time signals were successfully extracted from 11 604 of 12 735 (91.1%) scanned and 5062 of 5752 (88.0%) photographed waveforms after automatic gridline and background noise removal. In a proof-of-concept demonstration, an atrial fibrillation diagnostic algorithm using photos of ECGs as input achieved 91.3% sensitivity, 94.2% specificity, 95.6% positive predictive value, 88.6% negative predictive value and a 93.4% F1 score.
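
The reported F1 score is consistent with the quoted sensitivity (recall) and positive predictive value (precision), since F1 is their harmonic mean; the short check below uses illustrative variable names only:

```python
sensitivity = 0.913  # recall, as reported
ppv = 0.956          # precision (positive predictive value), as reported

# F1 is the harmonic mean of precision and recall.
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
print(f"F1 = {f1:.1%}")  # F1 = 93.4%, matching the reported value
```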


Object detection and image segmentation models allow automatic extraction of ECG signals from photos for downstream diagnostics. This novel pipeline circumvents the need for costly ECG hardware upgrades, thereby paving the way for large-scale implementation of machine learning-based diagnostic algorithms.


https://doi.org/10.1136/heartjnl-2023-323822