Speaker "Synho Do" Details


Topic

Explainable AI for healthcare problems

Abstract

With the rapid progress of machine learning, deep-learning algorithms have the potential to change the landscape of medicine. Specifically, advances in image recognition could increase diagnostic accuracy and speed and enhance physician workflow. However, there are still obstacles hindering the translation of deep-learning systems into clinical environments. These include the need for access to large datasets with which to "train" machine-learning models, which can be costly and time-consuming to accumulate. An additional obstacle is users' inability to understand an algorithm's decision-making process. For example, even if an algorithm correctly identifies a certain diagnosis, how can we understand its justification?
 
In our research, we addressed both challenges by using a small dataset to construct an explainable, deep-learning algorithm for the image detection of acute intracranial hemorrhage (ICH). Although we used a small, imbalanced dataset of fewer than 1,000 images, we emphasized data quality. Rather than having general radiologists simply label the presence or absence of ICH in each image, we recruited five specialty neuroradiologists not only to label the presence of ICH but also to label its specific subtype from five options. Furthermore, we adjusted the system's image processing to mimic radiologists' own workflow. We found that even with a small dataset, enhancing the quality of the data and aligning the algorithm's processing with clinical workflow enabled system performance similar to that of expert radiologists. Beyond optimizing performance, we made the algorithm explainable, having it create an atlas from the training set that, in turn, illuminates its decision-making. The "explainability" of an algorithm is essential not only for understanding the system's predictions, but also for continued improvement and optimization.
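The abstract says only that the system's image processing was adjusted to mimic radiologists' workflow; one common way to do this for head CT is to present each slice under several display window settings, the way a radiologist reviews the same slice under brain, subdural, and bone windows. The sketch below illustrates that general idea; the specific window values, function names, and the use of stacked window channels are assumptions for illustration, not the published pipeline.

```python
import numpy as np

# Hypothetical CT window settings (center, width in Hounsfield units);
# the abstract does not specify which windows, if any, were used.
WINDOWS = {
    "brain": (40, 80),
    "subdural": (80, 200),
    "bone": (600, 2800),
}

def apply_window(hu_image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip a CT image (in Hounsfield units) to a display window and rescale to [0, 1]."""
    low, high = center - width / 2, center + width / 2
    return (np.clip(hu_image, low, high) - low) / (high - low)

def windowed_channels(hu_image: np.ndarray) -> np.ndarray:
    """Stack several windowed views of one slice as channels, mirroring how a
    radiologist inspects the same slice under different display windows."""
    return np.stack(
        [apply_window(hu_image, c, w) for c, w in WINDOWS.values()], axis=-1
    )

# Example: a single 512x512 CT slice in Hounsfield units
slice_hu = np.random.uniform(-1000, 2000, size=(512, 512))
channels = windowed_channels(slice_hu)  # shape (512, 512, 3), ready for a CNN input
```

Feeding such window-derived channels to a classifier is only one plausible reading of "mimicking radiologists' workflow"; the talk itself describes the approach at a higher level.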
 
By providing a reliable, accurate second opinion in diagnosing brain hemorrhages, the implementation of this system has the potential to enhance patient care, empower patients, and cut costs. The benefits of deep-learning systems extend beyond neuroradiology, and by constructing an explainable deep-learning algorithm from a small dataset, our research helps address challenges that have traditionally hindered their implementation.

Profile

Dr. Synho Do is Director of the Laboratory of Medical Imaging and Computation (LMIC). He has an MS in Electrical Engineering (cryptosystem analysis) and a Ph.D. in Biomedical Engineering (nonlinear biological system analysis). As an NIH T32 fellow, Dr. Do received clinical training in the Cardiac MR PET CT program. He then built his team of scientists, clinicians, and mentors as an instructor at Massachusetts General Hospital, Harvard Medical School. He is currently an Assistant Professor of Radiology at Harvard Medical School and Assistant Medical Director for Advanced Health Technology Engineering, Research, and Development within the Massachusetts General Physicians Organization (MGPO). His research interests include machine learning on healthcare data, high-performance computing, nonlinear system identification, complex system modeling, and clinical workflow understanding.