Federated Multimodal Deep Learning Framework for Privacy-Preserving Predictive Analytics in U.S. Healthcare Systems
Abstract
The proliferation of electronic health records (EHRs), medical imaging, and clinical text data in U.S. healthcare systems has created unprecedented opportunities for predictive analytics. However, the Health Insurance Portability and Accountability Act (HIPAA) imposes strict privacy constraints that severely limit cross-institutional data aggregation because of the sensitivity of patient information. In this paper, we propose a Federated Multimodal Deep Learning (FedMM-DL) framework that supports privacy-preserving predictive analytics across distributed healthcare systems. The proposed framework combines structured EHR data, medical imaging (X-ray, CT, MRI), and unstructured clinical notes through an attention-based multimodal fusion process within a federated learning paradigm. We employ differential privacy and secure aggregation to ensure that no raw patient data ever leaves the local institution. Extensive experiments on four clinical prediction tasks, disease prediction, hospital readmission, mortality risk estimation, and treatment response prediction, demonstrate that FedMM-DL achieves an AUC-ROC of 0.964, outperforming centralized single-modal techniques while providing strong privacy guarantees (ε = 1.0). We show that the proposed framework performs within 1.2% of a fully centralized model without violating data locality, establishing a new paradigm for privacy-conscious healthcare AI.
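The federated training loop summarized above can be sketched in a few lines: each institution computes a local model update, clips and noises it for differential privacy, and only these privatized updates are averaged by the server. This is a minimal illustrative sketch, not the paper's implementation; the helper names (`local_update`, `clip_and_noise`, `federated_round`) and the clipping/noise parameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data):
    # Placeholder for one round of local training at an institution;
    # here we simply take a gradient step toward the local data mean.
    grad = global_weights - local_data.mean(axis=0)
    return global_weights - 0.1 * grad

def clip_and_noise(update, global_weights, clip_norm=1.0, noise_scale=0.1):
    # Clip the update delta and add Gaussian noise before it leaves the
    # institution, so raw patient data is never shared with the server.
    delta = update - global_weights
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=delta.shape)
    return global_weights + delta + noise

def federated_round(global_weights, institution_datasets):
    # The server aggregates only the privatized updates (FedAvg-style mean).
    updates = [clip_and_noise(local_update(global_weights, d), global_weights)
               for d in institution_datasets]
    return np.mean(updates, axis=0)

# Three simulated institutions, each holding private feature vectors.
datasets = [rng.normal(loc=m, size=(50, 4)) for m in (0.0, 1.0, 2.0)]
w = np.zeros(4)
for _ in range(20):
    w = federated_round(w, datasets)
```

In a real deployment the noise scale would be calibrated to the clipping norm, number of rounds, and target privacy budget (the abstract reports ε = 1.0), and secure aggregation would additionally hide each institution's individual update from the server.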