Document Type

Article

Keywords

Federated learning, privacy leakage, secure aggregation, differential privacy, hybrid defense, electrocardiogram, clinical text, multimodal data, healthcare AI

Abstract

Federated learning (FL) enables decentralized, privacy-preserving machine learning by training models across distributed data without sharing raw patient information. However, most FL frameworks focus on unimodal data and overlook critical challenges in multimodal healthcare settings, such as privacy risks, fairness disparities, and reduced model interpretability. We present PRIFLEX (Privacy-Resilient Integration Framework for Learning Exchange), a novel FL framework designed for secure integration of structured and unstructured medical data. PRIFLEX combines 12-lead electrocardiograms (ECGs) from the PhysioNet PTB-XL dataset with clinical notes from the Medical Information Mart for Intensive Care IV (MIMIC-IV), supporting early, late, and hybrid data fusion strategies. To safeguard model updates, it evaluates standalone and hybrid defenses based on Differential Privacy (DP) and Secure Aggregation (SA) against gradient leakage, model inversion, and membership inference attacks. Results show that early fusion improves the area under the Receiver Operating Characteristic curve (AUROC) by up to 6.2%, while the hybrid DP+SA defense reduces attack success rates by up to 84% and improves fairness with manageable system overhead. PRIFLEX also quantifies interpretability loss using SHapley Additive exPlanations (SHAP) and gradient-based methods, highlighting the trade-off between privacy and transparency. Overall, PRIFLEX sets a new benchmark for building secure, fair, and explainable federated learning systems in healthcare.