Document Type

Article

Keywords

Security, Homomorphic, Deep Learning, Neural Network, Prediction

Abstract

Deep learning has emerged as a powerful approach for addressing complex real-world challenges. However, the performance of deep learning models relies heavily on access to large volumes of high-quality training data, an aspect often constrained by privacy concerns. Ensuring data availability while preserving user confidentiality remains a pressing issue. In response, cryptographic techniques such as homomorphic encryption (HE), which are grounded in rigorous mathematical principles, offer promising solutions for securing data on digital platforms without compromising its usability for learning models. HE performs computations directly on encrypted data without revealing the underlying plaintext. Its main attraction is its ability to protect sensitive information in a variety of settings; moreover, it preserves data integrity and prevents the data from being altered or tampered with. In this paper, sensitive data are encrypted with homomorphic algorithms and then fed into deep learning models to evaluate the feasibility of privacy-preserving deep learning. The aim is to examine the performance and security implications of fully homomorphic encryption (using the Learning With Errors (LWE) scheme) and partially homomorphic encryption (using the Rivest-Shamir-Adleman (RSA) algorithm). Both methods were applied to three datasets for osteoporosis diagnosis. The experimental results show that LWE maintains high accuracy, reaching 88.01%, compared with unencrypted models, whereas RSA yields lower accuracy with minimal resource consumption. These findings indicate that LWE is the more secure and reliable option for privacy-preserving deep learning in medical applications.
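To illustrate the partially homomorphic property that the abstract attributes to RSA, the following minimal Python sketch shows that multiplying two RSA ciphertexts produces a valid ciphertext of the product of the plaintexts. The key parameters (p = 61, q = 53, e = 17) are toy values chosen only for readability; they are assumptions for this sketch, not parameters from the paper, and are far too small for real use.

# Minimal sketch: RSA's multiplicative homomorphism,
# i.e. Enc(m1) * Enc(m2) mod n is a valid encryption of (m1 * m2) mod n.
# Toy key parameters (hypothetical, for illustration only).
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient (3120)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them.
c_prod = (c1 * c2) % n

# Decrypting the ciphertext product recovers the plaintext product.
assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))  # 77

Because RSA supports only this multiplicative operation on ciphertexts, it is a partially homomorphic scheme; LWE-based schemes, by contrast, support both addition and multiplication on encrypted data, which is what makes fully homomorphic evaluation of neural-network computations possible.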
