Document Type

Article

Keywords

Lung cancer, Classification, Attention Mechanisms, Adversarial attacks, Transfer learning

Abstract

Deep learning–based classification of lung cancer from CT images can achieve high accuracy but is vulnerable to adversarial attacks, in which imperceptible perturbations can lead to misdiagnoses. Incorporating attention mechanisms into transfer-learning networks may improve both classification effectiveness and adversarial robustness. In this paper, we propose a hybrid framework that combines a MobileNetV2 backbone with channel–spatial attention modules and white-box adversarial testing via the fast gradient sign method with random initialization (FFGSM) and projected gradient descent under an L₂-norm constraint (PGDL₂). The model was trained end-to-end on a stratified CT dataset of 3,451 images (normal, benign, malignant), with adversarial examples injected during training (FFGSM: ε = 4/255; PGDL₂: α = 1/255, 7 steps). Performance was evaluated using accuracy, precision, recall, F1-score, and the drop in performance under adversarial attack. On clean inputs, the attention-augmented model achieved 86% accuracy when trained with FFGSM and 95% when trained with PGDL₂, with balanced F1-scores above 0.90 across classes. Under attack, accuracy decreased to 78% (FFGSM) and 86% (PGDL₂), indicating a smaller robustness drop for PGDL₂-augmented training. The attention modules markedly improved feature discrimination, yielding up to a 9% performance gain over models without attention, and qualitative analysis showed that the model focused on clinically important regions. Incorporating hybrid channel–spatial attention into transfer-learning pipelines thus substantially improves both the accuracy and the resilience of lung cancer CT classification against strong adversarial attacks. These results can help guide the development of robust AI tools for medical image analysis.
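To make the adversarial-training ingredient concrete, the following is a minimal NumPy sketch of an FFGSM-style perturbation (random start inside the ε-ball, one signed-gradient step, then projection back to the ε-ball and the valid pixel range). It uses a toy logistic-regression "classifier" as a stand-in for the paper's MobileNetV2 network, and the function name and default step size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ffgsm_attack(x, y, w, b, eps=4/255, alpha=6/255, rng=None):
    """Illustrative FFGSM-style attack on a toy logistic-regression model.

    x : flattened image with pixel values in [0, 1]
    y : binary label in {0, 1}
    w, b : weights and bias of the stand-in classifier
    eps : L-infinity perturbation budget (4/255, as in the abstract)
    alpha : signed-gradient step size (illustrative choice)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Random start inside the eps-ball (the "fast" random-init variant of FGSM).
    x0 = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    # Forward pass: p = sigmoid(w.x + b). For binary cross-entropy loss,
    # the gradient of the loss w.r.t. the input is (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(x0 @ w + b)))
    grad_x = (p - y) * w
    # One signed-gradient ascent step, then project back to the eps-ball
    # around the clean image and to the valid pixel range.
    x_adv = x0 + alpha * np.sign(grad_x)
    x_adv = np.clip(x_adv, x - eps, x + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

In the paper's setting, perturbed images produced this way (and PGDL₂ analogues) would be mixed into the training batches so the network learns features that survive the attack.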
