Title:
A novel gradient inversion attack framework to investigate privacy vulnerabilities during retinal image-based federated learning.
Authors:
Nielsen C; Department of Radiology, University of Calgary, Calgary, AB, Canada; Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada. Electronic address: csnielse@ucalgary.ca., Wilms M; Department of Radiology, University of Michigan, Ann Arbor, United States., Forkert ND; Department of Radiology, University of Calgary, Calgary, AB, Canada; Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada; Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada.
Source:
Medical image analysis [Med Image Anal] 2026 Jan; Vol. 107 (Pt B), pp. 103807. Date of Electronic Publication: 2025 Sep 12.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Elsevier Country of Publication: Netherlands NLM ID: 9713490 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1361-8423 (Electronic) Linking ISSN: 1361-8415 NLM ISO Abbreviation: Med Image Anal Subsets: MEDLINE
Imprint Name(s):
Publication: Amsterdam : Elsevier
Original Publication: London : Oxford University Press, [1996-
Contributed Indexing:
Keywords: Federated learning; Gradient inversion attack; Machine learning; Retinal age prediction; Retinal fundus imaging
Entry Date(s):
Date Created: 20251007 Date Completed: 20251117 Latest Revision: 20251117
Update Code:
20251118
DOI:
10.1016/j.media.2025.103807
PMID:
41056812
Database:
MEDLINE

Abstract:

Machine learning models trained on retinal images have shown great potential in diagnosing various diseases. However, effectively training these models, especially in resource-limited regions, is often impeded by a lack of diverse data. Federated learning (FL) offers a solution to this problem by utilizing distributed data across a network of clients to increase the volume and diversity of the training data. Nonetheless, significant privacy concerns have been raised about this approach, notably due to gradient inversion attacks that could expose private patient data used during FL training. It is therefore crucial to assess the vulnerability of FL models to such attacks, because privacy breaches may discourage data sharing, potentially impacting the models' generalizability and clinical relevance. To tackle this issue, we introduce a novel framework to evaluate the vulnerability of federated deep learning models trained on retinal images to gradient inversion attacks. Importantly, we demonstrate how publicly available data can be used to enhance the quality of reconstructed images through an innovative image-to-image translation technique. The effectiveness of the proposed method was measured by evaluating the similarity between real fundus images and the corresponding reconstructed images for three different convolutional neural network architectures: ResNet-18, VGG-16, and DenseNet-121. Experimental results for the task of retinal age prediction demonstrate that, across all models, over 92% of the participants in the training set could be identified from their reconstructed retinal vessel structure alone. Furthermore, even with differential privacy countermeasures in place, we show that substantial information can still be extracted from the reconstructed images. Therefore, this work underscores the urgent need for improved defensive strategies to safeguard patient privacy during federated learning.
(Copyright © 2025 The Authors. Published by Elsevier B.V. All rights reserved.)
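For context, gradient inversion attacks of the kind investigated here generally follow the "deep leakage from gradients" recipe: the attacker optimizes a dummy input until the gradient it produces matches the gradient a client shared during an FL round. The PyTorch sketch below illustrates only this generic recipe, not the paper's own framework; the ResNet-18 regression model, image size, labels, learning rate, and iteration count are all illustrative assumptions.

# Minimal "deep leakage from gradients"-style sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(num_classes=1).to(device).eval()  # regression head, e.g. retinal age
criterion = nn.MSELoss()

# Client side: the gradient that would be shared in one FL round
# (random tensors stand in for a real fundus image and its age label).
x_true = torch.rand(1, 3, 224, 224, device=device)
y_true = torch.tensor([[55.0]], device=device)
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters())]

# Attacker side: optimize a dummy image (and label) until its gradient
# matches the observed one; the dummy image drifts toward the original.
x_dummy = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
y_dummy = torch.tensor([[50.0]], device=device, requires_grad=True)
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_dummy), model.parameters(), create_graph=True)
    # L2 gradient-matching loss; cosine distance is a common alternative.
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    opt.step()

Likewise, a common differential privacy countermeasure of the kind the abstract evaluates is to clip the norm of each client gradient and add Gaussian noise before it leaves the client; the abstract's finding is that reconstructions can remain informative even then. The clip_norm and noise_std values below are illustrative, not parameters from the paper.

# Sketch of a differential privacy countermeasure: clip the joint L2 norm
# of the client gradient and add Gaussian noise before sharing it.
def privatize(grads, clip_norm=1.0, noise_std=0.01):  # illustrative values
    total_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
    return [g * scale + noise_std * torch.randn_like(g) for g in grads]

shared_grads = privatize(true_grads)  # what the client would transmit instead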

Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.