A novel gradient inversion attack framework to investigate privacy vulnerabilities during retinal image-based federated learning.
Machine learning models trained on retinal images have shown great potential in diagnosing various diseases. However, effectively training these models, especially in resource-limited regions, is often impeded by a lack of diverse data. Federated learning (FL) offers a solution to this problem by utilizing distributed data across a network of clients to increase the volume and diversity of the training data. Nonetheless, significant privacy concerns have been raised about this approach, notably because gradient inversion attacks can expose private patient data used during FL training. It is therefore crucial to assess the vulnerability of FL models to such attacks, because privacy breaches may discourage data sharing and thereby impair the models' generalizability and clinical relevance. To tackle this issue, we introduce a novel framework for evaluating the vulnerability of federated deep learning models trained on retinal images to gradient inversion attacks. Importantly, we demonstrate how publicly available data can be used to enhance the quality of reconstructed images through an innovative image-to-image translation technique. The effectiveness of the proposed method was measured by evaluating the similarity between real fundus images and the corresponding reconstructed images for three different convolutional neural network architectures: ResNet-18, VGG-16, and DenseNet-121. Experimental results for the task of retinal age prediction demonstrate that, across all models, over 92% of the participants in the training set could be identified from their reconstructed retinal vessel structure alone. Furthermore, even with differential privacy countermeasures in place, we show that substantial information can still be extracted from the reconstructed images. This work therefore underscores the urgent need for improved defensive strategies to safeguard patient privacy during federated learning.
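For readers unfamiliar with the attack class the abstract refers to, the following is a minimal PyTorch sketch of a generic gradient inversion attack using cosine-similarity gradient matching in the style of Geiping et al. (2020). It does not reproduce the paper's framework or its image-to-image translation refinement; the ResNet-18 regression head, the simulated image and age label, and the optimizer settings are illustrative assumptions only.

    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(num_classes=1)   # single-output head, e.g. retinal age (assumption)
    model.eval()                      # fix batch-norm statistics for deterministic gradients
    criterion = torch.nn.MSELoss()

    # Gradient observed from a client update, here simulated from one
    # stand-in "private" image; a real attacker sees only the gradient.
    real_img = torch.randn(1, 3, 224, 224)
    real_age = torch.tensor([[63.0]])
    true_grads = torch.autograd.grad(
        criterion(model(real_img), real_age), model.parameters())

    # The attacker optimizes a dummy image and label so that their
    # induced gradient matches the observed one.
    dummy_img = torch.randn(1, 3, 224, 224, requires_grad=True)
    dummy_age = torch.tensor([[50.0]], requires_grad=True)
    opt = torch.optim.Adam([dummy_img, dummy_age], lr=0.1)

    for _ in range(300):
        opt.zero_grad()
        dummy_grads = torch.autograd.grad(
            criterion(model(dummy_img), dummy_age),
            model.parameters(), create_graph=True)
        # Cosine-distance gradient matching across all parameter tensors.
        loss = sum(1 - F.cosine_similarity(d.flatten(), t.flatten(), dim=0)
                   for d, t in zip(dummy_grads, true_grads))
        loss.backward()
        opt.step()
    # dummy_img now approximates the private training image.

After convergence, dummy_img is the reconstruction whose similarity to the real fundus image the paper quantifies; the abstract's contribution is refining such raw reconstructions with publicly trained image-to-image translation.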
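The abstract also reports that differential privacy only partially mitigates the attack. Below is a hedged sketch of the kind of countermeasure typically evaluated in this setting, assumed here to be DP-SGD-style clipping plus Gaussian noise applied to gradients before they leave the client; the clip norm and noise scale are illustrative values, not the paper's settings.

    import torch

    def privatize_gradients(grads, clip_norm=1.0, noise_std=0.1):
        # Clip the update's global L2 norm, then add isotropic Gaussian noise.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        return [g * scale + noise_std * torch.randn_like(g) for g in grads]

Replacing the raw true_grads in the sketch above with privatize_gradients(true_grads) degrades reconstruction quality; the paper's finding is that, at practical noise levels, substantial information nevertheless survives in the reconstructed images.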
(Copyright © 2025 The Authors. Published by Elsevier B.V. All rights reserved.)
Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.