TY - GEN
T1 - Benchmarking the Robustness of Segmentation Methods Against Adversarial Attacks in Breast Ultrasound Segmentation
AU - Ahmed, Maryam
AU - Loja, Joanna
AU - Huang, Kuan
AU - Xu, Meng
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - This study presents a comprehensive robustness analysis of five segmentation models—UResNet, DeepLabV3, TransUnet, SAM, and Efficient SAM (ESAM)—for breast ultrasound (BUS) image segmentation under adversarial attacks. Each model is first trained on a clean BUS dataset and then systematically evaluated against five widely used adversarial techniques: FGSM, BIM, PGD, PGDL2, and Jitter. Model performance is quantitatively assessed using tumor IoU, background IoU, and mean IoU on both clean and adversarial data. Experimental results show that CNN-based models such as UResNet and DeepLabV3 are more resilient to adversarial perturbations, maintaining higher accuracy than transformer-based models such as TransUnet, SAM, and the lightweight ESAM, which exhibit significant vulnerability. These findings underscore the importance of robustness evaluation in medical imaging and other high-stakes applications, where performance degradation can have serious consequences. This study highlights the need for more robust models and effective defense strategies to improve the reliability of medical image segmentation systems in clinical practice.
AB - This study presents a comprehensive robustness analysis of five segmentation models—UResNet, DeepLabV3, TransUnet, SAM, and Efficient SAM (ESAM)—for breast ultrasound (BUS) image segmentation under adversarial attacks. Each model is first trained on a clean BUS dataset and then systematically evaluated against five widely used adversarial techniques: FGSM, BIM, PGD, PGDL2, and Jitter. Model performance is quantitatively assessed using tumor IoU, background IoU, and mean IoU on both clean and adversarial data. Experimental results show that CNN-based models such as UResNet and DeepLabV3 are more resilient to adversarial perturbations, maintaining higher accuracy than transformer-based models such as TransUnet, SAM, and the lightweight ESAM, which exhibit significant vulnerability. These findings underscore the importance of robustness evaluation in medical imaging and other high-stakes applications, where performance degradation can have serious consequences. This study highlights the need for more robust models and effective defense strategies to improve the reliability of medical image segmentation systems in clinical practice.
KW - Adversarial Attacks
KW - Deep Learning
KW - Medical Image Segmentation
KW - Robustness Analysis
UR - https://www.scopus.com/pages/publications/105014331298
U2 - 10.1007/978-3-031-94962-3_17
DO - 10.1007/978-3-031-94962-3_17
M3 - Conference contribution
AN - SCOPUS:105014331298
SN - 9783031949616
T3 - Communications in Computer and Information Science
SP - 188
EP - 201
BT - Computational Science and Computational Intelligence - 11th International Conference, CSCI 2024, Proceedings
A2 - Arabnia, Hamid R.
A2 - Deligiannidis, Leonidas
A2 - Shenavarmasouleh, Farzan
A2 - Amirian, Soheyla
A2 - Ghareh Mohammadi, Farid
PB - Springer Science and Business Media Deutschland GmbH
T2 - 11th International Conference on Computational Science and Computational Intelligence, CSCI 2024
Y2 - 11 December 2024 through 13 December 2024
ER -