TY - GEN
T1 - Gauging Biases in Various Deep Learning AI Models
AU - Tellez, N.
AU - Serra, J.
AU - Kumar, Y.
AU - Li, J. J.
AU - Morreale, P.
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
AB - With the broader use of Artificial Intelligence (AI) in all areas of our lives, the accountability of such systems is one of the most important research topics. Trustworthiness of AI results requires very detailed and careful validation of the applied algorithms, as errors and biases can reside deep inside AI components, where they may affect inclusiveness, equity, and justice and irreversibly influence human lives. It is critical to detect them and to reduce their negative effect on AI users. In this paper, we introduce a new approach to bias detection. Using Deep Learning (DL) models as examples of a broader class of AI systems, we make the models capable of self-detecting their underlying defects and biases. Our system looks ‘under the hood’ of AI-model components layer by layer, treating the neurons as similarity estimators, which we claim are the main indicators of hidden defects and bias. In this paper, we report the results of applying our self-detection approach to a Transformer DL model, the Detection Transformer (DETR) object detection framework introduced by the Facebook AI Research (FAIR) team in 2020. Our approach automatically measures the weights and biases of the transformer encoder layers to identify and eventually mitigate the sources of bias. This paper focuses on the measurement and visualization of the weights and biases of the DETR model layers. The outcome of this research will be our implementation of a modern Bias Testing and Mitigation platform, which will be open to the public to validate AI applications and mitigate their biases before use.
KW - Artificial Intelligence (AI)
KW - Bias convergence
KW - Bias detection
KW - Bias mitigation
KW - Deep Learning (DL)
KW - Detection Transformer (DETR)
UR - http://www.scopus.com/inward/record.url?scp=85138235440&partnerID=8YFLogxK
DO - 10.1007/978-3-031-16075-2_11
M3 - Conference contribution
AN - SCOPUS:85138235440
SN - 9783031160745
T3 - Lecture Notes in Networks and Systems
SP - 171
EP - 186
BT - Intelligent Systems and Applications - Proceedings of the 2022 Intelligent Systems Conference (IntelliSys) Volume 3
A2 - Arai, Kohei
PB - Springer Science and Business Media Deutschland GmbH
T2 - Intelligent Systems Conference, IntelliSys 2022
Y2 - 1 September 2022 through 2 September 2022
ER -