Gauging Biases in Various Deep Learning AI Models

N. Tellez, J. Serra, Y. Kumar, J. J. Li, P. Morreale

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Scopus citations

Abstract

With the broader usage of Artificial Intelligence (AI) in all areas of our lives, the accountability of such systems is one of the most important research topics. Trustworthiness of AI results requires very detailed and careful validation of the applied algorithms, as errors and biases can reside deep inside AI components, where they might affect inclusiveness, equity, and justice, and irreversibly influence human lives. It is critical to detect them and to reduce their negative effect on AI users. In this paper, we introduce a new approach to bias detection. Using Deep Learning (DL) models as examples of the broader scope of AI systems, we make the models detect their own underlying defects and biases. Our system looks ‘under the hood’ of AI-model components layer by layer, treating the neurons as similarity estimators, which we claim are the main indicator of hidden defects and bias. In this paper, we report on the result of applying our self-detection approach to a Transformer DL model and its Detection Transformer (DETR) object detection framework, introduced by the Facebook AI Research (FAIR) team in 2020. Our approach automatically measures the weights and biases of the transformer encoding layers to identify and eventually mitigate the sources of bias. This paper focuses on the measurement and visualization of the weights and biases of the DETR model layers. The outcome of this research will be our implementation of a modern Bias Testing and Mitigation platform, open to the public for validating AI applications and mitigating their biases before use.
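
To make the measurement step concrete, the sketch below (a minimal illustration, not the authors' tool) loads the publicly released DETR checkpoint via torch.hub and prints summary statistics for the weight and bias tensors of each transformer encoder layer, the quantities the abstract proposes to measure and visualize. The module path model.transformer.encoder.layers is an assumption based on the layout of the facebookresearch/detr repository.

```python
# Minimal sketch, not the authors' implementation: assumes the public
# facebookresearch/detr torch.hub entry point and that repository's module
# layout (model.transformer.encoder.layers).
import torch

# Load the pretrained DETR model with a ResNet-50 backbone.
model = torch.hub.load("facebookresearch/detr", "detr_resnet50", pretrained=True)
model.eval()

# Walk the transformer encoder layers and summarize every weight/bias tensor.
for idx, layer in enumerate(model.transformer.encoder.layers):
    for name, param in layer.named_parameters():
        tensor = param.detach()
        kind = "bias" if name.endswith("bias") else "weight"
        print(
            f"encoder layer {idx:2d} | {name:<28s} | {kind:<6s} | "
            f"mean={tensor.mean().item():+.4f} "
            f"std={tensor.std().item():.4f} "
            f"min={tensor.min().item():+.4f} "
            f"max={tensor.max().item():+.4f}"
        )
```

Per-layer histograms of the same tensors (e.g. with matplotlib) would give the kind of layer-by-layer visualization the abstract describes.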

Original language: English
Title of host publication: Intelligent Systems and Applications - Proceedings of the 2022 Intelligent Systems Conference (IntelliSys) Volume 3
Editors: Kohei Arai
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 171-186
Number of pages: 16
ISBN (Print): 9783031160745
DOIs
State: Published - 2023
Event: Intelligent Systems Conference, IntelliSys 2022 - Virtual, Online
Duration: 1 Sep 2022 – 2 Sep 2022

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 544 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: Intelligent Systems Conference, IntelliSys 2022
City: Virtual, Online
Period: 1/09/22 – 2/09/22

Keywords

  • Artificial Intelligence (AI)
  • Bias convergence
  • Bias detection
  • Bias mitigation
  • Deep Learning (DL)
  • Detection Transformer (DETR)
