Robust Testing of AI Language Model Resiliency with Novel Adversarial Prompts

Brendan Hannon, Yulia Kumar, Dejaun Gayle, J. Jenny Li, Patricia Morreale

Research output: Contribution to journalArticlepeer-review

8 Scopus citations

Abstract

In the rapidly advancing field of Artificial Intelligence (AI), this study presents a critical evaluation of the resilience and cybersecurity efficacy of leading AI models, including ChatGPT-4, Bard, Claude, and Microsoft Copilot. Central to this research are innovative adversarial prompts designed to rigorously test the content moderation capabilities of these AI systems. This study introduces new adversarial tests and the Response Quality Score (RQS), a metric specifically developed to assess the nuances of AI responses. Additionally, the research spotlights FreedomGPT, an AI tool engineered to optimize the alignment between user intent and AI interpretation. The empirical results from this investigation are pivotal for assessing AI models’ current robustness and security. They highlight the necessity for ongoing development and meticulous testing to bolster AI defenses against various adversarial challenges. Notably, this study also delves into the ethical and societal implications of employing advanced “jailbreak” techniques in AI testing. The findings are significant for understanding AI vulnerabilities and formulating strategies to enhance AI technologies’ reliability and ethical soundness, paving the way for safer and more secure AI applications.

Original languageEnglish
Article number842
JournalElectronics (Switzerland)
Volume13
Issue number5
DOIs
StatePublished - Mar 2024

Keywords

  • AI model resilience
  • adversarial testing
  • content moderation in AI
  • cybersecurity in AI systems
  • ethical AI implications

Fingerprint

Dive into the research topics of 'Robust Testing of AI Language Model Resiliency with Novel Adversarial Prompts'. Together they form a unique fingerprint.

Cite this