A Testing Framework for AI Linguistic Systems (testFAILS)

Yulia Kumar, Patricia Morreale, Peter Sorial, Justin Delgado, J. Jenny Li, Patrick Martins

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

This paper presents an innovative testing framework, testFAILS, designed for the rigorous evaluation of AI Linguistic Systems (AILS), with particular emphasis on the various iterations of ChatGPT. Leveraging orthogonal array coverage, the framework provides a robust mechanism for assessing AI systems and addresses the critical question, “How should AI be evaluated?” While the Turing test has traditionally been the benchmark for AI evaluation, it is argued that current, publicly available chatbots, despite their rapid advancements, have yet to meet this standard. However, the pace of progress suggests that achieving Turing-test-level performance may be imminent. In the interim, the need for effective AI evaluation and testing methodologies remains paramount. Ongoing research has already validated several versions of ChatGPT, and comprehensive testing of the latest models, including ChatGPT-4, Bard, Bing Bot, LLaMA, and PaLM 2, is currently being conducted. The testFAILS framework is designed to be adaptable, ready to evaluate new chatbot versions as they are released. Additionally, available chatbot APIs have been tested and applications have been developed; one of them, AIDoctor, is presented in this paper and utilizes the ChatGPT-4 model together with Microsoft Azure AI technologies.
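
The abstract does not give implementation details for the orthogonal-array coverage it mentions. As a rough illustration only, the Python sketch below shows how a standard L9(3^4) orthogonal array could select a small, pairwise-balanced set of chatbot test configurations; the factor names and levels (model, task, language, prompt_style) are hypothetical placeholders, not the factors used in testFAILS.

    from itertools import product

    # Minimal sketch: using a standard L9(3^4) orthogonal array to pick a small,
    # balanced set of chatbot test configurations. The factors and levels below
    # are hypothetical placeholders, not the ones used in the paper.

    # Standard L9 orthogonal array: 9 rows, 4 factors, 3 levels (0, 1, 2).
    # Every pair of columns contains each ordered pair of levels exactly once.
    L9 = [
        (0, 0, 0, 0),
        (0, 1, 1, 1),
        (0, 2, 2, 2),
        (1, 0, 1, 2),
        (1, 1, 2, 0),
        (1, 2, 0, 1),
        (2, 0, 2, 1),
        (2, 1, 0, 2),
        (2, 2, 1, 0),
    ]

    # Hypothetical test factors, each with three levels.
    FACTORS = {
        "model": ["chatgpt-4", "bard", "llama"],
        "task": ["summarization", "question-answering", "code-generation"],
        "language": ["English", "Spanish", "Ukrainian"],
        "prompt_style": ["plain", "persona", "chain-of-thought"],
    }

    def orthogonal_test_configs():
        """Map each L9 row to a concrete test configuration (factor -> level)."""
        names = list(FACTORS)
        return [
            {name: FACTORS[name][level] for name, level in zip(names, row)}
            for row in L9
        ]

    if __name__ == "__main__":
        configs = orthogonal_test_configs()
        full = len(list(product(*FACTORS.values())))  # 3^4 = 81 exhaustive combinations
        print(f"{len(configs)} orthogonal-array configurations instead of {full} exhaustive ones:")
        for cfg in configs:
            print(cfg)

With four three-level factors, the array exercises every pair of factor levels in 9 runs rather than the 81 exhaustive combinations, which is the kind of coverage reduction orthogonal-array testing provides.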

Original language: English
Article number: 3095
Journal: Electronics (Switzerland)
Volume: 12
Issue number: 14
DOIs
State: Published - Jul 2023

Keywords

  • AIDoctor
  • a testing framework for AI linguistic systems (testFAILS)
  • bots
  • chatbot validation
  • chatbots

