UAT Testing + AI
MAIN TRACK TALK
Using AI to create UAT tests: a case study from BBC Radio
AI seems to be the technological zeitgeist, but just how effective is it at generating test cases? I wanted to gather some actual data on this, but running different tests concurrently on the same “fresh” software was problematic. Then an upgrade of the BBC Radio playout stack came along that involved different radio stations running the same tests, presenting an opportunity to see how effective AI could be at writing UAT tests and to compare what results different techniques achieved.
In this talk Bill Watson details how a study on the effectiveness of AI-generated UAT tests was set up. The talk covers the project selected, the methodology followed, the KPIs chosen for comparison and the rationale behind them. It then examines the results and outlines best practices for creating UAT tests with (or without?) AI assistance. This is an ongoing study, so at submission time we cannot predict the results; whatever they turn out to be, the journey to getting them is the key, laying out an approach to evaluating how AI can work in testing.
Bill Watson
Bill’s first PC was a Dragon 32 computer back in the depths of time… technically good but not popular. This led to trying to adapt programs from other languages and lengthy debugging sessions, an ideal start to a life of test! Since that beginning Bill has gained three decades of experience in the IT industry. He’s worked in both contract and permanent roles in Europe and Africa, ranging in scale from multinationals to setting up test functions at small companies. He’s currently a Test Manager at the BBC, responsible for testing outsourced Enterprise IT projects.