Testing AI
HALF-DAY WORKSHOP
The Art of Breaking Smart Machines
AI systems can impress and misfire. This workshop is for testers and engineers who want to go beyond functional checks and uncover how AI really breaks. You will learn practical adversarial testing techniques like prompt injection, jailbreaks, RAG leakage, and agent misdirection, and use tools such as Promptfoo and Pyrit to simulate attacks and measure defenses. By the end, you will know how to expose vulnerabilities before customers do, build stronger guardrails, and turn AI failures into lasting improvements. If you want to build trustworthy AI, this is where it starts by learning how to break it first.
Every breakthrough in AI brings new abilities but also new instabilities. Chatbots that charm can still go off-script; knowledge engines can overshare; helpful agents can be misled. Adversarial testing exposes these failures before users do. In this hands-on workshop, we’ll explore how to break smart systems through prompt injection, jailbreaks, RAG data leaks, and agent misuse scenarios. You’ll learn to turn AI’s mistakes into insights, strengthen its guardrails, and build trust through resilience. We’ll also dive into practical tools like Promptfoo and Pyrit to simulate real-world attacks and measure defenses. To trust AI, first, try breaking it.
What you’ll learn
What you’ll need
Workshop details

Adonis Celestine
Adonis Celestine is the Senior Director of Automation Testing with Applause. With nearly two decades of software testing experience, Adonis is an automation expert, QA thought leader, published author and regular speaker at leading software testing conferences. Quality engineering is a passion for Adonis. He believes that true quality is a collective perspective of customer experiences, and that the natural evolution of QA is shaped by data and machine intelligence.