Testing AI
HALF-DAY WORKSHOP
Testing in an Unpredictable World
As AI becomes increasingly integrated into critical applications, ensuring reliability, safety, and fairness is more challenging than ever. Unlike traditional software, AI models are dynamic and data-driven, and they often behave unpredictably in real-world scenarios. This makes testing AI systems a complex endeavour that requires different approaches from those used for traditional applications.
In this workshop, you will explore the unique challenges of testing AI applications, including handling probabilistic outputs, detecting bias, addressing model drift, and ensuring transparency in decision-making.
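To give a flavour of what "handling probabilistic outputs" means in practice, here is a minimal, library-free sketch in Java (JUnit 5). The SentimentClassifier interface and the stub implementation are hypothetical stand-ins for a real model, not workshop material: the point is that the assertion targets a confidence threshold rather than an exact value.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class SentimentClassifierTest {

    // Hypothetical interface: returns the probability that the text is positive.
    interface SentimentClassifier {
        double positiveScore(String text);
    }

    // Stub standing in for a real model; in practice this would be the AI system under test.
    private final SentimentClassifier classifier =
            text -> text.contains("great") ? 0.92 : 0.10;

    @Test
    void clearlyPositiveTextScoresAboveThreshold() {
        double score = classifier.positiveScore("This product is great!");
        // Probabilistic output: assert against a tolerance band, not an exact value.
        assertTrue(score >= 0.8, "expected high positive confidence, got " + score);
    }
}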
This workshop covers different aspects of testing AI applications, from code-level techniques and observability to post-production tests.
We’ll also cover how to use AI to write better tests and to generate synthetic data that is more realistic than random data.
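As a library-free illustration of why realistic synthetic data beats random noise, the sketch below builds domain-plausible customer records instead of arbitrary strings. The Customer record and the value pools are hypothetical; in the workshop context, an AI model could generate this kind of data at scale.

import java.util.List;
import java.util.Random;

public class SyntheticCustomers {

    // Hypothetical domain type used only for illustration.
    record Customer(String name, String email, int age) {}

    private static final List<String> NAMES = List.of("Alice", "Bob", "Carmen", "Deepak");
    private static final Random RANDOM = new Random(42); // fixed seed keeps runs reproducible

    // Build values that resemble real production data instead of arbitrary strings.
    static Customer nextCustomer() {
        String name = NAMES.get(RANDOM.nextInt(NAMES.size()));
        String email = name.toLowerCase() + "@example.com";
        int age = 18 + RANDOM.nextInt(60); // plausible adult age range
        return new Customer(name, email, age);
    }

    public static void main(String[] args) {
        // Feed a small batch of realistic records into whatever you are testing.
        for (int i = 0; i < 3; i++) {
            System.out.println(nextCustomer());
        }
    }
}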
What you’ll learn
What you’ll need
Requirements will be announced soon…
Workshop details

Alex Soto
Alex Soto is a Director of Developer Experience at Red Hat. He is passionate about the Java world and software automation, and he believes in the open-source software model. Alex has co-authored several books, including ‘Testing Java Microservices,’ ‘Quarkus Cookbook,’ ‘Kubernetes Secrets Management,’ ‘GitOps Cookbook,’ ‘RHCE Ansible Automation Study Guide,’ and ‘Applied AI for Enterprise Java Development.’ Recognized as a Java Champion since 2017, Alex is an esteemed international speaker who shares his knowledge and expertise at conferences and events worldwide. He also contributes to radio at Onda Cero and teaches at La Salle URL University.