AI Testing
MAIN TRACK TALK
New Challenges in Software Quality Brought by GenAI and LLMs
Software quality in the context of generative artificial intelligence (GenAI) and Large Language Models (LLMs) has become increasingly crucial due to their growing influence across industries. As AI technologies evolve, ensuring the reliability and robustness of software systems becomes more challenging, and the need to address issues such as data quality, bias detection, non-determinism in test automation, and privacy concerns has gained prominence.
In recent years, a few companies have introduced software leveraging the power of generative artificial intelligence and Large Language Models to their users. Most organizations have engaged in Proofs of Concept to explore the potential advantages offered by these innovative technologies. Thus far, our interactions have been akin to playing in a sandbox – low risk, full of innovation and experimentation. Moving forward, there will be a growing need to *really* test and “guarantee” the quality of such software. The challenges posed by testing these applications are distinct from those encountered in assessing “traditional” software.
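One concrete way to handle the non-determinism mentioned above is to stop asserting exact strings and instead assert invariant properties that any acceptable model output must satisfy. The sketch below is a minimal illustration of that idea, not the speaker's method; the function name, the word limit, and the refund-policy keywords are all hypothetical assumptions chosen for the example.

```python
import re

def check_summary(output: str) -> list[str]:
    """Property checks for a non-deterministic LLM answer.

    Instead of asserting one exact string (which breaks whenever the
    model rephrases), we assert properties every acceptable answer
    must have. The specific rules here are illustrative assumptions.
    """
    failures = []
    # Property 1: stays within a length budget (hypothetical: 50 words).
    if len(output.split()) > 50:
        failures.append("answer exceeds 50 words")
    # Property 2: mentions the required topic (hypothetical keyword).
    if not re.search(r"\brefunds?\b", output, re.IGNORECASE):
        failures.append("required keyword 'refund' missing")
    # Property 3: avoids forbidden commitment language.
    if re.search(r"\b(guarantee|promise)\b", output, re.IGNORECASE):
        failures.append("contains forbidden commitment language")
    return failures

# Two different phrasings of the same answer both pass:
a = "You can request a refund within 30 days of purchase."
b = "Refunds are available for 30 days after you buy."
assert check_summary(a) == []
assert check_summary(b) == []
```

This style of "property-based" oracle tolerates rephrasing while still catching real regressions, which is one practical answer to flaky exact-match assertions in LLM test automation.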
Federico Toledo
Driven by the challenge of making a positive impact through quality software, Federico Toledo has 20 years of experience in the IT field. He is the co-founder and Chief Quality Officer of Abstracta, a company focused on testing, innovation, and development. Federico holds a degree in Computer Engineering from UDELAR in Uruguay and a Ph.D. in Computer Science from UCLM in Spain, and is a graduate of the Stanford + LBAN SLEI program. A renowned speaker and author, he also hosts the “Quality Sense” podcast and its namesake conference, as well as the Latin American branch of the prestigious Workshop on Performance and Reliability, showcasing his commitment to software excellence.