Simply enter your keyword and we will help you find what you need.

What are you looking for?

All Sessions

Workshop Day · 26th May

adonis_celestine

The Art of Breaking Smart Machines

Adonis Celestine

09:00h - 13:00h · May 26th · Track 1

Every breakthrough in AI brings new abilities but also new instabilities. Chatbots that charm can still go off-script; knowledge engines can overshare; helpful agents can be misled. Adversarial testing exposes these failures before users do. In this hands-on tutorial, we’ll explore how to break smart systems through prompt injection, jailbreaks, RAG data leaks, and agent misuse scenarios. You’ll learn to turn AI’s mistakes into insights, strengthen its guardrails, and build trust through resilience. We’ll also dive into practical tools like Promptfoo and Pyrit to simulate real-world attacks and measure defenses. To trust AI, first, try breaking it.

alex_soto

Testing in the non-predictable World

Alex Soto

14:00h - 18:00h · May 26th · Track 1

As AI applications become increasingly integrated into critical applications, ensuring their reliability, safety, and fairness is more challenging than ever. Unlike traditional software, AI models are dynamic and data-driven, often behaving unpredictably in real-world scenarios. This makes testing AI systems a complex endeavour that requires different approaches from those used in traditional applications.

rob_sabourin

Automating Test Design with a Little Help from Generative AI

Rob Sabourin

09:00h - 13:00h · May 26th · Track 2

In this hands-on workshop, Rob Sabourin shares over forty years of experience in automated test design—now enhanced with Generative AI. Drawing from real-world successes, failures, and surprises, he demonstrates how AI can assist testers in designing effective, creative, and efficient tests. Participants will explore proven design techniques such as pairwise analysis, decision tables, and path analysis through interactive exercises using ChatGPT, Python, PICT, 7 Excel. Attendees receive practical examples and case studies while learning how to harness AI’s strengths responsibly. The key insight: Generative AI amplifies great testing—but never replaces critical thinking or design fundementals.

razvan_vancea

Mastering Playwright: Step-by-Step Guide to Building a Scalable Testing Framework with AI and Codegen

Razvan Vancea

14:00h - 18:00h · May 26th · Track 2

In this hands-on workshop, Razvan will guide participants through building a scalable Playwright test automation framework for both Web and API testing. Attendees will learn best practices, leverage AI tools like GitHub Copilot and Playwright MCP server for faster test creation, and set up CI/CD pipelines with parallel execution in GitHub Actions. Through practical exercises, they will gain real-world experience in writing, debugging, and optimising tests. By the end of the session, participants will walk away with a fully functional, boilerplate framework and the knowledge to implement it in their own projects.

daniela_maissi

Fundamentals of Web Pentesting for Quality Analysts

Daniela Maissi

09:00h - 13:00h · May 26th · Track 3

This workshop tells the story of how web penetration testing evolved from basic scans into a core security practice. As applications grew more complex, attackers adopted advanced tooling and evasion techniques, pushing defenders to go beyond traditional QA. Through hands-on use of Nmap, sqlmap, and Burp Suite, participants will explore firewall evasion, red teaming concepts, and structured pentesting methodologies aligned with OWASP. The session shows why QA teams must adopt an adversarial mindset, learning not only how vulnerabilities are exploited, but how to report and remediate them effectively before they reach production.

leandro_melendez

Ramping Up to the Modern Performance

Leandro Melendez

14:00h - 18:00h · May 26th · Track 3

Modern software moves fast, with services constantly deployed and scaled, making traditional performance testing approaches obsolete. In this workshop, Leandro Melendez introduces modern, agile, and continuous performance testing. You’ll learn core performance assurance principles-from fundamentals like scripting to advanced practices like CI-integrated checks, synthetics, observability, and touches of AI. Leandro clarifies what performance testing truly is and how it differs from assurance and load testing. You’ll also learn how to evolve legacy performance practices into agile workflows and assemble a complete continuous performance framework using tools like k6, Grafana, Locust and more!

ard_kramer

The change game of quality

Ard Kramer & Szilard Szell

09:00h - 18:00h · May 26th · Track 4

In this workshop, participants will step into the role of a quality team in a fictional company, facing dilemmas around prioritisation, stakeholder management and risk communication under constraints of time and budget. Through an engaging simulation and iterative challenges, teams will explore the Risk Appetite framework, strategies for leading change, and the impact of technology maturity. Facilitators will introduce management challenges and guide reflection after each round. Attendees will sharpen skills in persuasion, resilience, and leadership, leaving equipped to identify, prioritise, and implement improvements that drive meaningful organisational change and strengthen product quality

Conference Day · 27th May

rob_sabourin

(KEYNOTE) The Generative AI Revolution: A Compass for Test Leaders Through Change

Rob Sabourin

09:45h - 10:35h · May 27th · Track 1

In this keynote, Rob Sabourin explores how Generative AI is reshaping testing leadership & transforming how teams manage change. Drawing on over forty years of experience, he offers a practical framework to help test leaders, programmers, and product owners navigate the intersection of technology disruption and business pressure. Rob shows how testing fundamentals can be decoupled from specific tools or technologies, focusing instead on evolving risk models and adaptive strategies. Attendees will gain insights into leveraging Generative AI responsibly, maintaining agility, and viewing testing as a continuous process of learning, innovation, and risk management in an ever-changing world.

ard_kramer

When Visions Collide: Lessons from a Cross-Border Testing Challenge

Ard Kramer

11:05h - 11:50h · May 27th · Track 1

What happens when two parties, each with strong opinions, must work together to deliver a critical system? Add evolving technology, security demands, and the pressure of continuous delivery, and you have a recipe for complexity. As a consultant, Ard was asked to step in and align a Dutch organisation and its foreign supplier on a common testing approach. The goal? Ensure that essential facilities in the Netherlands remain up to standard. Sounds straightforward – until you realize the two sides have very different ideas about what ‘quality’ means. The supplier’s view was simple: ‘We have requirements. If there’s a test case for each requirement, we’re done.’ But the users wanted more than a checklist – they needed proof the system would actually solve their problem. Two visions, one project, and a lot at stake.

daniela_maissi

How Attackers Are Using AI to Compromise Systems: Risks, Technical Insights, and Defensive Frameworks

Daniela Maissi

11:05h - 11:50h · May 27th · Track 2

This talk explores how adversaries are leveraging AI to enhance offensive capabilities- automating reconnaissance, generating exploits and poisoning models. The session combines real attack examples, demos and professional frameworks like MITRE ATLAS and Owasp Top 10 for LLM to help QA to understand, detect and mitigate AI-Driven threats.

raisa_lipatova

Replaying Logs as a Load Profile for MongoDB: Myth or Reality?

Raisa Lipatova

12:00h - 12:45h · May 27t · Track 1

In this talk, Raisa Lipatova will share her team’s experience developing a MongoDB log replay framework to conduct performance and capacity testing. She will explain why synthetic load approaches were insufficient and how replaying production logs offered a more accurate alternative. The talk will highlight major challenges – such as data anonymisation, handling diverse query types, managing caching effects, and scaling load – and the practical solutions created. Attendees will gain insights into real-world lessons, key metrics, and surprising discoveries, as well as learn how log replay can uncover bottlenecks, validate system capacity, and enable infrastructure optimisation.

bill_watson

Using AI to create UAT tests, a case study from BBC radio

Bill Watson

12:00h - 12:45h · May 27th · Track 2

The BBC’s UAT team wanted to explore whether AI could support the generation of test cases, so they designed a study using the rollout of an upgraded radio playout system as their test bed. This talk presents the methodology behind that study, including the KPIs tracked and the evaluation framework used. You’ll gain practical insights into how to structure a similar study in your own organisation, along with a look at the actual outcomes – what worked, what didn’t, and what was learned.

jalpa_soni

The Rabbit Hole of Grammar: LLMs on Trial with QA Adventures

Jalpa Soni

12:00h - 12:45h · May 27th · Track 3

Generative artificial intelligence and observability are revolutionising Quality Assurance (QA), making it possible to automate testing, predict bugs and optimise software quality with data-driven approaches. In this talk you will explore how QA is being redefined, from intelligent test case creation, synthetic data generation, to real-time bug detection and bug simulation with AI-powered Chaos Testing. Through real-world case studies and innovative tools, in this talk with Diogo Goncalves Candeias, you will discover how to improve testing efficiency, reduce reliance on manual testing and adopt a strategic approach based on AI and data.

nicole_van_der_hoeven

(KEYNOTE) Asimov's Zeroth Law of Robotics: Observability for AI

Nicole van der Hoeven

14:15 - 15:05h · May 27th · Track 1

A robot may not harm humans. A robot must obey humans. A robot must protect its own existence. These are Isaac Asimov’s three Laws of Robotics, created to govern the ethical programming of artificial intelligences. From the Butlerian Jihad to Skynet to cylons, we’ve been immortalizing our collective nightmares about artificial intelligence for years. But there’s an unmentioned law that comes as a prerequisite to all of that: a robot must be observable.

adha_hrusto

AI Meets API: Transforming Automated Testing Processes

Adha Hrusto

15:15h - 16:00h · May 27th · Track 1

This talk explores the transformative integration of AI into automated API testing. By leveraging advanced prompt engineering techniques and large language models, we generate test models in a predefined, structured format crafted to reflect both contract and system testing requirements from comprehensive business needs and OpenAPI specification. These models are then seamlessly transformed into executable test code through a templating mechanism, which guarantees consistency and reliability. The result is a powerful process that leverages AI to explore a wide spectrum of tests while ensuring that the stability and quality of the final code remain intact.

dragan_spiridonov

Classical QE + Agentic Principles: Building Bridges, Not Burning Them

Dragan Spiridonov

15:15h - 16:00h · May 27th · Track 2

Dragan Spiridonov shares production lessons from achieving 60% faster test generation by extending classical QE practices with agentic principles rather than replacing them. Learn how TDD, exploratory testing, and context-driven methods become more powerful when amplified by AI agents – not obsolete. Through real examples of what worked (agent-assisted exploration, encoded testing heuristics, strategic human checkpoints) and what failed (blind autonomy without classical foundations), attendees gain a practical five-step playbook for evolving their testing practices. This talk cuts through AI hype to show how bridge-building outperforms replacement strategies.

gil_zilberfeld

(MASTERCLASS) The AI Oracle: Building and Using Golden Datasets for Quality

Gil Zilberfeld

15:15h - 17:15h · May 27th · Track 3

How do you test something that gives a different, “correct” answer every time? In this hands-on masterclass, Gil Zilberfeld tackles this fundamental challenge in AI quality. He introduces a disciplined, practical approach to creating a ‘golden dataset’: a curated source of truth that acts as an oracle for validation. Attendees will learn how to manually build their own golden set from fuzzy AI responses and then use it to power simple, automated checks. Gil will also cover how to manage this dataset as a living asset, providing a powerful, repeatable technique for ensuring AI quality.

szilard_szell

Trust in AI Agents: The Communication & Oracle Problems

Szilárd Széll

16:30h - 17:15h · May 27th · Track 1

In his talk talk Szilard would like to discuss the challenges of communication and oracle in AI Agents in the SW development process (Developing or testing agents), and sharing the key concepts of designing the new AI Agentic Development process to provide high quality outcomes, focusing on the three levels of: LLMs, AI Agent design and AI Orchestration.

sharath_byregowda

From Flaky Pipelines to Strategic Business Driver: A 2+ Year Journey of Continuous Marginal Gains

Sharath Byregowda

16:30h - 17:15h · May 27th · Track 2

In this talk, Rafaela Azevedo tackles the frontiers of software testing, deep diving into the exciting world of Web3 applications. She’ll equip you with the tools and know-how to confidently end-to-end (E2E) tests for these decentralised experiences. Get ready to figure out the power of Synpress, an E2E testing framework.

Conference Day · 28th May

pablo_garcia

(KEYNOTE) Doing things right the First Time, is the fastest and cheapest way to create software

Pablo Garcia

09:45h - 10:35h · May 28th · Track 1

Most projects starts out with uncompleted requirements, new teams and no test strategy, but with big hopes of creating the “Perfect Software”. That is no problem, Agile methods supports fast development and high Quality, right? Pablo will talk about what most often happens in a development team at the start and how it impacts the delivery. The waste produced often in teams does not only affect the project, it affects the company. Pablo will talk deeper of the effects and what you need to do to avoid it. In his talk he explains how to understand your environment and start taking the steps you need to change the negative spiral of creating waste to start being productive again.

rodrigo_martin

Observability in testing: Lessons from the real world

Rodrigo Martin

11:05h - 11:50h · May 28th · Track 1

Rodrigo will break down how to use Observability in software testing to spot problems early and keep systems running well. He’ll cover the core ideas behind Observability and show practical ways to track testing events using practical examples. These methods can be applied to whatever tools you use. The talk is for anyone looking to improve their software’s quality and tackle the challenges of working with complex distributed systems. Attendees will leave with solid techniques to boost their testing practices.

thibault_schnellbach

Improving accessibility shifting testing left

Thibault Schnellbach & Nohelia Borjas

11:05h - 11:50h · May 28th · Track 2

In this talk, Thibault Schnellbach & Nohelia Borjas shares how in the context of the European Accessibility Act 2025, they have integrated accessibility testing as part of their shift left testing strategy in particular using automated tests. They’ll present their approach to modify the mindset regarding the overall QA activities to test earlier within the development lifecycle and share a demo of the technical solution they have implemented in our organisation to fulfill the accessibility standards. They’ll present the components they have integrated into their test pipeline ‘AXE core + Allure report’, highlight the main impact in the code repository and show how to run the test in both the local environment and the project pipeline.

alex_cusmaru

Lessons Learnt from Testing Safety Critical Systems

Alex Cusmaru

12:00h - 12:45h · May 28th · Track 1

In his talk, after providing some context about what testing in a regulated environment means, Alex will walk you through five ‘hidden skills’ he discovered over 16 years of testing, integrating and validating complex, safety-critical railway systems. He hopes you will discover some underrated skills that can turn good testing into great testing.

cedomir_zivkovic

From Console to Contract: Ethical API Hacking & Debugging That Wins Practical API testing, DevTools tricks and a true endpoint-hacking story that led to a client win.

Cedomir Zivkovic

12:00h - 12:45h · May 28th · Track 2

Discover practical techniques for API testing, browser debugging, and pragmatic QA that every tester should know. This talk covers Postman workflows, Console and Network tab inspection, and safe “hacking mindset” approaches to uncover hidden endpoints and validate system behavior. Learn how to turn findings into automated tests, high-quality bug tickets, and clear acceptance criteria for Stories and Tasks. Attendees will leave with actionable skills to improve testing efficiency, accuracy, and real-world impact.

rhian_lewis

(KEYNOTE) The Glitch in the Matrix: what science fiction can teach us about software quality

Rhian Lewis

14:15h - 15:05h · May 28th · Track 1

To understand the present, we should look to the past. But in software we need to look to the future. The Matrix, Minority Report, 2001: A Space Odyssey, Westworld, RoboCop, and The Terminator all imagine worlds where software doesn’t always do what its creators intended. This keynote explores science fiction as a source of potential quality failings. As reality catches up with sci-fi, these aren’t just fun parallels. They’re early warnings.This keynote connects some famous fictional bugs to modern testing practices, showing how testers can expand their threat models, heuristics, and ethics filters for the new reality we are building.

leandro_melendez

The Survival Guide to the Perfpocalypse

Leandro Melendez

15:15h - 16:00h · May 28th · Track 1

The world of performance testing is changing faster than ever, and many engineers feel lost as the landscape of QA, IT, and everything around them keeps accelerating. In this talk, Leandro shares the essential rules to stay relevant in a field where even major consultancies report getting fewer performance projects. You’ll explore a survival rulebook covering when load testing matters, how to expand your performance skills, the rising role of observability, synthetics, teamwork, and even a touch of chaos engineering. All of this comes delivered in Leandro’s signature style with storytelling, humor, dramatism, and the occasional taco reference.

diego_molina

From Polling to Events: The Future of Browser Automation

Diego Molina

15:15h - 16:00h · May 28th · Track 2

Selenium, Playwright, and Cypress have been competing over the last years to become the leader in browser automation. Users have been switching between tools, then switching again, and sometimes switching back. This talk will explain how they offer an event-based approach to common automation use cases. The attendants will observe how the traditional approach works and how the event-based approach is applied in practice. This is not about Selenium vs. Playwright vs. Cypress. This is about how each tool, through its own approach and implementation, is targeting more reliable automation through browser events.

alap_patel

(MASTERCLASS) Beyond Test Automation: Harnessing Agentic AI and Multi-Agent workflows

Alap Patel

15:15h - 16:55h · May 28th · Track 3

It’s crucial to comprehend the primary risks and vulnerabilities to which our products are exposed to during their lifetime, because “those who do not learn history are condemned to repeat it”. At this masterclass with Sara Martínez Giner, you’ll take on the challenge of creating a quality and cybersecurity culture. You’ll see code analysis tools and take a hands on approach, breaking code, hacking labs security test automation using ChatGPT as support tool CI: Github Actions.

andrei_contan

Making room for testing - empowering teams to deliver business value with confidence

Andrei Contan

16:10h - 16:55h · May 28th · Track 1

Many teams proudly report “80%+ test coverage” and still ship outages. On paper, projects show solid unit and integration tests. In reality, they have a stubborn, double-digit change failure rate, frequent production issues, and a growing backlog of defects. High coverage is telling the level of busyness, not the confidence of a safe change. This talk shares how teams confronted that gap and rebuilt approach to quality using DORA metrics and actionable, cross-platform, cross-craft, quality practices.

fernando_teixeira

Testing Cloud Applications Without Breaking the Bank: Testcontainers and LocalStack

Fernando Teixeira

16:10h - 16:55h · May 28th · Track 2

How do you test an application that relies heavily on cloud services? Do you have a specific strategy for testing it, or do you simply run your tests regardless of the infrastructure costs? Testing cloud applications doesn’t have to break the bank—or your sanity. With tools like Testcontainers and LocalStack (free and open-source tools), you can create robust and realistic tests that mimic production cloud services, all without costly infrastructure or complex setups. By the end of this talk, you will have actionable insights on how to optimize your testing process, lower your infrastructure costs, and ensure your cloud applications are ready for production.