Simply enter your keyword and we will help you find what you need.

What are you looking for?

Security Testing + AI

← Back to timetable

MAIN TRACK TALK

How Attackers Are Using AI to Compromise Systems: Risks, Technical Insights, and Defensive Frameworks

This talk explores how adversaries are leveraging Artificial Intelligence to enhance offensive capabilities – automating reconnaissance, generating exploits, and poisoning models – while exposing how unregulated or ‘vibe-driven’ AI use can quickly become a security liability. This talk combines real attack examples, demos, and professional frameworks like MITRE ATLAS and the OWASP Top 10 for LLM Applications to help QA and security teams understand, detect, and mitigate AI-driven threats in the software development lifecycle.

Attackers are already using AI to amplify every stage of the attack chain. Integrating AI without security validation introduces invisible attack surfaces. QA and security testers must adopt adversarial testing, leverage OWASP and MITRE frameworks, and make AI security testing a baseline practice – not an afterthought.


What you’ll learn


Understand how attackers are using AI to scale reconnaissance, exploitation, and data exfiltration.


Identify the new categories of risk introduced by Large Language Model (LLM) integrations.


Map AI-driven attacks and vulnerabilities to MITRE ATLAS and OWASP Top 10 for LLMs.


Design practical QA security tests for prompt injection, model misuse, and insecure output handling.


Establish secure development and validation practices for AI-augmented systems.


Session details

Track 2

11:05h - 11:50h · May 27th

40 mins + 5 mins Q&A

Security Testing + AI

General Level

Talk in English, translated to Spanish

daniela_maissi

Daniela Maissi

Security Researcher, DevSecOps and Penetration Tester with 11 years of experience in IT. Currently working as security researcher at Owasp Foundation and prev EC Council as one of the winners of Top Researchers 2023.