Hands-On AI Security: Exploring LLM Vulnerabilities and Defenses

Description

As large language models (LLMs) are rapidly integrated into critical systems, securing them against emerging threats is essential. In this session, we will explore real-world vulnerabilities—including prompt injection, data poisoning, model hallucination, and adversarial attacks—and share practical defense strategies. Attendees will learn how to build effective threat models and apply secure-by-default practices.

Start time

May 28, 2025 - 2:00pm