Video source is missing or invalid.
Engineering Students
Industry Professional Software Engineers
AI Engineers
Cybersecurity Engineers
AI Security Engineers
Deploy, attack, and defend LLM-based systems in simulated environments. Learn real-world AI security practices, prompt injection, data exfiltration, and threat modeling for large language models.
Duration: 3 days
Level: Intermediate
Format: Virtual / On-site
Labs: Yes
Certification: Included
This course introduces participants to the ethical hacking and security of LLMs. You will gain hands-on experience with attack and defense strategies, secure prompt pipelines, and AI system hardening. By the end of this course, you will be able to identify and mitigate threats to LLM-powered services in cloud and data center environments.
The course is project-based and interactive, ensuring learners stay engaged while completing real-world simulations that mirror industry practices.
Threat modeling for LLM apps (prompt injection, data exfiltration, jailbreaks)
Secure prompt/response pipelines and guardrails
Secrets, tokens, and key rotation in cloud KMS
Network policies & egress filtering for model I/O
Monitoring: model misuse, drift, anomaly alerts
Incident response runbooks for AI services
Hands-on attack/defense labs for LLMs
LLM Platforms (OpenAI, HuggingFace)
Kubernetes & Istio
NVIDIA GPUs
Cloud KMS (AWS, GCP, Azure)
Open-source AI guardrails
Engineering students interested in AI security
Software engineers working with AI/ML systems
AI engineers deploying LLMs
Cybersecurity professionals focusing on AI infrastructure
AI security engineers
Python fundamentals
Basic Kubernetes and cloud knowledge
Introduction to Ethical Hacking for LLMs
Threat Modeling & Attack Vectors
Secure Prompt Pipelines & Guardrails
Secrets, Tokens & Key Management
Network Policies & Egress Controls
Monitoring & Incident Response
Capstone Lab: Attack & Defend an LLM Deployment
Simulate attacks on a deployed LLM system
Implement defense strategies
Complete a final assessment to validate learning
Learn how to detect misuse, identify anomalies, and respond to security incidents in LLM-powered applications. This lesson emphasizes hands-on monitoring of AI models in production and building actionable incident response runbooks.
Apply everything learned in previous modules to a simulated LLM environment. Practice ethical hacking techniques and implement defenses to secure the system. This lab is fully hands-on and project-based, designed to reinforce real-world skills.
— 28 February 2017