Intermediate
Cybersecurity for AI
Prompt Hacking & GenAI Defense
Day 2 turns security teams into red-teamers. Execute real prompt-injection, jailbreak, and RAG exploits, then design layered guardrails, content safety filters, and monitoring pipelines to stop them in production.
Prerequisites
- Completion of AI Security Essentials or equivalent hands-on exposure
- Python development experience, including calling LLM APIs
- Understanding of API and application security fundamentals
Who Should Attend
- Data Scientists
- ML Engineers
- Research Engineers building production models
- Mid-level engineers expanding their skill set
Course Outline
1. Prompt-injection playbooks: direct, indirect, cross-agent, and delayed payloads
2. Live jailbreaking techniques including DAN, role-play, and multilingual attacks
3. System prompt extraction demos with blue-team countermeasures
4. Designing pre/in/post-processing guardrails and structured output enforcement
5. Securing RAG pipelines and vector databases
6. Implementing Bedrock Guardrails, Azure AI Content Safety, and HarmBench red teaming
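To make the guardrail-layering idea in the outline concrete, here is a minimal Python sketch of the pre/post-processing pattern: a pre-processing filter that blocks known injection phrasing, and a post-processing check that enforces structured JSON output. The pattern list and `allowed_keys` schema are illustrative assumptions, not the course's actual ruleset, and the LLM call itself is omitted.

```python
import json
import re

# Hypothetical deny-list of common injection phrasings (illustrative, not exhaustive)
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior) instructions",
        r"reveal (your )?system prompt",
        r"you are now (DAN|in developer mode)",
    )
]

def pre_process(user_input: str) -> str:
    """Pre-processing guardrail: reject input matching known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("blocked: possible prompt injection")
    return user_input

def post_process(model_output: str, allowed_keys: set[str]) -> dict:
    """Post-processing guardrail: enforce JSON output with an allow-listed schema."""
    data = json.loads(model_output)  # fails closed if the model emits non-JSON
    unexpected = set(data) - allowed_keys
    if unexpected:
        raise ValueError(f"blocked: unexpected keys {unexpected}")
    return data

# Usage: the (omitted) LLM call sits between the two layers
safe_input = pre_process("Summarize this support ticket for me")
result = post_process('{"summary": "Printer is offline."}', allowed_keys={"summary"})
```

In practice, deny-lists like this are only one layer; the course pairs them with in-processing controls and managed services such as Bedrock Guardrails and Azure AI Content Safety.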
Learning Outcomes
- Execute and defend against modern prompt-injection attacks
- Design multi-layer guardrails without degrading UX
- Harden RAG pipelines, embeddings, and retrievers
- Operationalize Bedrock/Azure safety tooling and HarmBench workflows
- Document repeatable red-team scenarios for GenAI apps
Related Courses
Beginner
AI Security Essentials
Day 1 of the intensive teaches defenders how GenAI and LLM apps are attacked. Map new AI threat surfaces, apply OWASP LLM Top 10 mitigations, and build threat models that drive pragmatic controls.
Advanced
Agentic AI Security & Governance
Day 3 focuses on securing autonomous, tool-using AI agents at enterprise scale. Architect zero-trust controls, map threats with MITRE ATLAS & STRIDE-AI, and align deployments with EU AI Act, NIST RMF, and ISO 42001.
Ready to Get Started?
Contact us to schedule training for your team or inquire about upcoming sessions.