Advanced Cybersecurity for AI
Agentic AI Security & Governance
Day 3 focuses on securing autonomous, tool-using AI agents at enterprise scale. Architect zero-trust controls, map threats with MITRE ATLAS and STRIDE-AI, and align deployments with the EU AI Act, NIST RMF, and ISO 42001.
Prerequisites
- Intermediate AI security knowledge or 2+ years AI/ML engineering
- Experience with cloud infrastructure and MLOps toolchains
- Familiarity with enterprise security or compliance frameworks
Who Should Attend
- Data Scientists
- ML Engineers
- Research Engineers building production models
- Senior engineers scaling production systems
- Technical architects designing enterprise solutions
Course Outline
1. Agentic AI threat landscape: scheming, goal misalignment, tool abuse
2. Zero-trust reference architectures for autonomous agents
3. MITRE ATLAS + STRIDE-AI threat modeling and kill-switch design
4. Adversarial ML defenses covering extraction, inference, and poisoning
5. MLOps security: secure CI/CD, container hardening, secrets rotation
6. Governance and compliance alignment with the EU AI Act, NIST RMF, and ISO 42001
Learning Outcomes
- Architect zero-trust guardrails for multi-agent systems
- Produce enterprise-ready AI threat models using MITRE ATLAS
- Implement adversarial ML defenses across the model lifecycle
- Secure AI platform operations, secrets, and supply chain
- Align AI governance programs with global regulatory frameworks
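To give a flavor of the guardrail patterns covered in the course, here is a minimal sketch of a default-deny tool allowlist with a kill switch for agents. All names (`ToolPolicy`, `KillSwitch`, the agent and tool IDs) are hypothetical illustrations, not course material or a real library:

```python
# Minimal sketch of a zero-trust tool guardrail for an AI agent.
# Zero-trust here means: no agent may call any tool unless explicitly
# granted, and a tripped kill switch overrides every grant.

class KillSwitch:
    """Global stop control: once tripped, all tool calls are denied."""
    def __init__(self):
        self.tripped = False

    def trip(self):
        self.tripped = True


class ToolPolicy:
    """Default-deny allowlist: an agent may call only tools it was granted."""
    def __init__(self, kill_switch):
        self._grants = {}          # agent_id -> set of allowed tool names
        self._kill = kill_switch

    def grant(self, agent_id, tool_name):
        self._grants.setdefault(agent_id, set()).add(tool_name)

    def authorize(self, agent_id, tool_name):
        if self._kill.tripped:
            return False           # kill switch overrides all grants
        return tool_name in self._grants.get(agent_id, set())


ks = KillSwitch()
policy = ToolPolicy(ks)
policy.grant("billing-agent", "read_invoice")

print(policy.authorize("billing-agent", "read_invoice"))  # True
print(policy.authorize("billing-agent", "send_wire"))     # False: never granted
ks.trip()
print(policy.authorize("billing-agent", "read_invoice"))  # False: kill switch
```

Production systems would enforce this policy at a gateway between the agent and its tools rather than in-process, but the default-deny shape is the same.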
Related Courses
Beginner
AI Security Essentials
Day 1 of the intensive teaches defenders how GenAI and LLM apps are attacked. Map new AI threat surfaces, apply OWASP LLM Top 10 mitigations, and build threat models that drive pragmatic controls.
Intermediate
Prompt Hacking & GenAI Defense
Day 2 turns security teams into red-teamers. Execute real prompt-injection, jailbreak, and RAG exploits, then design layered guardrails, content safety filters, and monitoring pipelines to stop them in production.
Ready to Get Started?
Contact us to schedule training for your team or inquire about upcoming sessions.