Elephant Scale
IntermediateCybersecurity for AI

Prompt Hacking & GenAI Defense

Day 2 turns security teams into red-teamers. Execute real prompt-injection, jailbreak, and RAG exploits, then design layered guardrails, content safety filters, and monitoring pipelines to stop them in production.

Prerequisites

  • Completion of AI Security Essentials or equivalent hands-on exposure
  • Python development plus experience calling LLM APIs
  • Understanding of API and application security fundamentals

Who Should Attend

  • Data Scientists
  • ML Engineers
  • Research Engineers building production models
  • Mid-level engineers expanding their skill set

Course Outline

  1. 1Prompt-injection playbooks: direct, indirect, cross-agent, and delayed payloads
  2. 2Live jailbreaking techniques including DAN, role-play, and multilingual attacks
  3. 3System prompt extraction demos with blue-team countermeasures
  4. 4Designing pre/in/post-processing guardrails and structured output enforcement
  5. 5Securing RAG pipelines and vector databases
  6. 6Implementing Bedrock Guardrails, Azure AI Content Safety, and HarmBench red teaming

Learning Outcomes

  • Execute and defend against modern prompt-injection attacks
  • Design multi-layer guardrails without degrading UX
  • Harden RAG pipelines, embeddings, and retrievers
  • Operationalize Bedrock/Azure safety tooling and HarmBench workflows
  • Document repeatable red-team scenarios for GenAI apps

Ready to Get Started?

Contact us to schedule training for your team or inquire about upcoming sessions.