Team Enablement Workshop

Hands-on sessions that give your engineering and security teams shared vocabulary and practical skills for identifying and mitigating LLM-specific risks.

Get in touch about this →

The problem

Most security engineers haven’t built LLM systems. Most LLM engineers don’t think like security engineers. This gap is hard to spot until something goes wrong.

Generic security awareness training doesn’t cover prompt injection, agentic risk, or RAG security. Vendor documentation doesn’t prepare your team for real attacks. This workshop does.

What I cover

Core LLM Security Concepts

  • How LLMs work, in enough depth to understand the attack surface
  • The OWASP LLM Top 10 — applied to real systems, not just theory
  • Prompt injection and indirect prompt injection — live demonstrations with your own systems or realistic examples
  • Data extraction, system prompt leakage, and jailbreak techniques

Agentic and Tool-Using Systems

  • Security properties that emerge when LLMs can take actions
  • Confused deputy attacks and privilege escalation through agents
  • Designing tool boundaries and input validation for agentic systems

Practical Threat Modeling for AI

  • How to extend your existing threat modeling practice to cover LLM components
  • What to add to your security review process for AI features
  • Building checklists and testing criteria your team can use independently after the workshop

Hands-On Exercises

We work through realistic scenarios using your system or a representative example. Your team practices identifying vulnerabilities and evaluating mitigations — not just listening to slides.

Formats

Half-day — Core concepts and one hands-on exercise. Good for leadership or mixed technical/non-technical audiences getting oriented.

Full-day — Deep technical coverage with multiple exercises. For engineering and security teams who will be building or reviewing AI systems.

Multi-session series — For teams in the middle of an ongoing AI build who want recurring guidance as their system evolves.

Who this is for

  • Engineering teams building LLM-powered features who want their developers thinking about security as they build
  • Security teams that need to get up to speed on LLM-specific risks so they can effectively review AI systems
  • Organizations about to launch an AI product who want their whole team aligned on the risk landscape before go-live
  • Teams that completed a security assessment and want to build internal capability to address findings going forward

What’s included

  • Pre-workshop call to tailor content to your team and systems
  • Workshop materials your team keeps afterward
  • Follow-up Q&A session after the workshop

Get in touch to discuss your situation →

Sound like a fit?

A short call is usually enough to figure out whether this is what you need and what it would look like.