Services
Team Workshop
Hands-on sessions that give your engineering and security teams shared vocabulary and practical skills for identifying and mitigating LLM-specific risks.
OWASP Agentic Top 10 contributor · CISSP · PhD · 15+ years enterprise security
What I cover
- LLM fundamentals: how LLMs work, the OWASP LLM Top 10, prompt injection, data extraction, and jailbreak techniques
- Agentic security: security properties when LLMs take actions, confused deputy attacks, and designing tool boundaries
- Threat modeling: extending your existing threat modeling to cover LLM components, and building checklists and testing criteria
- Hands-on exercises: realistic scenarios using your system or a representative example, where your team practices identifying vulnerabilities
What's included
Pre-workshop call
Tailor content to your team and systems before we begin.
Workshop delivery
Core concepts, hands-on exercises, and practical threat modeling for your actual systems.
Materials & follow-up
Workshop materials your team keeps afterward, plus a follow-up Q&A session.
Pricing
Half-day
3 to 4 hours
- Core concepts and one hands-on exercise
- Good for leadership or mixed audiences
- Workshop materials included
- Follow-up Q&A session
Full-day
6 to 7 hours
- Deep technical coverage
- Multiple hands-on exercises
- For engineering and security teams
- Workshop materials included
- Follow-up Q&A session
Multi-session series
3 to 5 sessions over 4–8 weeks
- Ongoing guidance as your system evolves
- For teams mid-build on an AI product
- All materials and sessions included
Common questions
Can you tailor the content to our team and systems?
Yes. The pre-workshop call is specifically for tailoring content to your team, your systems, and your concerns.
Will the exercises touch our production systems?
No. We use staging environments, synthetic data, or representative examples. Nothing touches production.
Should engineering and security attend together?
Yes. The workshop is designed for mixed audiences: engineering and security teams attend together, and the exercises are scoped to different skill levels.
What do we keep afterward?
Workshop materials, checklists, and testing criteria your team can use independently after the workshop ends.
We're in the middle of building an AI product. Is this still a fit?
Yes. The multi-session series is designed for exactly that: guidance spread across sessions as your build progresses.
The problem
Most security engineers haven’t built LLM systems. Most LLM engineers don’t think like security engineers. This gap is hard to spot until something goes wrong.
Generic security awareness training doesn’t cover prompt injection, agentic risk, or RAG security. Vendor documentation doesn’t prepare your team for real attacks. This workshop does.
Who this is for
- Engineering teams building LLM-powered features who want their developers thinking about security as they build
- Security teams that need to get up to speed on LLM-specific risks so they can effectively review AI systems
- Organizations about to launch an AI product who want their whole team aligned on the risk landscape before go-live
- Teams that have completed a security assessment and want to build the internal capability to address findings going forward
Pricing covers preparation, delivery, materials, and a follow-up Q&A session. Travel costs are additional for in-person delivery outside the Nordic region.
Sounds like a fit?
A short call is usually enough to figure out whether this is what you need and what it would look like.