AI Security Consulting
Security for teams building with LLMs
Prompt injection, data leakage, agentic risk, supply chain exposure: these aren't caught by traditional security tooling. I work with engineering and security teams to find and fix them before they become incidents.
Security Assessment
A structured technical review of your LLM deployment. Findings ranked by severity, with reproduction steps and specific remediation guidance.
Learn more →
Implementation Consulting
Embedded alongside your team during the build, designing security in from the start rather than retrofitting it after launch.
Learn more →
Team Enablement
Hands-on workshops that give your engineers and security team shared vocabulary and practical skills for LLM-specific risks.
Learn more →
Focus areas
What I assess and secure
Specific expertise in how LLM systems fail, not generic cybersecurity rebranded.
Prompt Injection
Direct and indirect injection, jailbreaks, and multi-turn manipulation that override system intent.
Data Leakage
Paths where training data, system prompts, or user data can be extracted through model outputs.
Agentic System Risk
Tool-using agents introduce privilege escalation paths and confused-deputy attacks that don't exist in traditional systems.
Supply Chain & Model Risk
Third-party models, RAG data sources, fine-tuning datasets, and embedding pipelines as attack vectors.
Access Controls
How user identity, permissions, and data isolation are enforced (or not) across your LLM infrastructure.
Monitoring & Response
Logging, anomaly detection, and incident response playbooks built for how LLM systems actually fail.
My background is in securing critical systems across regulated industries. More recently I've been working with enterprise clients on building and deploying LLM-based systems: model selection, RAG pipelines, agentic workflows, and production deployment on Kubernetes.
That combination puts me in an unusual position: I understand how these systems are built, and I understand how they're broken. That's the gap Molntek exists to close.
Weekly AI security insights
Practical guidance on LLM risks, prompt injection, and agentic security, delivered every week. Free.
Ready to talk?
Tell me what you're building. I'll tell you honestly whether I can help and what that would look like.