Services

AI Security Assessment

A structured technical review of your LLM deployment: identifying vulnerabilities, attack surfaces, and gaps before they become incidents.

OWASP Agentic Top 10 contributor · CISSP · PhD · 15+ years enterprise security

What I assess

LLM Attack Surfaces

Prompt injection, data leakage, jailbreaks, multi-turn manipulation: the full attack surface of your LLM deployment.

Agentic System Security

Tool use boundaries, privilege escalation, confused deputy attacks: security for agents that take actions.

Infrastructure & Access Controls

Authentication, data isolation, logging, secrets exposure: the foundation your LLM runs on.

Supply Chain & Model Risk

Third-party models, RAG data sources, fine-tuning data, vector store security: the risks in your stack.

How it works

Discovery call

I learn about your system, your stack, and what you're most concerned about.

Scoping

Clear scope document before anything starts. Timeline, access requirements, deliverables.

Assessment

Typically 2–4 weeks. I work with your team to get the access I need without disrupting operations.

Report & walkthrough

Written findings delivered, followed by a live session with your team.

Pricing

Standard

Single agent or LLM system

€ 18,000
  • Single agent or LLM system, single environment
  • 3-week turnaround
  • Written findings report
  • Prioritized remediation roadmap
  • Live walkthrough session
  • 90-day free re-test

Multi-agent

2–4 connected agents

€ 28,000
  • 2–4 connected agents
  • Multiple integrations
  • 4-week turnaround
  • All Standard deliverables

Common questions

Will you sign an NDA?

Yes. A mutual NDA is available before kickoff. The discovery call itself is enough to scope without one.

Is it too early if we haven't deployed yet?

No. Pre-deployment is when assessment is most valuable. Most of my engagements are with systems still in active build.

Do you need access to production data?

No. I don't access production data. Assessments run against staging environments or synthetic data your team provides.

What if you find serious problems?

That's the point. Remediation guidance and the 90-day re-test are included, so the engagement ends with a safer system, not just a list of findings.

Can we use the report for procurement or compliance?

Yes. The methodology section is designed to satisfy procurement reviewers asking for evidence of an independent third-party AI security assessment.

Do you carry professional liability insurance?

Yes. €5M professional liability coverage; certificate available on request.

Why is the price lower than a typical audit firm's?

Because the methodology is repeatable and I'm not running an audit firm. The price reflects what the work actually takes, not what the market will tolerate.

Why this matters

Most teams don't know where their AI system is vulnerable until a third party finds it. By then, the damage is done and the remediation is expensive.

An AI Security Assessment gives you a structured, technical review of your LLM deployment before that happens. You get a written report with every finding documented: severity level, how to reproduce it, and specific remediation guidance your engineers can act on directly.

Methodology

Assessments follow the Molntek Agentic AI Security Methodology: a structured framework mapped to the OWASP Agentic Top 10 (ASI01–ASI10), with cross-references to NIST AI RMF, MITRE ATLAS, and EU AI Act Article 15 in the deliverable. The same methodology runs every engagement, which is why the price is fixed.

What's included

A written findings report with severity rankings, reproduction steps, and a prioritized remediation roadmap. Delivered with a live walkthrough session for your engineering and security teams.

Every tier includes a 90-day free re-test if you implement the recommended remediations. The goal is a safer system, not a finding count.

See an example report

A sanitized sample report from a synthetic engagement is available on request. It shows the full report structure, severity ranking format, OWASP Agentic Top 10 mapping, and remediation roadmap layout you would receive at the end of a real engagement.

Request the sample report →

Sounds like a fit?

A short call is usually enough to work out whether this is what you need and what an engagement would look like.