AI Security Consulting

Security for teams building with LLMs

Prompt injection, data leakage, agentic risk, supply chain exposure: these aren't caught by traditional security tooling. I work with engineering and security teams to find and fix them before they become incidents.

🔍

Security Assessment

A structured technical review of your LLM deployment. Findings ranked by severity, with reproduction steps and specific remediation guidance.

Learn more →
🛡️

Implementation Consulting

Embedded alongside your team during the build, designing security in from the start rather than retrofitting it after launch.

Learn more →
🎯

Team Enablement

Hands-on workshops that give your engineers and security team shared vocabulary and practical skills for LLM-specific risks.

Learn more →

Focus areas

What I assess and secure

Specific expertise in how LLM systems fail, not generic cybersecurity rebranded.

Prompt Injection

Direct and indirect injection, jailbreaks, and multi-turn manipulation that override system intent.

Data Leakage

Paths where training data, system prompts, or user data can be extracted through model outputs.

Agentic System Risk

Privilege escalation and confused deputy attacks that tool-using agents reintroduce in forms traditional security controls don't cover.

Supply Chain & Model Risk

Third-party models, RAG data sources, fine-tuning datasets, and embedding pipelines as attack vectors.

Access Controls

How user identity, permissions, and data isolation are enforced (or not) across your LLM infrastructure.

Monitoring & Response

Logging, anomaly detection, and incident response playbooks built for how LLM systems actually fail.

Amine Raji, Founder of Molntek

My background is in securing critical systems across regulated industries. More recently I've been working with enterprise clients on building and deploying LLM-based systems: model selection, RAG pipelines, agentic workflows, and production deployments on Kubernetes.

That combination puts me in an unusual position: I understand how these systems are built, and I understand how they're broken. That's the gap Molntek exists to close.

Weekly AI security insights

Practical guidance on LLM risks, prompt injection, and agentic security, delivered every week. Free.

We respect your privacy. Unsubscribe anytime.

Ready to talk?

Tell me what you're building. I'll tell you honestly whether I can help and what that would look like.