AI Security Consulting

Security for teams building with LLMs

Prompt injection, data leakage, agentic risk, supply chain exposure: these aren't caught by traditional security tooling. I work with engineering and security teams to find and fix them before they become incidents.

Security Assessment

A structured technical review of your LLM deployment. Findings ranked by severity, with reproduction steps and specific remediation guidance.

Implementation Consulting

Embedded alongside your team during the build โ€” designing security in from the start rather than retrofitting it after launch.

Team Enablement

Hands-on workshops that give your engineers and security team shared vocabulary and practical skills for LLM-specific risks.

Focus areas

What I assess and secure

Specific expertise in how LLM systems fail, not generic cybersecurity rebranded.

Prompt Injection

Direct and indirect injection, jailbreaks, and multi-turn manipulation that override system intent.

Data Leakage

Paths where training data, system prompts, or user data can be extracted through model outputs.

Agentic System Risk

Privilege escalation and confused-deputy attacks introduced by tool-using agents, with no counterpart in traditional systems (see the code sketch after this list).

Supply Chain & Model Risk

Third-party models, RAG data sources, fine-tuning datasets, and embedding pipelines as attack vectors.

Access Controls

How user identity, permissions, and data isolation are enforced (or not) across your LLM infrastructure.

Monitoring & Response

Logging, anomaly detection, and incident response playbooks built for how LLM systems actually fail.
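
To make the confused-deputy risk concrete, here is a minimal sketch in Python. Everything in it is hypothetical and framework-agnostic: `read_file`, `agent_step`, and the in-memory `SERVICE_DB` are illustrative names, not any real API. The structural problem is that the tool runs with the service's privileges, so if the agent executes whatever tool call the model emits, an attacker who can influence that output (for example via indirect prompt injection in retrieved content) borrows those privileges.

```python
# Minimal, hypothetical sketch of a confused-deputy risk in a tool-using
# agent. All names are illustrative, not from any specific framework.

SERVICE_DB = {
    "alice/notes.txt": "alice's private notes",
    "bob/notes.txt": "bob's private notes",
}

def read_file(path: str) -> str:
    # The tool runs with the *service's* privileges: it can read any file.
    return SERVICE_DB.get(path, "<not found>")

def agent_step(user: str, model_tool_call: dict) -> str:
    # Vulnerable version: executes the model's tool call as-is and never
    # consults `user`, so a prompt-injected model can request another
    # user's data and the service will comply.
    if model_tool_call["tool"] == "read_file":
        return read_file(model_tool_call["path"])
    raise ValueError("unknown tool")

def agent_step_safe(user: str, model_tool_call: dict) -> str:
    # Mitigation: authorize against the *requesting user's* identity
    # instead of trusting the model's (attacker-influenceable) output.
    path = model_tool_call["path"]
    if not path.startswith(f"{user}/"):
        raise PermissionError(f"{user} may not read {path}")
    return read_file(path)

# An indirect injection can steer the model into emitting this call
# on alice's behalf:
print(agent_step("alice", {"tool": "read_file", "path": "bob/notes.txt"}))   # leaks bob's data
# agent_step_safe("alice", {"tool": "read_file", "path": "bob/notes.txt"})   # raises PermissionError
```

The safe variant re-checks authority at the tool boundary using the requesting user's identity; that is the general shape of the mitigation, whatever framework you use.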

Amin Raji, Founder of Molntek

I've spent years building and deploying LLM systems for enterprise clients, working across the full stack from model selection and fine-tuning to production infrastructure on Kubernetes. The gap I kept running into: engineering teams building sophisticated AI systems without security expertise embedded in the process, and security teams asked to review systems they didn't fully understand.

Molntek is how I work with organizations that need someone who can speak both languages fluently: deep enough in AI engineering to understand how these systems actually behave, and security-focused enough to find where they break.

Ready to talk?

Tell me what you're building. I'll tell you honestly whether I can help and what that would look like.