Every LLM call deserves a control layer.
We started Autrace because shipping AI features fast and shipping them safely shouldn't be a trade-off. Most teams skip the control layer entirely - no policy enforcement, no PII guardrails, no audit trail. That's a ticking clock, not a feature.
Autrace is the proxy that sits between your application and any LLM provider. One endpoint change. Policy enforcement, PII filtering, and cryptographically chained logs on every call - no SDK changes, no code rewrites, no trade-offs.
"Make production AI observable, auditable, and policy-enforced - by default, for every team."
How we work
We document exactly what Autrace covers and what it doesn't. Our security page lists the 5 OWASP LLM Top 10 categories we address - and the 5 we don't. No marketing theatre.
Our gateway is open-core. The core proxy, policy engine, and audit trail are MIT-licensed. You can read every line that touches your prompts. No black boxes in the trust boundary.
PII filtering, policy enforcement, and audit logging aren't add-ons. They're in the critical path of every proxied request. You can't accidentally skip them.
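To make "in the critical path" concrete, here is a minimal sketch of the pattern, not Autrace's actual implementation: the guardrail runs unconditionally inside the request handler, so no caller can route around it. The email-masking rule and the function names (`redactPII`, `proxyRequest`) are illustrative stand-ins for a real PII filter and gateway.

```typescript
// Simplified stand-in for a PII filter: mask email addresses.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function redactPII(text: string): string {
  return text.replace(EMAIL, "[REDACTED_EMAIL]");
}

function proxyRequest(prompt: string): string {
  // 1. The guardrail runs first, on every call - there is no code path
  //    that reaches the upstream provider without passing through it.
  const safePrompt = redactPII(prompt);
  // 2. Only the filtered prompt is forwarded upstream (forwarding elided).
  return safePrompt;
}

console.log(proxyRequest("Contact alice@example.com for access"));
// prints: Contact [REDACTED_EMAIL] for access
```

The design point is structural: because filtering happens inside the proxy rather than in an optional client-side SDK, skipping it would require bypassing the gateway itself.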
One environment variable swap to redirect your LLM calls. Native OpenAI SDK compatibility. Prometheus metrics. Structured JSON logs. Designed to work the way you already work.
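A sketch of what the one-variable swap looks like in practice. The variable names `AUTRACE_BASE_URL` and `AUTRACE_API_KEY` are assumptions for illustration, not documented configuration; the point is that an OpenAI-compatible gateway only changes the base URL, while the request shape stays identical.

```typescript
// With the variable unset, calls go straight to the provider; point it at a
// gateway deployment and every call is proxied with no other code changes.
const baseURL = process.env.AUTRACE_BASE_URL ?? "https://api.openai.com/v1";

function chatCompletionRequest(model: string, content: string) {
  return {
    url: `${baseURL}/chat/completions`, // same path the OpenAI API uses
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AUTRACE_API_KEY ?? ""}`,
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content }] }),
  };
}

console.log(chatCompletionRequest("gpt-4o-mini", "Hello").url);
```

The same swap works with the official OpenAI SDKs, which accept a configurable base URL, so existing client code does not need to change.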
Open-core model
The Autrace gateway - core proxy, policy engine, PII filter, audit trail - is MIT-licensed and fully self-hostable. If you want to run it on your own infrastructure, read every line of code, and never pay us a cent, that's a supported use case.
The cloud SaaS adds managed infrastructure, automatic updates, compliance certifications (SOC 2, HIPAA), and support SLAs. You pay for operations and compliance assurance - not for the right to use the software.
Timeline
First prototype of the policy-enforced LLM proxy - 3 rules, 1 provider, no logging.
PII filtering, immutable audit trail, and OpenAI-compatible proxy shipped. First closed beta users.
Unified integration layer: 30+ LLM providers accessible through a single API key. Cost-based routing added.
SSRF protection, payload size limits, prompt injection detection, OWASP LLM Top 10 audit completed.
Open-core release. Cloud Enterprise SaaS in public beta.
Working with auditors on SOC 2 Type II certification. Target completion: Q3 2026.
Dedicated infrastructure, data residency options, and a HIPAA Business Associate Agreement.
Open roles
We're a small team building foundational infrastructure for production AI. If that sounds like the right problem, we'd like to talk.
Improve gateway throughput, build new policy execution primitives, and harden the audit trail. You'll work in Node.js/TypeScript with Fastify, PostgreSQL, and Redis.
Own OWASP LLM coverage expansion, drive SOC 2 compliance, build internal security tooling, and be the first reviewer on every change that touches the policy engine.
Help engineers understand why LLM security matters before incidents happen. Write technical content, maintain example integrations, and talk to users every day.
Don't see a role that fits? Send us a note - we're always interested in strong engineers and security practitioners.