All posts
OWASPSecurityLLM

OWASP LLM Top 10 Explained: What Each Risk Means in Practice

AE
Autrace Engineering
·March 17, 2026·12 min read

The OWASP LLM Top 10 catalogues the most critical security risks in LLM applications. Here's each risk with concrete examples and honest notes on coverage.

LLM01: Prompt Injection

Attacker-controlled input overrides system instructions. Direct (user crafts the attack) and indirect (attack embedded in retrieved data) variants. Autrace applies pattern-based detection in the inference path. Partially covered.

LLM02: Insecure Output Handling

Model output passed to downstream systems (shell, browser, database) without sanitization. Autrace filters input - output sanitization is an application responsibility. Input side only.

LLM03: Training Data Poisoning

Malicious data injected into training sets. This risk exists at training time, before deployment. Autrace operates at inference time. Out of scope.

LLM04: Model Denial of Service

Requests designed to consume excessive compute - extremely long contexts, recursive prompts. Autrace enforces payload size limits (default 1MB) and per-key rate limits. Covered.

LLM05: Supply Chain Vulnerabilities

Compromised model weights, dependencies, or fine-tuning datasets. A build/deployment-time risk. Autrace provides runtime controls only. Out of scope.

LLM06: Sensitive Information Disclosure

Sensitive data in request payloads forwarded to external APIs. Autrace's PII filter redacts or blocks sensitive entities before they reach the model. Covered.

LLM07: Insecure Plugin Design

LLM plugins with excessive permissions enable privilege escalation. An application architecture concern - how you design the tools the model can call. Autrace doesn't manage plugin permissions. Application responsibility.

LLM08: Excessive Agency

Models granted too many permissions take destructive actions. Autrace policy rules can block requests to specific models or with specific content, providing partial mitigation. Partially mitigable.

LLM09: Overreliance

Users or systems trust model outputs without verification, leading to decisions based on hallucinations. A human-factors and product design problem. Audit trails help with accountability but don't prevent overreliance. Design responsibility.

LLM10: Model Theft

Systematic extraction of model behavior via repeated queries. Rate limiting increases extraction cost significantly. Full prevention requires additional controls at the provider level. Partially covered (rate limiting).

← Back to blogContact Enterprise Sales →