Hardening AI Agents: The Vercel AI Static Analysis Standard
The first static analysis standard for AI-native applications. Automate protection against prompt injection and unvalidated agent inputs.

AI-native applications require a new security paradigm. This plugin brings that paradigm to the Vercel AI SDK, catching prompt injection and unsafe agent patterns automatically in CI/CD.
Quick Install
npm install --save-dev eslint-plugin-vercel-ai-security
Flat Config
// eslint.config.js
import vercelAI from "eslint-plugin-vercel-ai-security";
export default [vercelAI.configs.recommended];
Run ESLint
npx eslint .
You'll see output like:
src/chat.ts
8:3 error CWE-77 OWASP:LLM01 | Unvalidated prompt input
Risk: Prompt injection vulnerability
Fix: Use validated prompt: sanitizePrompt(userInput)
src/agent.ts
24:5 error OWASP:LLM08 | Tool missing confirmation gate
Risk: AI agent can execute arbitrary actions
Fix: Add await requireUserConfirmation() before execution
Rule Overview
| Category | Rules | Examples |
|---|---|---|
| Prompt Injection | 4 | Unvalidated input, dynamic system prompts |
| Data Exfiltration | 3 | System prompt leaks, sensitive data in prompts |
| Agent Safety | 3 | Missing tool confirmation, unlimited steps |
| Resource Limits | 4 | Token limits, timeouts, abort signals |
| RAG Security | 2 | Content validation, embedding verification |
| Output Safety | 3 | Output filtering, validation |
Quick Wins
Before
// ❌ Prompt Injection Risk
const { text } = await generateText({
  model: openai("gpt-4"),
  prompt: userInput, // Unvalidated!
});
After
// ✅ Validated Input
const { text } = await generateText({
  model: openai("gpt-4"),
  prompt: sanitizePrompt(userInput),
  maxTokens: 1000,
  abortSignal: AbortSignal.timeout(30000), // 30-second timeout
});
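Note that `sanitizePrompt` is not part of the AI SDK; the rule only checks that user input passes through some validation wrapper before reaching the model. A minimal sketch of such a helper (the patterns and length limit here are illustrative assumptions, not a complete defense):

```typescript
// Hypothetical sanitizePrompt helper. The blocklist and limits below are
// example choices; tune them for your application.
const INJECTION_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /you are now/i,
  /system prompt/i,
];

function sanitizePrompt(input: string, maxLength = 4000): string {
  // Truncate to bound token usage.
  let prompt = input.slice(0, maxLength);
  // Strip control characters that can smuggle hidden instructions.
  prompt = prompt.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, "");
  // Reject obvious injection phrases rather than trying to rewrite them.
  for (const pattern of INJECTION_PATTERNS) {
    if (pattern.test(prompt)) {
      throw new Error("Potential prompt injection detected");
    }
  }
  return prompt;
}
```

Rejecting suspicious input outright (instead of silently rewriting it) keeps the failure visible to the caller, which is usually the safer default.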
Before
// ❌ Unlimited Agent
const { text } = await generateText({
  model: openai("gpt-4"),
  tools: dangerousTools,
});
After
// ✅ Limited Agent
const { text } = await generateText({
  model: openai("gpt-4"),
  tools: safeTools,
  maxSteps: 5,
});
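The `requireUserConfirmation()` gate suggested in the error output above is likewise something you provide, not an SDK export. One possible shape is a wrapper that guards a tool's `execute` function behind an explicit approval step (a sketch; the names are hypothetical):

```typescript
// Hypothetical confirmation gate for tool execution. `confirm` could be
// backed by a UI dialog, a CLI prompt, or an approvals queue.
type Confirm = (message: string) => Promise<boolean>;

function withConfirmation<Args, Result>(
  name: string,
  execute: (args: Args) => Promise<Result>,
  confirm: Confirm,
) {
  return async (args: Args): Promise<Result> => {
    const approved = await confirm(`Allow tool "${name}" to run?`);
    if (!approved) {
      // Fail loudly so the agent loop surfaces the denial.
      throw new Error(`Tool "${name}" was not confirmed by the user`);
    }
    return execute(args);
  };
}
```

Wrapping each sensitive tool this way keeps the confirmation logic in one place instead of scattering checks across tool bodies.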
Available Presets
// Security-focused configuration
vercelAI.configs.recommended;
// Full OWASP LLM Top 10 coverage
vercelAI.configs["owasp-llm-top-10"];
OWASP LLM Top 10 Mapping
| OWASP LLM | Rules |
|---|---|
| LLM01: Prompt Injection | require-validated-prompt, no-dynamic-system-prompt |
| LLM02: Insecure Output | require-output-filtering, no-unsafe-output-handling |
| LLM04: Model DoS | require-max-tokens, require-abort-signal |
| LLM06: Sensitive Data | no-sensitive-in-prompt, no-system-prompt-leak |
| LLM07: Plugin Design | require-tool-schema, require-tool-confirmation |
| LLM08: Excessive Agency | require-max-steps, require-tool-confirmation |
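For LLM07, `require-tool-schema` expects every tool to declare a typed parameter schema (with the AI SDK this is typically a Zod schema on the tool definition). A dependency-free sketch of the underlying idea, validating arguments against a declared schema before a tool runs (the `FieldSpec` shape is an illustrative assumption):

```typescript
// Minimal runtime schema check, standing in for a real Zod schema.
interface FieldSpec {
  type: "string" | "number";
  required?: boolean;
}

function validateArgs(
  schema: Record<string, FieldSpec>,
  args: Record<string, unknown>,
): void {
  for (const [key, spec] of Object.entries(schema)) {
    const value = args[key];
    if (value === undefined) {
      if (spec.required) throw new Error(`Missing required argument "${key}"`);
      continue; // optional field, absent is fine
    }
    if (typeof value !== spec.type) {
      throw new Error(`Argument "${key}" must be a ${spec.type}`);
    }
  }
}
```

The point of the rule is that model-generated tool arguments are untrusted input: a declared schema rejects malformed or unexpected fields before they reach your tool's logic.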
Customizing Rules
// eslint.config.js
import vercelAI from "eslint-plugin-vercel-ai-security";
export default [
  vercelAI.configs.recommended,
  {
    rules: {
      // Configure max steps
      "vercel-ai/require-max-steps": ["error", { maxSteps: 10 }],
      // Make RAG validation a warning
      "vercel-ai/require-rag-content-validation": "warn",
    },
  },
];
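Flat config also supports per-path overrides. For example (a sketch assuming agent code lives under `src/agents/`, a hypothetical layout), you can tighten agent-safety rules only where tools are defined:

```javascript
// eslint.config.js — assumes agent code lives under src/agents/
import vercelAI from "eslint-plugin-vercel-ai-security";

export default [
  vercelAI.configs.recommended,
  {
    // Stricter budget only for files that define agents and tools
    files: ["src/agents/**/*.ts"],
    rules: {
      "vercel-ai/require-max-steps": ["error", { maxSteps: 3 }],
      "vercel-ai/require-tool-confirmation": "error",
    },
  },
];
```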
Quick Reference
# Install
npm install --save-dev eslint-plugin-vercel-ai-security
# Config (eslint.config.js)
import vercelAI from 'eslint-plugin-vercel-ai-security';
export default [vercelAI.configs.recommended];
# Run
npx eslint .
Links
npm: eslint-plugin-vercel-ai-security · Full Rule List · OWASP LLM Mapping
The Interlace ESLint Ecosystem
Interlace is a high-fidelity suite of static code analyzers designed to automate security, performance, and reliability for the modern Node.js stack. With over 330 rules across 18 specialized plugins, it provides 100% coverage for OWASP Top 10, LLM Security, and Database Hardening.
Explore the full Documentation
ยฉ 2026 Ofri Peretz. All rights reserved.
Build Securely. I'm Ofri Peretz, a Security Engineering Leader and the architect of the Interlace Ecosystem. I build static analysis standards that automate security and performance for Node.js fleets at scale.