
Vercel AI SDK 5.0

v5.0.48 · Stable

TypeScript toolkit for building AI applications with streaming, tool calling, and agentic control. 2M+ weekly downloads; 100+ models behind a single unified API.

Stack Compatibility (Oct 2025)
Verified versions tested together in production

Dependency         | Version         | Status
Next.js            | 15.5.4          | Compatible
React              | 19.2.0          | Compatible
TypeScript         | 5.9.2           | Compatible
Node.js            | 24.8.0          | Compatible
Claude Sonnet 4.5  | 77.2% SWE-bench | Best Coding Model

Getting Started

Install Vercel AI SDK 5.0.48 and start building AI applications with streaming and tool calling.

terminal
# Install AI SDK with OpenAI provider
npm install ai @ai-sdk/openai

# Or with Anthropic (Claude)
npm install ai @ai-sdk/anthropic

# Or with Google (Gemini)
npm install ai @ai-sdk/google

# For the React useChat hook used below
npm install @ai-sdk/react
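
After installing, a minimal first call looks like this (a sketch assuming OPENAI_API_KEY is set in your environment):

quickstart.ts
// Minimal single-shot text generation.
// Assumes OPENAI_API_KEY is set in the environment.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Explain streaming in one sentence.',
});

console.log(text);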

Integration Patterns

Streaming Chat with Next.js 15

A Next.js Route Handler streams tokens to a React 19 client component in real time.

app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4'),
    // UI messages from useChat must be converted to model messages
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
app/components/chat-interface.tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export function ChatInterface() {
  // AI SDK 5: input state lives in the component, not the hook;
  // useChat posts to /api/chat by default
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}:</strong>{' '}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null,
          )}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
        />
        <button type="submit" disabled={status !== 'ready'}>
          Send
        </button>
      </form>
    </div>
  );
}

Tool Calling with Validation

The model can call TypeScript functions, with inputs validated automatically against Zod schemas.

app/actions.ts
import { openai } from '@ai-sdk/openai';
import { generateText, stepCountIs, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'What is the weather in San Francisco?',
  // Allow a second step so the model can answer using the tool result
  stopWhen: stepCountIs(2),
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a location',
      inputSchema: z.object({
        location: z.string().describe('City name'),
        unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
      }),
      execute: async ({ location, unit }) => {
        // Call your weather API (fetchWeather is an app-specific helper)
        const weather = await fetchWeather(location, unit);
        return {
          location,
          temperature: weather.temp,
          conditions: weather.conditions,
        };
      },
    }),
  },
});

console.log(result.text); // "The weather in San Francisco is 18°C and sunny"

Agentic Control with stopWhen

Precise control over multi-step AI workflows with dynamic stopping conditions.

app/agents/data-analyst.ts
import { Experimental_Agent as Agent, hasToolCall, stepCountIs, tool } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const agent = new Agent({
  model: anthropic('claude-sonnet-4-5'),
  system: 'You are a data analysis expert',
  tools: {
    analyzeData: tool({ /* ... */ }),
    createVisualization: tool({ /* ... */ }),
    generateReport: tool({ /* ... */ }),
  },

  // Stop when any condition is met
  stopWhen: [
    stepCountIs(5),
    hasToolCall('generateReport'),
  ],

  // Dynamic step control: start on a fast model, escalate to a
  // stronger one once earlier steps have produced context to reason over
  prepareStep: ({ stepNumber }) => ({
    model: stepNumber > 1
      ? anthropic('claude-sonnet-4-5')
      : anthropic('claude-3-5-haiku-latest'),
  }),
});

const result = await agent.generate({ prompt: 'Analyze Q3 sales data' });

What Breaks in Production

Real issues we've encountered with Vercel AI SDK and how to fix them.

Streaming breaks with Edge runtime middleware

Symptom: Streamed responses close immediately in the Edge runtime

Cause: Edge middleware buffering conflicts with streaming responses

Fix: Use the Node.js runtime for streaming routes, or skip middleware for them

// ❌ Wrong - Edge runtime breaks streaming
export const runtime = 'edge'; // Don't use with streaming!

// ✅ Right - Use Node.js runtime for streaming
export const runtime = 'nodejs';

// OR skip middleware for streaming routes
// middleware.ts
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  if (request.nextUrl.pathname.startsWith('/api/chat')) {
    return; // Skip middleware for streaming
  }
  // ... other middleware logic
}
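
If the middleware body should stay untouched, the same exclusion can be expressed declaratively with Next.js's matcher config; a minimal sketch, assuming /api/chat is the streaming route as above:

// middleware.ts - exclude streaming routes via the matcher config
export const config = {
  // Run middleware on every path except /api/chat
  matcher: ['/((?!api/chat).*)'],
};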
Tool calling rate limits hit unexpectedly

Symptom: Rate limit errors even with low traffic

Cause: Each step of a multi-step agent is a separate model API request, so tool-calling loops multiply request volume quickly

Fix: Use stopWhen to limit steps and implement exponential backoff

// ✅ Limit agent steps to prevent rate limit hits
import { Experimental_Agent as Agent, hasToolCall, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

const agent = new Agent({
  model: openai('gpt-4'),
  tools: myTools,

  // Prevent runaway tool calling: max 5 steps, stop after the final
  // action runs, or stop once the token budget is spent
  stopWhen: [
    stepCountIs(5),
    hasToolCall('finalAction'),
    ({ steps }) =>
      steps.reduce((total, s) => total + (s.usage.totalTokens ?? 0), 0) > 10_000,
  ],

  // The SDK already retries rate-limited calls with exponential
  // backoff; raise the cap from the default of 2 retries
  maxRetries: 5,
});
Token counting mismatch between providers

Symptom: AI SDK token counts don't match provider billing

Cause: Providers tokenize differently (OpenAI models use tiktoken encodings; Anthropic does not publish the Claude 3+ tokenizer)

Fix: Use provider-specific token counting and leave a safety buffer

// ✅ Use provider-specific token counting

// For OpenAI models, count locally with a tiktoken-compatible tokenizer
import { encode } from 'gpt-tokenizer';
const openaiTokens = encode(text).length;

// For Claude 3+, count via Anthropic's count-tokens endpoint
// (the tokenizer itself is not published)
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();
const { input_tokens: claudeTokens } = await anthropic.messages.countTokens({
  model: 'claude-sonnet-4-5',
  messages: [{ role: 'user', content: text }],
});

// Always leave ~10% headroom against the provider limit
const maxTokens = Math.floor(providerLimit * 0.9);
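
For reconciling against billing, the usage the SDK reports on each result comes from the provider's own response, so it matches what you are charged; a minimal sketch:

// Provider-reported usage is attached to every generateText/streamText result
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Summarize our Q3 results.',
});

// These counts come straight from the provider's response
console.log(result.usage.inputTokens, result.usage.outputTokens);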