MCP transforms AI agents from conversational interfaces into action-taking systems. Give agents capabilities through standardized tools, orchestrate complex workflows, and build production-ready integrations.
MCP tools represent infrastructure inversion: humans and systems become function calls accessible to AI agents. The agent decides when to invoke capabilities. We provide the infrastructure.
Core insight: Humans as infrastructure, agents as decision-makers. MCP tools are function calls that wrap human capabilities, databases, APIs, and systems. Read the full framework in Philosophy: The Industrialization of Intelligence.
Model Context Protocol (MCP) is an open standard by Anthropic for connecting AI models to external tools, data sources, and capabilities. Instead of hardcoding integrations into every AI app, MCP provides a universal protocol for tool exposure.
Core concepts: MCP servers expose tools, resources, and prompts; MCP clients (Claude Code, the Vercel AI SDK, and other compatible apps) connect to those servers and make their capabilities available to the model.
Why MCP matters: Before MCP, every AI app built custom integrations. With MCP, build the integration once, use it across all MCP-compatible applications. Write an MCP server for your database? Now Claude Code, Vercel AI SDK, and any MCP client can use it.
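To make that concrete, here is a minimal sketch of an MCP server built with the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool schema, and weather endpoint are illustrative, and exact SDK APIs may differ slightly between versions.
// Minimal MCP server: exposes one weather tool to any MCP client (illustrative)
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'weather', version: '1.0.0' });

// Register a tool; Claude Code, the Vercel AI SDK, or any other MCP client can call it
server.tool(
  'get-weather',
  { location: z.string().describe('City name or zip code') },
  async ({ location }) => {
    const res = await fetch(`https://api.weather.com/current?location=${location}`);
    const data = await res.json();
    return { content: [{ type: 'text', text: JSON.stringify(data) }] };
  }
);

// Serve over stdio so local MCP clients can connect
await server.connect(new StdioServerTransport());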
Give agents access to specific tools. The agent decides when to call them based on the user's request.
// Vercel AI SDK - Basic tool usage
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
// Define a simple tool
const weatherTool = tool({
description: 'Get current weather for a location',
parameters: z.object({
location: z.string().describe('City name or zip code'),
units: z.enum(['celsius', 'fahrenheit']).default('fahrenheit'),
}),
execute: async ({ location, units }) => {
// Call weather API
const response = await fetch(
`https://api.weather.com/current?location=${location}&units=${units}`
);
const data = await response.json();
return {
temperature: data.temp,
conditions: data.conditions,
humidity: data.humidity,
};
},
});
// Give agent access to tool
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'What\'s the weather like in San Francisco?',
tools: {
getWeather: weatherTool,
},
maxSteps: 5, // Allow multi-step tool usage
});
console.log(result.text);
// Agent automatically decides to call getWeather tool
// Returns natural language response with weather data

Key insight: You define capabilities (tools); the agent decides when to use them. No explicit control flow. This is the shift from imperative to declarative agent programming.
Agents can chain multiple tool calls to complete complex tasks. Define the tools; the agent figures out the execution order.
// Multi-tool agent for customer support
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
// Tool 1: Search customer database
const searchCustomer = tool({
description: 'Search for customer by email or ID',
parameters: z.object({
query: z.string().describe('Email or customer ID'),
}),
execute: async ({ query }) => {
const customer = await db.customers.findFirst({
where: { OR: [{ email: query }, { id: query }] },
});
return customer;
},
});
// Tool 2: Get order history
const getOrderHistory = tool({
description: 'Get order history for a customer',
parameters: z.object({
customerId: z.string().describe('Customer ID'),
}),
execute: async ({ customerId }) => {
const orders = await db.orders.findMany({
where: { customerId },
orderBy: { createdAt: 'desc' },
take: 10,
});
return orders;
},
});
// Tool 3: Issue refund
const issueRefund = tool({
description: 'Issue a refund for an order',
parameters: z.object({
orderId: z.string().describe('Order ID to refund'),
amount: z.number().describe('Refund amount in dollars'),
reason: z.string().describe('Reason for refund'),
}),
execute: async ({ orderId, amount, reason }) => {
const refund = await stripe.refunds.create({
charge: orderId,
amount: amount * 100, // Stripe uses cents
reason,
});
// Log to database
await db.refunds.create({
data: { orderId, amount, reason, stripeRefundId: refund.id },
});
return { success: true, refundId: refund.id };
},
});
// Agent orchestrates tool usage automatically
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Customer john@example.com wants a refund for their most recent order. Issue $29.99 refund for damaged item.',
tools: {
searchCustomer,
getOrderHistory,
issueRefund,
},
maxSteps: 10,
});
// Agent workflow (automatic):
// 1. Calls searchCustomer({ query: 'john@example.com' })
// 2. Calls getOrderHistory({ customerId: '<customer-id>' })
// 3. Calls issueRefund({ orderId: '<order-id>', amount: 29.99, reason: 'damaged item' })
// 4. Returns natural language confirmation

Production consideration: Set `maxSteps` to prevent infinite tool loops. An agent can get stuck calling tools repeatedly. Typical range: 5-15 steps depending on workflow complexity.
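If you need visibility into what the agent actually did across those steps, the result object exposes them. A quick sketch; property names follow recent AI SDK versions, so verify against the version you use:
// Inspect each step's tool calls for debugging and monitoring
for (const step of result.steps) {
  for (const call of step.toolCalls) {
    console.log(`step tool: ${call.toolName}`, call.args);
  }
}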
Connect to existing MCP servers to expose entire systems as tools. No need to build every integration from scratch.
// Example: Using Supabase MCP server
// Configuration in mcp.config.json:
{
"mcpServers": {
"supabase": {
"url": "https://mcp.supabase.com/mcp",
"params": {
"features": "database,docs,debugging,development,functions,storage",
"readonly": false
}
}
}
}
// Vercel AI SDK automatically exposes MCP tools
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Agent now has access to ALL Supabase MCP tools:
// - Query database tables
// - Read Supabase documentation
// - Manage storage buckets
// - Deploy Edge Functions
// - etc.
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Show me all users who signed up in the last 7 days',
// Tools from MCP servers automatically included
maxSteps: 5,
});
// Agent will:
// 1. Use Supabase MCP query tool
// 2. Construct appropriate SQL query
// 3. Return formatted results

| MCP Server | Capabilities | Use Cases |
|---|---|---|
| Supabase | Database, auth, storage, functions | Full backend operations |
| GitHub | Repos, issues, PRs, code search | Code management automation |
| Filesystem | Read, write, search files | File operations, code generation |
| Playwright | Browser automation, testing | E2E testing, web scraping |
| shadcn/ui | Component search, examples | UI development assistance |
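Depending on your client, you may need to wire an MCP server into the AI SDK explicitly rather than rely on a config file. A minimal sketch using the SDK's experimental MCP client follows; the API is marked experimental and may change, and the transport type depends on the server:
// Connect to a remote MCP server and hand its tools to the agent
import { experimental_createMCPClient, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const mcpClient = await experimental_createMCPClient({
  transport: { type: 'sse', url: 'https://mcp.supabase.com/mcp' },
});

// Fetch the server's tool definitions and pass them to generateText
const mcpTools = await mcpClient.tools();

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Show me all users who signed up in the last 7 days',
  tools: mcpTools,
  maxSteps: 5,
});

// Close the connection when the run is finished
await mcpClient.close();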
Generate AI SDK tools automatically from existing systems: OpenAPI specs, databases, internal APIs.
// Automatic tool generation from OpenAPI/Swagger
import { generateToolsFromOpenAPI } from '@ai-sdk/tools';
// Generate tools from OpenAPI spec
const stripeTools = await generateToolsFromOpenAPI({
spec: 'https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.yaml',
operations: [
'create_customer',
'list_customers',
'create_payment_intent',
'list_charges',
],
auth: {
type: 'bearer',
token: process.env.STRIPE_SECRET_KEY,
},
});
// Now use auto-generated tools
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Create a customer for john@example.com and charge them $99',
tools: stripeTools,
maxSteps: 10,
});
// Agent automatically:
// 1. Calls create_customer tool with email
// 2. Calls create_payment_intent with customer ID and amount
// 3. Returns confirmation

Production pattern: Auto-generate tools for internal APIs using OpenAPI specs. One spec = dozens of AI-accessible tools. No manual tool writing required.
Tools fail. APIs time out. Databases go down. Production tool orchestration requires comprehensive error handling.
// Robust tool error handling
import { tool } from 'ai';
import { z } from 'zod';
const robustTool = tool({
description: 'Production-ready tool with error handling',
parameters: z.object({
query: z.string(),
}),
execute: async ({ query }) => {
try {
// Attempt operation
const result = await externalAPI.query(query);
// Validate result
if (!result || !result.data) {
return {
success: false,
error: 'Invalid response from API',
message: 'The API returned an unexpected format. Please try again.',
};
}
return {
success: true,
data: result.data,
};
} catch (error: any) {
// Log error for monitoring
console.error('Tool execution failed:', error);
// Return structured error to agent
if (error.code === 'TIMEOUT') {
return {
success: false,
error: 'timeout',
message: 'The operation timed out. The service may be experiencing high load.',
retryable: true,
};
}
if (error.code === 'RATE_LIMIT') {
return {
success: false,
error: 'rate_limit',
message: 'Rate limit exceeded. Please wait before retrying.',
retryable: true,
retryAfter: error.retryAfter,
};
}
// Generic error
return {
success: false,
error: 'unknown',
message: `Operation failed: ${error.message}`,
retryable: false,
};
}
},
});
// Agent receives structured error responses and can:
// - Retry on retryable errors
// - Use alternative tools
// - Inform user about failures gracefully

Unhandled errors crash the entire agent workflow. Always catch, structure, and return error information. Include a success flag, an error type, a user-friendly message, and whether the error is retryable. Let the agent decide how to handle failures.
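One way to keep that contract consistent across tools is a shared result type; the names below are illustrative:
// Shared result shape every tool returns (illustrative)
type ToolResult<T> =
  | { success: true; data: T }
  | {
      success: false;
      error: string;       // machine-readable type, e.g. 'timeout' or 'rate_limit'
      message: string;     // user-friendly explanation the agent can relay
      retryable: boolean;
      retryAfter?: number; // seconds to wait, for rate-limit errors
    };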
Not all tools should be available to all agents. Implement permission boundaries.
// Permission-aware tool selection
// generateText expects a record of named tools, so build one per role
function getToolsForUser(userId: string, userRole: string) {
  const baseTools = { search: searchTool, read: readTool };

  if (userRole === 'admin') {
    return { ...baseTools, delete: deleteTool, refund: refundTool, admin: adminTool };
  }
  if (userRole === 'support') {
    return { ...baseTools, refund: refundTool };
  }
  return baseTools; // Default: read-only tools
}

// Use role-based tools
const tools = getToolsForUser(user.id, user.role);
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: userRequest,
  tools,
});

Always validate tool parameters. Agents can make mistakes or be manipulated.
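Zod validates shape and types, but business rules still need explicit checks inside execute. A hedged sketch, with illustrative limits and field names, assuming the tool/z imports and db/stripe clients from the earlier examples:
// Refund tool with business-rule validation (amount cap, ownership check)
const issueRefundSafe = tool({
  description: 'Issue a refund for an order. Max $500; the order must belong to the customer.',
  parameters: z.object({
    orderId: z.string(),
    customerId: z.string(),
    amount: z.number().positive().max(500), // hard cap enforced by the schema
    reason: z.string().min(5),
  }),
  execute: async ({ orderId, customerId, amount, reason }) => {
    // Verify the order actually belongs to this customer before touching money
    const order = await db.orders.findFirst({ where: { id: orderId, customerId } });
    if (!order) {
      return { success: false, error: 'not_found', message: 'Order does not belong to this customer.' };
    }
    if (amount > order.total) {
      return { success: false, error: 'invalid_amount', message: 'Refund exceeds the order total.' };
    }
    const refund = await stripe.refunds.create({
      charge: order.chargeId, // illustrative field name
      amount: Math.round(amount * 100),
      reason: 'requested_by_customer',
      metadata: { note: reason },
    });
    return { success: true, refundId: refund.id };
  },
});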
Log all tool executions for compliance, debugging, and security monitoring.
// Audit logging wrapper
function auditedTool<T>(toolDefinition: any) {
  const originalExecute = toolDefinition.execute;

  return tool({
    ...toolDefinition,
    execute: async (params: T) => {
      const startTime = Date.now();

      // Log the tool call and keep the record ID for later updates
      const auditEntry = await db.auditLog.create({
        data: {
          toolName: toolDefinition.description,
          parameters: params,
          userId: context.userId,
          timestamp: new Date(),
        },
      });

      try {
        const result = await originalExecute(params);

        // Log success
        await db.auditLog.update({
          where: { id: auditEntry.id },
          data: {
            success: true,
            duration: Date.now() - startTime,
            result,
          },
        });

        return result;
      } catch (error: any) {
        // Log failure
        await db.auditLog.update({
          where: { id: auditEntry.id },
          data: {
            success: false,
            error: error.message,
            duration: Date.now() - startTime,
          },
        });

        throw error;
      }
    },
  });
}

The agent decides when to use tools based on descriptions. Be explicit about what each tool does and when to use it.
Strong typing prevents agent errors and provides clear parameter documentation. Zod schemas are your API contract with the agent.
Prevent infinite loops. Simple tasks: 5 steps. Complex workflows: 10-15 steps. Never unlimited.
Consistent response format helps agents chain tools effectively. Include success flags, data, and error information.
Agents may retry failed operations. Design tools to handle duplicate calls gracefully.
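Stripe, for example, supports idempotency keys natively; deriving the key from the tool parameters means a retried call performs the side effect at most once. A sketch, with an illustrative key derivation:
// Duplicate agent calls with the same parameters collapse onto one Stripe request
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

async function refundOnce(orderId: string, amountDollars: number) {
  return stripe.refunds.create(
    { charge: orderId, amount: Math.round(amountDollars * 100) },
    { idempotencyKey: `refund:${orderId}:${amountDollars}` }
  );
}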
Don't build custom integrations for common services. Use existing MCP servers (Supabase, GitHub, etc.) for instant tool access.
"Get data" doesn't tell the agent when to use this tool. Be specific: "Get customer order history by customer ID."
Unhandled errors crash the entire workflow. Always catch, structure, and return errors.
Delete, refund, and admin tools need explicit confirmation, role checks, and audit logging.
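With the AI SDK, one way to force confirmation is to define the dangerous tool without an execute function; the SDK then returns the tool call instead of running it, so your application can require human approval first. A sketch, assuming the behavior of recent AI SDK versions:
// Destructive tool with no execute: the call surfaces for review instead of running
const deleteAccount = tool({
  description: 'Permanently delete a customer account. Requires human approval.',
  parameters: z.object({ customerId: z.string() }),
  // no execute on purpose
});

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: userRequest,
  tools: { deleteAccount },
});

for (const call of result.toolCalls) {
  // Show call.args to an operator; only perform the deletion after explicit approval
  console.log('pending approval:', call.toolName, call.args);
}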
Agent loops can burn through API quotas in seconds. Implement per-user, per-session rate limits.
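A simple per-session budget can live in a wrapper around execute; a rough in-memory sketch (swap in Redis or a similar store for production):
// Naive in-memory tool-call budget per session (illustrative)
const callCounts = new Map<string, number>();
const MAX_TOOL_CALLS_PER_SESSION = 20;

function withRateLimit<P, R>(sessionId: string, execute: (params: P) => Promise<R>) {
  return async (params: P) => {
    const count = (callCounts.get(sessionId) ?? 0) + 1;
    callCounts.set(sessionId, count);
    if (count > MAX_TOOL_CALLS_PER_SESSION) {
      // Structured error the agent can surface instead of looping further
      return { success: false, error: 'rate_limit', message: 'Tool budget for this session is exhausted.' };
    }
    return execute(params);
  };
}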
If an MCP server exists for your service (Supabase, GitHub, etc.), use it. Don't rebuild what's standardized.
Orchestrate multiple agents, each with their own tool access. Hierarchical and parallel coordination strategies.
Manage state across tool calls. Tool results often update agent state or trigger state-dependent workflows.
Implementation guide for Vercel AI SDK tools. Practical examples and production patterns.
Strategic framework: Infrastructure inversion, humans as function calls, and the shift from imperative to declarative programming.