Overview
Use aiGenerateText() and aiGenerateObject() from @/lib/ai for all AI operations. The utilities handle logging, error handling, and model selection automatically.
What it is
Wrapper utilities around Vercel AI SDK that provide consistent patterns for text generation, structured output, and streaming.
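As a rough illustration of the wrapper pattern (a minimal sketch, not the actual @/lib/ai implementation; `withAiLogging` and `GenerateFn` are hypothetical names), the idea is a higher-order function that times the underlying SDK call, logs the outcome, and rethrows errors so callers can still handle them:

```typescript
// Sketch of the wrapper pattern: time the call, log the outcome,
// and rethrow errors. Names (withAiLogging, GenerateFn) are
// illustrative only, not the real @/lib/ai internals.
type GenerateFn<TIn, TOut> = (input: TIn) => Promise<TOut>;

function withAiLogging<TIn, TOut>(
  name: string,
  fn: GenerateFn<TIn, TOut>,
  log: (msg: string) => void = console.log,
): GenerateFn<TIn, TOut> {
  return async (input: TIn): Promise<TOut> => {
    const start = Date.now();
    try {
      const result = await fn(input);
      log(`[ai] ${name} ok in ${Date.now() - start}ms`);
      return result;
    } catch (err) {
      log(`[ai] ${name} failed in ${Date.now() - start}ms: ${err}`);
      throw err; // callers still see the original error
    }
  };
}

// Usage with a fake generator standing in for the real SDK call:
const fakeGenerate = async ({ prompt }: { prompt: string }) => ({
  text: `echo: ${prompt}`,
});
const generateWithLogs = withAiLogging('generateText', fakeGenerate);
```

The same shape works for any of the generate/stream functions, which is why a single set of utilities can give every call site consistent logging.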
Why we use it
Centralized AI access with automatic logging, type-safe structured output via Zod, and easy model switching through AI Gateway.
When to use
Content generation, data extraction, document parsing, chat interfaces, or any task requiring LLM capabilities.
Key Features
- aiGenerateText() for text generation with logging
- aiGenerateObject() for structured output with Zod
- aiStreamText() for real-time chat interfaces
- parseDocumentWithAI() for PDF/DOCX analysis
Quick Start
Basic AI Operations
Text generation and structured output examples.
// Basic text generation
import { aiGenerateText } from '@/lib/ai';

const result = await aiGenerateText({
  prompt: 'Summarize this feedback in 2 sentences...',
});

console.log(result.text);
// Structured output with Zod
import { aiGenerateObject } from '@/lib/ai';
import { z } from 'zod';
const InsightSchema = z.object({
  category: z.enum(['bug', 'feature', 'improvement']),
  priority: z.enum(['low', 'medium', 'high']),
  summary: z.string(),
});

const result = await aiGenerateObject({
  prompt: 'Extract insight from this feedback...',
  schema: InsightSchema,
});

console.log(result.object); // Typed!

Patterns
Model Selection
Available models and when to use each.
// Available models via AI Gateway
import { AI_MODELS } from '@/lib/ai';
// DEFAULT / FAST: Quick tasks, chat responses
AI_MODELS.DEFAULT // gpt-5.1-instant (recommended)
AI_MODELS.FAST // gpt-5-nano (cheapest)
// POWERFUL: Complex reasoning, analysis
AI_MODELS.POWERFUL // gpt-5-pro
// REASONING: Chain-of-thought, problem solving
AI_MODELS.REASONING // gpt-5.1-thinking
// CODE: Code generation, review
AI_MODELS.CODE // gpt-5.1-codex
AI_MODELS.CODE_MAX // gpt-5.1-codex-max (large context)

Structured Output
Type-safe data extraction with Zod schemas.
// Structured output with schema validation
import { aiGenerateObject } from '@/lib/ai';
import { z } from 'zod';
const AnalysisSchema = z.object({
  sentiment: z.enum(['positive', 'neutral', 'negative']),
  topics: z.array(z.string()),
  actionItems: z.array(z.object({
    task: z.string(),
    priority: z.enum(['low', 'medium', 'high']),
  })),
  confidence: z.number().min(0).max(1),
});

const result = await aiGenerateObject({
  prompt: `Analyze this customer feedback: ${feedback}`,
  schema: AnalysisSchema,
  schemaName: 'FeedbackAnalysis',
  schemaDescription: 'Structured analysis of customer feedback',
  model: 'gpt-5.1-instant', // Optional, defaults to instant
});

// result.object is fully typed as z.infer<typeof AnalysisSchema>
const { sentiment, topics, actionItems } = result.object;

Tip: Always use aiGenerateObject when you need structured data. The Zod schema validates the output automatically.
Streaming Responses
Real-time text generation for chat UIs.
// Streaming for chat interfaces
import { aiStreamText } from '@/lib/ai';
export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = await aiStreamText({
    messages,
    system: 'You are a helpful assistant.',
  });

  // Return as streaming response
  return result.toDataStreamResponse();
}

// Or consume the stream manually
const result = await aiStreamText({ prompt: '...' });

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

Watch Out
Hardcoding model IDs instead of using constants
Don't
// Hardcoding model IDs
const result = await generateText({
  model: openai('gpt-4'),
  prompt: '...',
});

Do
// Use AI_MODELS constants
import { aiGenerateText, AI_MODELS } from '@/lib/ai';
const result = await aiGenerateText({
  prompt: '...',
  model: AI_MODELS.POWERFUL, // For complex tasks
});

- Missing error handling for AI failures
- Using text generation when structured output is needed
- Using expensive models for simple tasks
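For the error-handling gap above, retrying with exponential backoff is one common pattern. This is a hedged sketch: `withRetry` is a hypothetical helper, not part of @/lib/ai, and it assumes the utilities throw on provider failures:

```typescript
// Sketch: retry a flaky AI call with exponential backoff.
// withRetry is a hypothetical helper, not part of @/lib/ai.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 250ms, 500ms, 1000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  // All attempts failed; surface the last error to the caller.
  throw lastError;
}

// Usage (assuming aiGenerateText throws on provider errors):
// const result = await withRetry(() =>
//   aiGenerateText({ prompt: 'Summarize this feedback...' }),
// );
```

Backoff helps with transient rate limits and timeouts; permanent failures (bad schema, invalid prompt) should still surface immediately to the caller.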