# Vercel AI SDK Adapter
Meloqui provides first-class integration with the Vercel AI SDK, allowing you to use Meloqui's multi-provider `ChatClient` in Next.js applications with the `useChat` and `useCompletion` hooks.
## Quick Start

```ts
// app/api/chat/route.ts
import { ChatClient, streamChat } from 'meloqui';
const client = new ChatClient({
provider: 'openai',
model: 'gpt-4o'
});
export async function POST(request: Request) {
return streamChat(request, client);
}
export const runtime = 'edge';
```

That's it! Your endpoint now works with the Vercel AI SDK's `useChat` hook.
## Installation
Meloqui includes the Vercel adapter with no additional dependencies:
```bash
npm install meloqui
```

## API Reference
### streamChat
The high-level helper for Next.js API routes. It handles request parsing, message extraction, and response streaming.

```ts
import { ChatClient, streamChat } from 'meloqui';
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
export async function POST(request: Request) {
return streamChat(request, client, {
temperature: 0.7,
maxTokens: 1000
});
}
```

**Parameters:**
| Parameter | Type | Description |
|---|---|---|
| `request` | `Request` | Incoming request from Next.js |
| `client` | `ChatClient` | Configured Meloqui client |
| `options` | `ChatOptions & VercelAdapterOptions` | Optional chat and adapter options |
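The single options bag carries both kinds of fields. A minimal sketch combining only options shown in this guide (`temperature` and `maxTokens` from `ChatOptions`, `onError` from `VercelAdapterOptions`):

```ts
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

export async function POST(request: Request) {
  return streamChat(request, client, {
    // ChatOptions: sampling parameters (see Quick Start above)
    temperature: 0.7,
    maxTokens: 1000,
    // VercelAdapterOptions: adapter hooks (see Error Handling below)
    onError: () => 'An error occurred. Please try again.'
  });
}
```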
### streamCompletion
For `useCompletion` hook support, use `streamCompletion`:

```ts
// app/api/completion/route.ts
import { ChatClient, streamCompletion } from 'meloqui';
const client = new ChatClient({
provider: 'openai',
model: 'gpt-4o'
});
export async function POST(request: Request) {
return streamCompletion(request, client, {
temperature: 0.9
});
}
export const runtime = 'edge';
```

**Parameters:**
| Parameter | Type | Description |
|---|---|---|
| `request` | `Request` | Incoming request from Next.js |
| `client` | `ChatClient` | Configured Meloqui client |
| `options` | `ChatOptions & VercelAdapterOptions` | Optional chat and adapter options |
### toVercelResponse
For more control, use `toVercelResponse` directly with a Meloqui stream:

```ts
import { ChatClient, toVercelResponse } from 'meloqui';
const client = new ChatClient({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });
export async function POST(request: Request) {
const { messages } = await request.json();
// Guard against requests with no user message
const lastMessage = messages.filter((m: any) => m.role === 'user').pop();
if (!lastMessage) {
return new Response('No user message found', { status: 400 });
}
const stream = client.stream(lastMessage.content, {
systemPrompt: 'You are a helpful assistant.'
});
return toVercelResponse(stream);
}
```

### toVercelStream
For full control over the `Response`, use `toVercelStream` to get a `ReadableStream`:

```ts
import { ChatClient, toVercelStream, getVercelStreamHeaders } from 'meloqui';
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
export async function POST(request: Request) {
const stream = client.stream('Hello!');
return new Response(toVercelStream(stream), {
headers: {
...getVercelStreamHeaders(),
'X-Custom-Header': 'value'
}
});
}
```

## Structured Output
Meloqui supports streaming structured objects using ai-sdk's `streamObject` function. This allows you to generate type-safe JSON objects that conform to a schema.
### When to Use Structured Output
Use structured output when you need:
- Type-safe responses: Guarantee the LLM returns data matching your schema
- Structured data extraction: Parse information from free-form text into specific fields (see the sketch below)
- Form generation: Generate objects for UI forms or configurations
- API responses: Create structured data for downstream processing
Use regular `streamChat` when you need:
- Free-form text responses
- Conversational interactions
- Creative content generation
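As a concrete sketch of the extraction case, here is an illustrative schema that pulls contact details out of free-form text (the schema and prompt are examples, not part of Meloqui's API):

```ts
import { z } from 'zod';

// Illustrative extraction schema: each description tells the LLM
// what to pull out of the raw text.
const ContactSchema = z.object({
  name: z.string().describe('Full name, if mentioned'),
  email: z.string().describe('Email address, if mentioned'),
  phone: z.string().optional().describe('Phone number, if mentioned')
});

// Pass ContactSchema as the schema to streamObject (see Basic Usage
// below) with a prompt such as: `Extract contact details from: ${rawText}`
```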
### Basic Usage
Use `toVercelObjectResponse` to stream structured objects:

```ts
// app/api/generate-person/route.ts
import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { toVercelObjectResponse } from 'meloqui';
import { z } from 'zod';
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const PersonSchema = z.object({
name: z.string().describe('Full name of the person'),
age: z.number().describe('Age in years'),
occupation: z.string().describe('Current job or profession')
});
export async function POST(request: Request) {
const { prompt } = await request.json();
const result = await streamObject({
model: openai('gpt-4o'),
schema: PersonSchema,
prompt
});
return toVercelObjectResponse(result.fullStream);
}
export const runtime = 'edge';
```

### toVercelObjectStream
For more control, use `toVercelObjectStream` directly:

```ts
import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { toVercelObjectStream, getVercelStreamHeaders } from 'meloqui';
import { z } from 'zod';
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
export async function POST(request: Request) {
const result = await streamObject({
model: openai('gpt-4o'),
schema: z.object({ title: z.string(), content: z.string() }),
prompt: 'Generate a blog post about TypeScript'
});
return new Response(toVercelObjectStream(result.fullStream), {
headers: {
...getVercelStreamHeaders(),
'X-Custom-Header': 'value'
}
});
}
```

### Schema Definition with Zod
Define schemas using Zod for type-safe validation:

```ts
import { z } from 'zod';
// Simple object
const ProductSchema = z.object({
name: z.string(),
price: z.number(),
inStock: z.boolean()
});
// Nested objects
const OrderSchema = z.object({
orderId: z.string(),
customer: z.object({
name: z.string(),
email: z.string().email()
}),
items: z.array(z.object({
productId: z.string(),
quantity: z.number().int().positive()
})),
total: z.number()
});
// With descriptions (helps the LLM understand intent)
const RecipeSchema = z.object({
title: z.string().describe('Name of the dish'),
prepTime: z.number().describe('Preparation time in minutes'),
ingredients: z.array(z.string()).describe('List of ingredients'),
steps: z.array(z.string()).describe('Step-by-step cooking instructions')
});
```

### Streaming Partial Objects
The stream emits partial objects as they're generated. Each `object` event contains the current state:

```ts
// Stream events:
// { type: 'object', object: { name: 'John' } }
// { type: 'object', object: { name: 'John', age: 30 } }
// { type: 'object', object: { name: 'John', age: 30, occupation: 'Engineer' } }
// { type: 'finish', finishReason: 'stop' }
```

### Error Handling for Structured Output
Handle errors with the `onError` callback:

```ts
return toVercelObjectResponse(result.fullStream, {
onError: (error) => {
if (error.message.includes('schema')) {
return 'Failed to generate valid data. Please try again.';
}
return 'An error occurred during generation.';
}
});
```

### Frontend Integration
On the frontend, parse the streamed events:

```tsx
'use client';
import { useState } from 'react';
interface Person {
name?: string;
age?: number;
occupation?: string;
}
export default function GeneratePerson() {
const [person, setPerson] = useState<Person>({});
const [isLoading, setIsLoading] = useState(false);
async function generate() {
setIsLoading(true);
setPerson({});
const response = await fetch('/api/generate-person', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ prompt: 'Generate a random software engineer' })
});
const reader = response.body?.getReader();
const decoder = new TextDecoder();
while (reader) {
const { done, value } = await reader.read();
if (done) break;
const text = decoder.decode(value, { stream: true }); // stream: true keeps multi-byte characters intact across chunks
const lines = text.split('\n').filter(line => line.startsWith('data: '));
for (const line of lines) {
const data = line.slice(6); // Remove 'data: ' prefix
if (data === '[DONE]') continue;
try {
const event = JSON.parse(data);
if (event.type === 'object') {
setPerson(event.object);
}
} catch {
// Ignore parse errors
}
}
}
setIsLoading(false);
}
return (
<div>
<button onClick={generate} disabled={isLoading}>
{isLoading ? 'Generating...' : 'Generate Person'}
</button>
{person.name && (
<div>
<p><strong>Name:</strong> {person.name}</p>
{person.age && <p><strong>Age:</strong> {person.age}</p>}
{person.occupation && <p><strong>Occupation:</strong> {person.occupation}</p>}
</div>
)}
</div>
);
}
```

### Type-Safe Event Handling
Meloqui exports types for consumers who want type-safe event handling when manually parsing streams:

```ts
import type { VercelObjectDelta, ObjectStreamPart } from 'meloqui';
interface Person {
name?: string;
age?: number;
occupation?: string;
}
// Type-safe handler for object events
function handleObjectEvent(event: VercelObjectDelta<Person>) {
console.log('Partial object:', event.object);
// TypeScript knows event.object has name?, age?, occupation?
}
// Parse events with full type safety
function parseEvent(data: string): ObjectStreamPart<Person> | null {
try {
return JSON.parse(data) as ObjectStreamPart<Person>;
} catch {
return null;
}
}
// Usage in your stream handler
const event = parseEvent(data);
if (event?.type === 'object') {
handleObjectEvent(event); // Type narrowed to VercelObjectDelta<Person>
}
```

**Available types:**
| Type | Description |
|---|---|
| `VercelObjectDelta<T>` | Object event carrying a partial object of type `T` |
| `ObjectStreamPart<T>` | Union of all stream event types |
| `VercelTextDelta` | Text chunk event |
| `VercelFinish` | Stream completion event |
| `VercelError` | Error event |
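Since the union is discriminated on `type`, a single handler can narrow each member. A sketch using the discriminants shown under Streaming Partial Objects (`'object'`, `'finish'`); the `'error'` discriminant for `VercelError` is an assumption here:

```ts
import type { ObjectStreamPart } from 'meloqui';

interface Person {
  name?: string;
  age?: number;
  occupation?: string;
}

// Narrow the event union on its `type` discriminant.
function handleEvent(event: ObjectStreamPart<Person>) {
  switch (event.type) {
    case 'object':
      console.log('Partial object:', event.object);
      break;
    case 'finish':
      console.log('Finished:', event.finishReason);
      break;
    case 'error': // assumed discriminant for VercelError
      console.error('Stream reported an error');
      break;
  }
}
```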
### Advanced: Using with ai-sdk's experimental_useObject
For a more integrated experience, use ai-sdk's hooks directly with your Meloqui endpoint:

```tsx
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';
const PersonSchema = z.object({
name: z.string(),
age: z.number(),
occupation: z.string()
});
export default function GeneratePerson() {
const { object, submit, isLoading } = useObject({
api: '/api/generate-person',
schema: PersonSchema
});
return (
<div>
<button onClick={() => submit('Generate a random person')} disabled={isLoading}>
Generate
</button>
{object && (
<pre>{JSON.stringify(object, null, 2)}</pre>
)}
</div>
);
}
```

Note: Check the ai-sdk documentation for the latest on `useObject` availability and API.
## Error Handling
Customize error messages sent to the client:

```ts
return streamChat(request, client, {
onError: (error) => {
if (error.message.includes('rate limit')) {
return 'Too many requests. Please try again later.';
}
return 'An error occurred. Please try again.';
}
});
```

## Tool Call Support
The adapter supports streaming tool calls in Vercel's format. When a `StreamChunk` includes tool call or tool result information, it's automatically converted to the appropriate Vercel events:
- `tool-input-available` - Emitted when a tool is called with arguments
- `tool-output-available` - Emitted when a tool returns a result

```ts
// StreamChunk with tool call
{
content: '',
role: 'assistant',
toolCall: {
toolCallId: 'call_123',
toolName: 'getWeather',
args: { city: 'San Francisco' }
}
}
// Becomes Vercel event:
// data: {"type":"tool-input-available","toolCallId":"call_123","toolName":"getWeather","input":{"city":"San Francisco"}}When tool calls are present in the stream, the finish event will have finishReason: "tool-calls" instead of "stop".
## Frontend Usage
### useChat
Use with the Vercel AI SDK's `useChat` hook:

```tsx
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong> {m.content}
</div>
))}
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
<button type="submit">Send</button>
</form>
</div>
);
}
```

### useCompletion
Use with the Vercel AI SDK's `useCompletion` hook for text completions:

```tsx
'use client';
import { useCompletion } from 'ai/react';
export default function Completion() {
const { completion, input, handleInputChange, handleSubmit, isLoading } = useCompletion({
api: '/api/completion'
});
return (
<div>
<form onSubmit={handleSubmit}>
<input
value={input}
onChange={handleInputChange}
placeholder="Enter a prompt..."
/>
<button type="submit" disabled={isLoading}>
Complete
</button>
</form>
{completion && (
<div>
<strong>Completion:</strong> {completion}
</div>
)}
</div>
);
}
```

## Multi-Provider Example
Combine with Meloqui's provider fallback:

```ts
import { ChatClient, streamChat } from 'meloqui';
const client = new ChatClient({
provider: 'openai',
model: 'gpt-4o',
fallbackProviders: [
{ provider: 'anthropic', model: 'claude-sonnet-4-20250514' }
]
});
export async function POST(request: Request) {
return streamChat(request, client);
}
```

## Edge Runtime
The adapter is fully compatible with Vercel's Edge Runtime:

```ts
export const runtime = 'edge';
```

## CORS Configuration
If your frontend is on a different origin than your API, you'll need to configure CORS headers:

```ts
import { ChatClient, streamChat } from 'meloqui';
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
export async function POST(request: Request) {
const response = await streamChat(request, client);
// Add CORS headers
response.headers.set('Access-Control-Allow-Origin', '*');
response.headers.set('Access-Control-Allow-Methods', 'POST, OPTIONS');
response.headers.set('Access-Control-Allow-Headers', 'Content-Type');
return response;
}
export async function OPTIONS() {
return new Response(null, {
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type'
}
});
}
```

For production, replace `'*'` with your specific domain.
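For example, you might read the allowed origin from an environment variable so it can differ per deployment. A minimal sketch; `ALLOWED_ORIGIN` is an assumed variable name, not something Meloqui reads for you:

```ts
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

// Assumed env var; set it per environment (e.g. https://app.example.com)
const ALLOWED_ORIGIN = process.env.ALLOWED_ORIGIN ?? 'https://app.example.com';

export async function POST(request: Request) {
  const response = await streamChat(request, client);
  response.headers.set('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
  response.headers.set('Access-Control-Allow-Methods', 'POST, OPTIONS');
  response.headers.set('Access-Control-Allow-Headers', 'Content-Type');
  return response;
}
```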
## Native Structured Output with ChatClient
Meloqui's `ChatClient` now supports native structured output streaming through the `streamObject` method. This provides a unified API for structured output across all providers.
### Using ChatClient.streamObject

```ts
// app/api/generate/route.ts
import { ChatClient, toVercelNativeObjectResponse } from 'meloqui';
import { z } from 'zod';
const client = new ChatClient({
provider: 'openai',
model: 'gpt-4o'
});
const PersonSchema = z.object({
name: z.string(),
age: z.number(),
occupation: z.string()
});
export async function POST(request: Request) {
const { prompt } = await request.json();
const stream = client.streamObject(prompt, PersonSchema);
return toVercelNativeObjectResponse(stream);
}
export const runtime = 'edge';
```

### toVercelNativeObjectStream
For more control, use `toVercelNativeObjectStream` directly:

```ts
import { ChatClient, toVercelNativeObjectStream, getVercelStreamHeaders } from 'meloqui';
import { z } from 'zod';
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
export async function POST(request: Request) {
const stream = client.streamObject(
'Generate a blog post about TypeScript',
z.object({
title: z.string(),
content: z.string(),
tags: z.array(z.string())
})
);
return new Response(toVercelNativeObjectStream(stream), {
headers: {
...getVercelStreamHeaders(),
'X-Custom-Header': 'value'
}
});
}
```

### Checking Provider Support
Before using `streamObject`, check whether the provider supports it:

```ts
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
if (client.capabilities.structuredOutput) {
// Use streamObject
const stream = client.streamObject(prompt, schema);
} else {
// Fall back to regular chat
const response = await client.chat(prompt);
}
```

### Native vs External streamObject
Meloqui provides two approaches for structured output:
| Approach | Use Case |
|---|---|
| `ChatClient.streamObject` | Unified API across providers, automatic provider selection, integrated with Meloqui's features (rate limiting, history, etc.) |
| Direct ai-sdk `streamObject` | Maximum control, direct access to ai-sdk features, provider-specific options |
Use `ChatClient.streamObject` when you want Meloqui to handle provider management. Use ai-sdk's `streamObject` directly when you need fine-grained control or provider-specific features.
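The two approaches can also be combined. A sketch that prefers the native path and falls back to direct ai-sdk when the active provider lacks structured output support, built only from calls shown earlier in this guide:

```ts
import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import {
  ChatClient,
  toVercelNativeObjectResponse,
  toVercelObjectResponse
} from 'meloqui';
import { z } from 'zod';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const ArticleSchema = z.object({
  title: z.string(),
  content: z.string()
});

export async function POST(request: Request) {
  const { prompt } = await request.json();

  if (client.capabilities.structuredOutput) {
    // Native path: Meloqui handles provider management
    return toVercelNativeObjectResponse(client.streamObject(prompt, ArticleSchema));
  }

  // Fallback: drive ai-sdk directly and adapt its stream
  const result = await streamObject({
    model: openai('gpt-4o'),
    schema: ArticleSchema,
    prompt
  });
  return toVercelObjectResponse(result.fullStream);
}
```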
