Vercel AI SDK Adapter

Meloqui provides first-class integration with the Vercel AI SDK, allowing you to use Meloqui's multi-provider ChatClient in Next.js applications with useChat and useCompletion hooks.

Quick Start

typescript
// app/api/chat/route.ts
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4o'
});

export async function POST(request: Request) {
  return streamChat(request, client);
}

export const runtime = 'edge';

That's it! Your endpoint now works with Vercel AI SDK's useChat hook.

Installation

Meloqui includes the Vercel adapter with no additional dependencies:

bash
npm install meloqui

API Reference

streamChat

The high-level helper for Next.js API routes. Handles request parsing, message extraction, and response streaming.

typescript
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

export async function POST(request: Request) {
  return streamChat(request, client, {
    temperature: 0.7,
    maxTokens: 1000
  });
}

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `request` | `Request` | Incoming request from Next.js |
| `client` | `ChatClient` | Configured Meloqui client |
| `options` | `ChatOptions & VercelAdapterOptions` | Optional chat and adapter options |
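
Conceptually, the "message extraction" step means pulling the most recent user message out of the `messages` array that `useChat` posts. The helper below is a hypothetical sketch of that step, not Meloqui's actual implementation:

```typescript
// Shape of messages sent by useChat (simplified)
interface UiMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Hypothetical sketch: scan backwards for the latest user message
function extractLastUserMessage(messages: UiMessage[]): string | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') return messages[i].content;
  }
  return undefined;
}
```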

streamCompletion

For useCompletion hook support, use streamCompletion:

typescript
// app/api/completion/route.ts
import { ChatClient, streamCompletion } from 'meloqui';

const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4o'
});

export async function POST(request: Request) {
  return streamCompletion(request, client, {
    temperature: 0.9
  });
}

export const runtime = 'edge';

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `request` | `Request` | Incoming request from Next.js |
| `client` | `ChatClient` | Configured Meloqui client |
| `options` | `ChatOptions & VercelAdapterOptions` | Optional chat and adapter options |

toVercelResponse

For more control, use toVercelResponse directly with a Meloqui stream:

typescript
import { ChatClient, toVercelResponse } from 'meloqui';

const client = new ChatClient({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });

export async function POST(request: Request) {
  const { messages } = await request.json();
  const lastMessage = messages.filter((m: any) => m.role === 'user').pop();

  if (!lastMessage) {
    return new Response('No user message provided', { status: 400 });
  }

  const stream = client.stream(lastMessage.content, {
    systemPrompt: 'You are a helpful assistant.'
  });

  return toVercelResponse(stream);
}

toVercelStream

For full control over the Response, use toVercelStream to get a ReadableStream:

typescript
import { ChatClient, toVercelStream, getVercelStreamHeaders } from 'meloqui';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

export async function POST(request: Request) {
  const stream = client.stream('Hello!');

  return new Response(toVercelStream(stream), {
    headers: {
      ...getVercelStreamHeaders(),
      'X-Custom-Header': 'value'
    }
  });
}

Structured Output

Meloqui supports streaming structured objects using ai-sdk's streamObject function. This allows you to generate type-safe JSON objects that conform to a schema.

When to Use Structured Output

Use structured output when you need:

  • Type-safe responses: Guarantee the LLM returns data matching your schema
  • Structured data extraction: Parse information into specific fields
  • Form generation: Generate objects for UI forms or configurations
  • API responses: Create structured data for downstream processing

Use regular streamChat when you need:

  • Free-form text responses
  • Conversational interactions
  • Creative content generation

Basic Usage

Use toVercelObjectResponse to stream structured objects:

typescript
// app/api/generate-person/route.ts
import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { toVercelObjectResponse } from 'meloqui';
import { z } from 'zod';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const PersonSchema = z.object({
  name: z.string().describe('Full name of the person'),
  age: z.number().describe('Age in years'),
  occupation: z.string().describe('Current job or profession')
});

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const result = await streamObject({
    model: openai('gpt-4o'),
    schema: PersonSchema,
    prompt
  });

  return toVercelObjectResponse(result.fullStream);
}

export const runtime = 'edge';

toVercelObjectStream

For more control, use toVercelObjectStream directly:

typescript
import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { toVercelObjectStream, getVercelStreamHeaders } from 'meloqui';
import { z } from 'zod';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(request: Request) {
  const result = await streamObject({
    model: openai('gpt-4o'),
    schema: z.object({ title: z.string(), content: z.string() }),
    prompt: 'Generate a blog post about TypeScript'
  });

  return new Response(toVercelObjectStream(result.fullStream), {
    headers: {
      ...getVercelStreamHeaders(),
      'X-Custom-Header': 'value'
    }
  });
}

Schema Definition with Zod

Define schemas using Zod for type-safe validation:

typescript
import { z } from 'zod';

// Simple object
const ProductSchema = z.object({
  name: z.string(),
  price: z.number(),
  inStock: z.boolean()
});

// Nested objects
const OrderSchema = z.object({
  orderId: z.string(),
  customer: z.object({
    name: z.string(),
    email: z.string().email()
  }),
  items: z.array(z.object({
    productId: z.string(),
    quantity: z.number().int().positive()
  })),
  total: z.number()
});

// With descriptions (helps the LLM understand intent)
const RecipeSchema = z.object({
  title: z.string().describe('Name of the dish'),
  prepTime: z.number().describe('Preparation time in minutes'),
  ingredients: z.array(z.string()).describe('List of ingredients'),
  steps: z.array(z.string()).describe('Step-by-step cooking instructions')
});

Streaming Partial Objects

The stream emits partial objects as they're generated. Each object event contains the current state:

typescript
// Stream events:
// { type: 'object', object: { name: 'John' } }
// { type: 'object', object: { name: 'John', age: 30 } }
// { type: 'object', object: { name: 'John', age: 30, occupation: 'Engineer' } }
// { type: 'finish', finishReason: 'stop' }
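
Because each `object` event carries the full current state rather than a delta, a consumer only needs to keep the latest one. The sketch below illustrates that with assumed local type names (they mirror the event shapes above, but are not Meloqui exports):

```typescript
// Assumed event shapes, matching the stream events shown above
type ObjectEvent<T> =
  | { type: 'object'; object: Partial<T> }
  | { type: 'finish'; finishReason: string };

// Each 'object' event replaces the previous partial state,
// so the last one seen is the most complete
function latestObject<T>(events: ObjectEvent<T>[]): Partial<T> {
  let current: Partial<T> = {};
  for (const event of events) {
    if (event.type === 'object') current = event.object;
  }
  return current;
}
```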

Error Handling for Structured Output

Handle errors with the onError callback:

typescript
return toVercelObjectResponse(result.fullStream, {
  onError: (error) => {
    if (error.message.includes('schema')) {
      return 'Failed to generate valid data. Please try again.';
    }
    return 'An error occurred during generation.';
  }
});

Frontend Integration

On the frontend, parse the streamed events:

tsx
'use client';
import { useState } from 'react';

interface Person {
  name?: string;
  age?: number;
  occupation?: string;
}

export default function GeneratePerson() {
  const [person, setPerson] = useState<Person>({});
  const [isLoading, setIsLoading] = useState(false);

  async function generate() {
    setIsLoading(true);
    setPerson({});

    const response = await fetch('/api/generate-person', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: 'Generate a random software engineer' })
    });

    const reader = response.body?.getReader();
    if (!reader) {
      setIsLoading(false);
      return;
    }

    const decoder = new TextDecoder();
    let buffer = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // Buffer decoded text so events split across chunks aren't lost
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? ''; // Keep the last, possibly incomplete, line

      for (const line of lines) {
        if (!line.startsWith('data: ')) continue;
        const data = line.slice(6); // Remove 'data: ' prefix
        if (data === '[DONE]') continue;

        try {
          const event = JSON.parse(data);
          if (event.type === 'object') {
            setPerson(event.object);
          }
        } catch {
          // Ignore parse errors
        }
      }
    }

    setIsLoading(false);
  }

  return (
    <div>
      <button onClick={generate} disabled={isLoading}>
        {isLoading ? 'Generating...' : 'Generate Person'}
      </button>
      {person.name && (
        <div>
          <p><strong>Name:</strong> {person.name}</p>
          {person.age && <p><strong>Age:</strong> {person.age}</p>}
          {person.occupation && <p><strong>Occupation:</strong> {person.occupation}</p>}
        </div>
      )}
    </div>
  );
}

Type-Safe Event Handling

Meloqui exports types for consumers who want type-safe event handling when manually parsing streams:

typescript
import type { VercelObjectDelta, ObjectStreamPart } from 'meloqui';

interface Person {
  name?: string;
  age?: number;
  occupation?: string;
}

// Type-safe handler for object events
function handleObjectEvent(event: VercelObjectDelta<Person>) {
  console.log('Partial object:', event.object);
  // TypeScript knows event.object has name?, age?, occupation?
}

// Parse events with full type safety
function parseEvent(data: string): ObjectStreamPart<Person> | null {
  try {
    return JSON.parse(data) as ObjectStreamPart<Person>;
  } catch {
    return null;
  }
}

// Usage in your stream handler
const event = parseEvent(data);
if (event?.type === 'object') {
  handleObjectEvent(event); // Type narrowed to VercelObjectDelta<Person>
}

Available types:

| Type | Description |
| --- | --- |
| `VercelObjectDelta<T>` | Object event carrying a partial `object: T` |
| `ObjectStreamPart<T>` | Union of all stream event types |
| `VercelTextDelta` | Text chunk event |
| `VercelFinish` | Stream completion event |
| `VercelError` | Error event |

Advanced: Using with ai-sdk's experimental_useObject

For a more integrated experience, use ai-sdk's hooks directly with your Meloqui endpoint:

tsx
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';

const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
  occupation: z.string()
});

export default function GeneratePerson() {
  const { object, submit, isLoading } = useObject({
    api: '/api/generate-person',
    schema: PersonSchema
  });

  return (
    <div>
      <button onClick={() => submit('Generate a random person')} disabled={isLoading}>
        Generate
      </button>
      {object && (
        <pre>{JSON.stringify(object, null, 2)}</pre>
      )}
    </div>
  );
}

Note: Check the ai-sdk documentation for the latest on useObject availability and API.

Error Handling

Customize error messages sent to the client:

typescript
return streamChat(request, client, {
  onError: (error) => {
    if (error.message.includes('rate limit')) {
      return 'Too many requests. Please try again later.';
    }
    return 'An error occurred. Please try again.';
  }
});

Tool Call Support

The adapter supports streaming tool calls in Vercel's format. When a StreamChunk includes tool call or tool result information, it's automatically converted to the appropriate Vercel events:

  • tool-input-available - Emitted when a tool is called with arguments
  • tool-output-available - Emitted when a tool returns a result
typescript
// StreamChunk with tool call
{
  content: '',
  role: 'assistant',
  toolCall: {
    toolCallId: 'call_123',
    toolName: 'getWeather',
    args: { city: 'San Francisco' }
  }
}

// Becomes Vercel event:
// data: {"type":"tool-input-available","toolCallId":"call_123","toolName":"getWeather","input":{"city":"San Francisco"}}

When tool calls are present in the stream, the finish event will have finishReason: "tool-calls" instead of "stop".
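
The conversion described above can be sketched as a pure function. The `StreamChunk` shape mirrors the example; the helper itself is hypothetical, not the adapter's actual code:

```typescript
// Shapes mirroring the StreamChunk example above (assumed, simplified)
interface ToolCall {
  toolCallId: string;
  toolName: string;
  args: Record<string, unknown>;
}

interface StreamChunk {
  content: string;
  role: 'assistant';
  toolCall?: ToolCall;
}

// Hypothetical sketch: convert a chunk with a tool call into the
// Vercel-format SSE frame shown above; return null for plain text chunks
function toToolInputEvent(chunk: StreamChunk): string | null {
  if (!chunk.toolCall) return null;
  const { toolCallId, toolName, args } = chunk.toolCall;
  return `data: ${JSON.stringify({
    type: 'tool-input-available',
    toolCallId,
    toolName,
    input: args
  })}\n\n`;
}
```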

Frontend Usage

useChat

Use with Vercel AI SDK's useChat hook:

tsx
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

useCompletion

Use with Vercel AI SDK's useCompletion hook for text completions:

tsx
'use client';
import { useCompletion } from 'ai/react';

export default function Completion() {
  const { completion, input, handleInputChange, handleSubmit, isLoading } = useCompletion({
    api: '/api/completion'
  });

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Enter a prompt..."
        />
        <button type="submit" disabled={isLoading}>
          Complete
        </button>
      </form>
      {completion && (
        <div>
          <strong>Completion:</strong> {completion}
        </div>
      )}
    </div>
  );
}

Multi-Provider Example

Combine with Meloqui's provider fallback:

typescript
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4o',
  fallbackProviders: [
    { provider: 'anthropic', model: 'claude-sonnet-4-20250514' }
  ]
});

export async function POST(request: Request) {
  return streamChat(request, client);
}

Edge Runtime

The adapter is fully compatible with Vercel's Edge Runtime:

typescript
export const runtime = 'edge';

CORS Configuration

If your frontend is on a different origin than your API, you'll need to configure CORS headers:

typescript
import { ChatClient, streamChat } from 'meloqui';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

export async function POST(request: Request) {
  const response = await streamChat(request, client);

  // Add CORS headers
  response.headers.set('Access-Control-Allow-Origin', '*');
  response.headers.set('Access-Control-Allow-Methods', 'POST, OPTIONS');
  response.headers.set('Access-Control-Allow-Headers', 'Content-Type');

  return response;
}

export async function OPTIONS() {
  return new Response(null, {
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'POST, OPTIONS',
      'Access-Control-Allow-Headers': 'Content-Type'
    }
  });
}

For production, replace '*' with your specific domain.
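
One way to do that is to resolve the allowed origin per request from an allowlist; the helper and origin values below are illustrative:

```typescript
// Illustrative allowlist -- replace with your own origins
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://staging.example.com'
]);

// Echo the request's Origin header back only when it is allowlisted;
// return null (set no CORS header) otherwise
function corsOrigin(requestOrigin: string | null): string | null {
  return requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)
    ? requestOrigin
    : null;
}
```

When the allowed origin varies per request like this, also send `Vary: Origin` so caches don't serve one origin's CORS response to another.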

Native Structured Output with ChatClient

Meloqui's ChatClient now supports native structured output streaming through the streamObject method. This provides a unified API for structured output across all providers.

Using ChatClient.streamObject

typescript
// app/api/generate/route.ts
import { ChatClient, toVercelNativeObjectResponse } from 'meloqui';
import { z } from 'zod';

const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4o'
});

const PersonSchema = z.object({
  name: z.string(),
  age: z.number(),
  occupation: z.string()
});

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const stream = client.streamObject(prompt, PersonSchema);

  return toVercelNativeObjectResponse(stream);
}

export const runtime = 'edge';

toVercelNativeObjectStream

For more control, use toVercelNativeObjectStream directly:

typescript
import { ChatClient, toVercelNativeObjectStream, getVercelStreamHeaders } from 'meloqui';
import { z } from 'zod';

const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

export async function POST(request: Request) {
  const stream = client.streamObject(
    'Generate a blog post about TypeScript',
    z.object({
      title: z.string(),
      content: z.string(),
      tags: z.array(z.string())
    })
  );

  return new Response(toVercelNativeObjectStream(stream), {
    headers: {
      ...getVercelStreamHeaders(),
      'X-Custom-Header': 'value'
    }
  });
}

Checking Provider Support

Before using streamObject, check if the provider supports it:

typescript
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

if (client.capabilities.structuredOutput) {
  // Use streamObject
  const stream = client.streamObject(prompt, schema);
} else {
  // Fall back to regular chat
  const response = await client.chat(prompt);
}

Native vs External streamObject

Meloqui provides two approaches for structured output:

| Approach | Use Case |
| --- | --- |
| `ChatClient.streamObject` | Unified API across providers, automatic provider selection, integrated with Meloqui's features (rate limiting, history, etc.) |
| Direct ai-sdk `streamObject` | Maximum control, direct access to ai-sdk features, provider-specific options |

Use ChatClient.streamObject when you want Meloqui to handle provider management. Use ai-sdk's streamObject directly when you need fine-grained control or provider-specific features.

Released under the MIT License.