Streaming

Meloqui supports real-time response streaming using standard AsyncIterators. This lets you display partial responses to users as they are generated, reducing perceived latency.

```typescript
const client = new ChatClient({ provider: 'openai', model: 'gpt-4o' });

// stream() returns an async iterator
const stream = client.stream('Write a long poem about the sea.');

for await (const chunk of stream) {
  // Each chunk is a small text fragment
  process.stdout.write(chunk.content);
}
```

Streaming is supported across all providers (OpenAI, Anthropic, Google, Ollama).
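Because the stream is a standard AsyncIterator, accumulating the full response while displaying it incrementally takes only a few lines. A minimal self-contained sketch; `mockStream` is a stand-in async generator used here in place of a real `client.stream()` call:

```typescript
// Stand-in for client.stream(); a real stream yields chunks from the provider.
async function* mockStream(): AsyncGenerator<{ content: string }> {
  for (const content of ['The ', 'sea ', 'is ', 'vast.']) {
    yield { content };
  }
}

// Write each fragment as it arrives while accumulating the complete text.
async function collect(stream: AsyncIterable<{ content: string }>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    process.stdout.write(chunk.content);
    full += chunk.content;
  }
  return full;
}
```

The accumulated string is useful when you need the complete response afterward, e.g. to store it in conversation history while still rendering tokens live.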

Error Handling

When using `retryConfig`, retries apply only to stream creation failures. Once streaming has begun, errors during iteration (e.g., a network disconnection mid-stream) are not retried automatically. Handle them in your iteration loop:

```typescript
try {
  for await (const chunk of stream) {
    process.stdout.write(chunk.content);
  }
} catch (error) {
  // Handle mid-stream errors (network drops, etc.)
  console.error('Stream error:', error);
}
```
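If you do want to recover from a mid-stream failure, you must re-create the stream yourself and restart consumption from the beginning, since an async iterator cannot be resumed. A minimal sketch of such a wrapper; `makeStream` is a hypothetical factory standing in for something like `() => client.stream(prompt)`:

```typescript
// Retry the whole stream from scratch if iteration fails mid-stream.
// Note: on retry, previously received chunks are discarded and regenerated.
async function consumeWithRetry(
  makeStream: () => AsyncIterable<{ content: string }>,
  onChunk: (content: string) => void,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      for await (const chunk of makeStream()) {
        onChunk(chunk.content);
      }
      return; // stream completed without error
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      console.error(`Stream attempt ${attempt} failed, retrying:`, error);
    }
  }
}
```

Because the response is regenerated on each attempt, buffer chunks and render only after the stream completes if showing duplicated partial output to the user is unacceptable.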

Released under the MIT License.