
Reliability

Network requests fail. APIs get rate limited. LLM calls can hang. Meloqui handles these failure modes automatically with configurable timeouts and retries.

Request Timeouts

Prevent requests from hanging indefinitely:

typescript
// Default timeout for all requests
const client = new ChatClient({
  defaultTimeout: 30000  // 30 seconds
});

// Per-request timeout (overrides default)
const response = await client.chat('Hello', { timeout: 10000 });

When a timeout occurs, a TimeoutError is thrown.

Note: For streaming requests, timeout applies to stream setup only. Once chunks start flowing, there's no timeout enforcement. Handle long-running streams with your own logic if needed.
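One way to supply that logic is a per-chunk deadline wrapper. The `withChunkTimeout` helper below is a hypothetical sketch, not part of the library; it only assumes the stream is a standard `AsyncIterable`:

```typescript
// Hypothetical helper (not part of Meloqui): enforces a per-chunk deadline
// on any AsyncIterable, since the client's timeout covers stream setup only.
async function* withChunkTimeout<T>(
  stream: AsyncIterable<T>,
  ms: number
): AsyncGenerator<T> {
  const iterator = stream[Symbol.asyncIterator]();
  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    // Race the next chunk against a timer; reject if the gap exceeds `ms`
    const result = await Promise.race([
      iterator.next(),
      new Promise<never>((_, reject) => {
        timer = setTimeout(
          () => reject(new Error(`No chunk within ${ms}ms`)),
          ms
        );
      }),
    ]).finally(() => clearTimeout(timer));
    if (result.done) return;
    yield result.value;
  }
}
```

Wrap the stream before iterating, e.g. `for await (const chunk of withChunkTimeout(stream, 15000)) { ... }`; a stall longer than the deadline then surfaces as a rejection in your loop.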

Retry Configuration

Configure the exponential backoff strategy:

typescript
const client = new ChatClient({
  retryConfig: {
    maxAttempts: 5,           // Try up to 5 times
    initialBackoffMs: 1000,   // Wait 1s first
    maxBackoffMs: 30000,      // Max wait 30s
    backoffMultiplier: 1.5    // Increase wait by 50% each time
  }
});

Retries happen automatically on:

  • 429 (Rate Limit)
  • 5xx (Server Error)
  • Network timeouts
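With the configuration above, the wait between attempts grows geometrically. The sketch below assumes the delay is `min(initialBackoffMs × backoffMultiplier^n, maxBackoffMs)` with no jitter; it mirrors only the documented fields, and the client's actual schedule may differ:

```typescript
// Sketch of the delay sequence implied by the retry config above
// (assumption: delay = min(initial * multiplier^n, max), no jitter).
function backoffDelays(
  maxAttempts: number,
  initialBackoffMs: number,
  maxBackoffMs: number,
  backoffMultiplier: number
): number[] {
  const delays: number[] = [];
  // One wait between each pair of attempts: maxAttempts - 1 waits total
  for (let attempt = 0; attempt < maxAttempts - 1; attempt++) {
    const raw = initialBackoffMs * Math.pow(backoffMultiplier, attempt);
    delays.push(Math.min(raw, maxBackoffMs));
  }
  return delays;
}

// With the config above, the waits are 1000, 1500, 2250, 3375 ms
console.log(backoffDelays(5, 1000, 30000, 1.5));
```

So five attempts spend at most roughly 8 seconds waiting between retries before the request finally fails.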

Streaming Limitation

Note: For streaming requests (client.stream()), retry only covers failures during stream creation. If a stream fails mid-way through (e.g., network disconnection after receiving partial response), no retry occurs. The caller must handle mid-stream errors in their iteration loop.
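A minimal sketch of that caller-side handling, assuming the stream is an `AsyncIterable<string>` of text chunks (the exact chunk shape returned by `client.stream()` is an assumption here):

```typescript
// Sketch: consume a stream, keeping any partial text if it dies mid-way.
// Assumes chunks are plain strings; adapt to the actual chunk type.
async function streamWithPartial(
  stream: AsyncIterable<string>
): Promise<{ text: string; completed: boolean }> {
  let text = '';
  try {
    for await (const chunk of stream) {
      text += chunk; // accumulate the partial response as it arrives
    }
    return { text, completed: true };
  } catch (err) {
    // Stream failed mid-way (e.g. network drop after partial output).
    // Decide here whether to show the partial text, retry, or both.
    console.error('Stream failed after partial response:', err);
    return { text, completed: false };
  }
}
```

If `completed` is false, retrying means issuing a whole new request yourself; the client will not resume the broken stream.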

Released under the MIT License.