What is Meloqui?

Meloqui is an enterprise-ready, multi-provider SDK for integrating Large Language Models (LLMs) into your applications. It provides a unified interface for OpenAI, Anthropic, Google, and local Ollama models, handling the complexities of each provider's API differences.

System Architecture

Meloqui is built with a layered architecture to ensure reliability, extensibility, and type safety.

Key Components

  1. ChatClient: The main entry point. It provides a simple, unified API (chat, stream) that looks the same regardless of the underlying provider.
  2. Middleware Pipeline: Handles cross-cutting concerns:
    • History Manager: Automatically tracks conversation context.
    • Rate Limiter: Token-bucket algorithm that enforces provider rate limits client-side, before requests are sent.
    • Retry Manager: Exponential backoff for transient failures (429, 5xx).
  3. Provider Layer: Adapters that translate the unified Message format into provider-specific payloads (e.g., Anthropic's messages vs OpenAI's chat/completions).
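The token-bucket approach used by the Rate Limiter can be sketched as follows. This is an illustrative implementation of the algorithm, not Meloqui's actual class or API; the names `TokenBucket` and `tryAcquire` are assumptions.

```typescript
// Illustrative token-bucket limiter: a bucket holds up to `capacity` tokens
// and refills continuously at `refillPerSecond`; each request consumes tokens,
// and requests are denied when the bucket is empty.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  // Add tokens proportional to elapsed time, capped at capacity.
  private refill(): void {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillPerSecond,
    );
    this.lastRefill = now;
  }

  // Returns true and consumes `cost` tokens if the request may proceed.
  tryAcquire(cost = 1): boolean {
    this.refill();
    if (this.tokens >= cost) {
      this.tokens -= cost;
      return true;
    }
    return false;
  }
}
```

Because refill is computed lazily from elapsed time, no background timer is needed; the bucket only does work when a request actually arrives.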
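The Retry Manager's behavior, exponential backoff on transient failures such as 429 and 5xx, follows a common pattern that can be sketched like this. The helper name `withRetry`, the default attempt count, and the delay values are assumptions for illustration, not Meloqui's real defaults.

```typescript
// Illustrative retry wrapper: re-invokes `fn` on transient HTTP errors
// (429 or 5xx), doubling the delay each attempt and adding small jitter.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status ?? 0;
      const isTransient = status === 429 || (status >= 500 && status < 600);
      if (!isTransient || attempt + 1 >= maxAttempts) throw err;
      // Exponential backoff with jitter: base, 2x base, 4x base, ...
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Non-transient errors (e.g. a 401 from a bad API key) are rethrown immediately rather than retried, since repeating them would never succeed.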

Why Meloqui?

  • No Vendor Lock-in: Switch from GPT-4o to Claude 3.5 to Llama 3.2 with a one-line change.
  • Production Grade: Built-in retries, logging, and rate limiting mean you don't have to reinvent the wheel.
  • Developer Experience: First-class TypeScript support with full type definitions.
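The unified-interface idea behind the one-line provider switch can be sketched as below. The `ChatProvider` interface, the stub adapters, and the `chat` helper are illustrative stand-ins, not Meloqui's actual types or API.

```typescript
// Minimal sketch of provider adapters behind one interface: each adapter
// would translate the call into its provider's wire format; here they are
// stubs that just tag the response.
interface ChatProvider {
  chat(prompt: string): Promise<string>;
}

const providers: Record<string, ChatProvider> = {
  "gpt-4o": { chat: async (p) => `[openai] ${p}` },
  "claude-3-5-sonnet": { chat: async (p) => `[anthropic] ${p}` },
};

// The caller only picks a model string; routing to the right adapter
// happens behind the interface.
async function chat(model: string, prompt: string): Promise<string> {
  const provider = providers[model];
  if (!provider) throw new Error(`unknown model: ${model}`);
  return provider.chat(prompt);
}
```

With this shape, switching vendors really is a one-line change at the call site: `chat("gpt-4o", ...)` becomes `chat("claude-3-5-sonnet", ...)`, and nothing else in the application needs to know which API is underneath.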

Released under the MIT License.