## What is Meloqui?
Meloqui is an enterprise-ready, multi-provider SDK for integrating Large Language Models (LLMs) into your applications. It provides a unified interface for OpenAI, Anthropic, Google, and local Ollama models, handling the complexities of each provider's API differences.
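To illustrate what a unified interface like this looks like in practice, here is a minimal self-contained sketch. The class shape, constructor arguments, and the canned reply are assumptions for illustration, not Meloqui's documented API:

```typescript
// Hypothetical sketch only: names and signatures are assumptions, not
// Meloqui's actual API. The stub returns a canned reply so the example
// is self-contained and runnable without network access.
type Provider = "openai" | "anthropic" | "google" | "ollama";

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

class ChatClient {
  constructor(private provider: Provider, private model: string) {}

  // A real client would dispatch to a provider adapter here.
  async chat(messages: Message[]): Promise<Message> {
    return {
      role: "assistant",
      content: `[${this.provider}/${this.model}] stub reply`,
    };
  }
}

// Usage: the call site looks identical for every provider.
const client = new ChatClient("anthropic", "claude-3-5-sonnet");
client.chat([{ role: "user", content: "Hello!" }]).then((reply) =>
  console.log(reply.content)
);
```

The point of the pattern is that only the constructor arguments name a provider; every call site downstream stays unchanged.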
## System Architecture
Meloqui is built with a layered architecture to ensure reliability, extensibility, and type safety.
### Key Components
- ChatClient: The main entry point. It provides a simple, unified API (`chat`, `stream`) that looks the same regardless of the underlying provider.
- Middleware Pipeline: Handles cross-cutting concerns:
  - History Manager: Automatically tracks conversation context.
  - Rate Limiter: Token-bucket algorithm to respect API limits locally.
  - Retry Manager: Exponential backoff for transient failures (429, 5xx).
- Provider Layer: Adapters that translate the unified `Message` format into provider-specific payloads (e.g., Anthropic's `messages` vs. OpenAI's `chat/completions`).
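To make the Provider Layer concrete, here is a simplified sketch of how one unified message list can map to each provider's wire format (these helpers are illustrative, not Meloqui's internal code). A real difference the adapters must bridge: Anthropic's messages API takes the system prompt as a top-level `system` field and requires `max_tokens`, while OpenAI's `chat/completions` accepts system messages inline in the `messages` array:

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// OpenAI's chat/completions accepts system messages inline in `messages`.
function toOpenAIPayload(model: string, messages: Message[]) {
  return { model, messages };
}

// Anthropic's messages API hoists system prompts into a top-level field
// and only allows user/assistant turns inside `messages`.
function toAnthropicPayload(model: string, messages: Message[]) {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  return {
    model,
    max_tokens: 1024, // required by Anthropic; 1024 is an assumed default
    ...(system ? { system } : {}),
    messages: messages.filter((m) => m.role !== "system"),
  };
}
```

Centralizing these translations in adapters is what keeps the `ChatClient` surface identical across providers.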
## Why Meloqui?
- Vendor Lock-in Free: Switch from GPT-4o to Claude 3.5 to Llama 3.2 with one line of code.
- Production Grade: Built-in retries, logging, and rate limiting mean you don't have to reinvent the wheel.
- Developer Experience: First-class TypeScript support with full type definitions.
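The retry behavior mentioned above (exponential backoff on 429 and 5xx responses) can be sketched as follows. The function name and parameters are illustrative assumptions, not Meloqui's exported API:

```typescript
// Retry transient HTTP failures (429 rate limits, 5xx server errors)
// with exponential backoff. Illustrative sketch, not Meloqui's API.
const isRetryable = (status: number) => status === 429 || status >= 500;

async function withRetries<T>(
  attempt: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 250
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await attempt();
    } catch (err: any) {
      // Give up on non-retryable errors or when attempts are exhausted.
      if (i >= maxRetries || !isRetryable(err?.status)) throw err;
      // Exponential backoff: 250 ms, 500 ms, 1000 ms, ...
      const delay = baseDelayMs * 2 ** i;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Production implementations typically add random jitter to each delay so that many clients retrying at once don't synchronize into a thundering herd.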
