# System Prompts
Meloqui uses a layered system prompt architecture that adapts to different models and providers. This guide explains how prompts are structured and how to configure them for optimal tool-calling behavior.
## Why This Matters
Smaller or weaker models often struggle with tool calling. They may:

- Describe actions instead of calling tools
- Confuse similar tools (e.g., `search_files` vs `run_typecheck`)
- Output markdown instead of JSON
- Lose the tool-calling pattern after a few turns
The system prompt architecture addresses these issues with model-specific configurations.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│                     Model-Specific Config                       │
│   (overrides, prepends, appends, examples, Ollama options)      │
├─────────────────────────────────────────────────────────────────┤
│                      Provider Additions                         │
│          (local privacy messaging for ollama/docker)            │
├─────────────────────────────────────────────────────────────────┤
│                      Base System Prompt                         │
│           (core assistant identity, tool usage rules)           │
└─────────────────────────────────────────────────────────────────┘
```

## Configuration via Config File
You can customize the system prompt in your `.meloquirc.yaml` file. This supports simple string replacement, modifiers for fine-grained control, and per-profile customization.
### Simple String Replacement
Replace the entire base prompt with your own:
```yaml
# .meloquirc.yaml
systemPrompt: "You are a helpful coding assistant for Python projects."
```

Note: String configs replace only the base prompt. Tool descriptions and working directory context are still included automatically. To disable these, use a modifiers object with `include` flags.
### Modifiers Object
Use modifiers to prepend, append, or add instructions without replacing the entire prompt:
```yaml
# .meloquirc.yaml
systemPrompt:
  prepend: |
    IMPORTANT: This project uses TypeScript strict mode.
    All code must pass strict type checking.
  append: |
    Remember: This is a monorepo using pnpm workspaces.
  instructions:
    - "Prefer functional programming patterns"
    - "Always suggest unit tests for new functions"
    - "Use async/await instead of callbacks"
```

### Complete Override
Ignore the base prompt entirely and use your own:
```yaml
# .meloquirc.yaml
systemPrompt:
  override: |
    You are a specialized SQL assistant. You help users write
    efficient database queries and optimize schema designs.
    Focus only on SQL-related tasks.
```

### Include Flags
Control which automatic additions are included in the system prompt:
```yaml
# .meloquirc.yaml
systemPrompt:
  prepend: "Focus on security best practices."
  include:
    toolDescriptions: true   # Include tool descriptions (default: true)
    workingDirectory: true   # Include CWD context (default: true)
```

Set flags to `false` to create minimal prompts:
```yaml
# Minimal prompt without automatic additions
systemPrompt:
  override: "You are a concise assistant. Be brief."
  include:
    toolDescriptions: false
    workingDirectory: false
```

### Per-Profile Prompts
Define different system prompts for different profiles:
```yaml
# .meloquirc.yaml
provider: anthropic
model: claude-sonnet-4-20250514

profiles:
  python:
    provider: anthropic
    model: claude-sonnet-4-20250514
    systemPrompt:
      prepend: "This is a Python-only project using Poetry."
      instructions:
        - "Use type hints for all functions"
        - "Follow PEP 8 style guidelines"
        - "Prefer pathlib over os.path"
  frontend:
    provider: openai
    model: gpt-4o
    systemPrompt:
      prepend: "This is a React/TypeScript frontend project."
      instructions:
        - "Use functional components with hooks"
        - "Prefer CSS modules for styling"
  minimal:
    systemPrompt:
      override: "You are a concise assistant. Be brief."
      include:
        toolDescriptions: false
```

Use profiles with the `--profile` flag:
```bash
melo --profile python
melo --profile frontend
melo --profile minimal
```

### Combining Top-Level and Profile Prompts
When a profile has a `systemPrompt`, it is merged with the top-level `systemPrompt`:
```yaml
# Top-level applies to all profiles
systemPrompt:
  prepend: "Always be concise."

profiles:
  python:
    systemPrompt:
      # This replaces the top-level prepend
      prepend: "This is a Python project."
      instructions:
        - "Use type hints"
```

Profile settings take precedence over top-level settings for the same modifier type.
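The merge semantics can be sketched roughly as follows. `mergeModifiers` and its per-key precedence are illustrative (the field names mirror the config keys above, but this is not Meloqui's actual implementation):

```typescript
// Sketch of profile-over-top-level merging. Assumed semantics: a profile's
// value for a modifier replaces the top-level value for that same modifier,
// while untouched modifiers fall through from the top level.
interface SystemPromptModifiers {
  prepend?: string;
  append?: string;
  instructions?: string[];
}

function mergeModifiers(
  base: SystemPromptModifiers,
  override: SystemPromptModifiers
): SystemPromptModifiers {
  // Per-key precedence: take the override's value when present.
  return {
    prepend: override.prepend ?? base.prepend,
    append: override.append ?? base.append,
    instructions: override.instructions ?? base.instructions
  };
}

const merged = mergeModifiers(
  { prepend: 'Always be concise.' },
  { prepend: 'This is a Python project.', instructions: ['Use type hints'] }
);
// merged.prepend === 'This is a Python project.'
```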
## Base Prompt
The core prompt is minimal and provider-agnostic:
```typescript
const BASE_PROMPT = `You are a helpful coding assistant.
- Keep responses concise for CLI output.
- Prefer TypeScript/JavaScript unless asked otherwise.
- When user asks to CREATE/WRITE/SAVE a file, use the write_file tool.
- Only show code when explicitly asked to SEE/REVIEW.`;
```

## Creating System Prompts
Use `createSystemPrompt()` to build prompts with provider additions and tool descriptions:
```typescript
import { createSystemPrompt } from '@meloqui/assistant-core';

const prompt = createSystemPrompt({
  provider: 'ollama',
  cwd: '/path/to/project',
  tools: [
    { name: 'read_file', description: 'Read a file' },
    { name: 'write_file', description: 'Write a file' }
  ],
  additionalInstructions: 'Focus on TypeScript best practices.'
});
```

### Options
| Option | Type | Description |
|---|---|---|
| `provider` | `string` | Provider name (`ollama`, `openai`, etc.) |
| `isLocal` | `boolean` | Whether the provider runs locally |
| `includeTools` | `boolean` | Include tool descriptions (default: `true`) |
| `tools` | `ToolInfo[]` | Tools to generate descriptions for |
| `additionalInstructions` | `string` | Extra instructions to append |
| `customBase` | `string` | Replace the base prompt entirely |
| `cwd` | `string` | Working directory for file operations |
## Model-Specific Configurations
For models with known quirks, Meloqui provides prompt configs that modify behavior:
### Modifier Types
| Type | Effect |
|---|---|
| `override` | Replace the entire system prompt |
| `prepend` | Add text before the base prompt |
| `append` | Add text after the base prompt |
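The three modifier types reduce to plain string operations. A minimal sketch (the union shape is illustrative; the real types live in `ModelPromptConfig`):

```typescript
// Sketch of the three modifier types as string operations.
// Field naming follows the examples in this doc: `override` carries a full
// `prompt`, while `prepend`/`append` carry a `text` fragment.
type PromptModifier =
  | { type: 'override'; prompt: string }
  | { type: 'prepend'; text: string }
  | { type: 'append'; text: string };

function applyModifier(basePrompt: string, mod: PromptModifier): string {
  switch (mod.type) {
    case 'override':
      return mod.prompt;               // discard the base entirely
    case 'prepend':
      return mod.text + basePrompt;    // model-specific text first
    case 'append':
      return basePrompt + mod.text;    // model-specific text last
  }
}

console.log(applyModifier('Base.', { type: 'prepend', text: 'First. ' }));
// "First. Base."
```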
### Example: Mistral Tiny
Mistral Tiny confuses `search_files` with `run_typecheck` when the search query contains tool names:
```typescript
export const mistralTinyConfig: ModelPromptConfig = {
  pattern: 'mistral-tiny*',
  description: 'Mistral Tiny needs explicit tool disambiguation',
  systemPrompt: {
    type: 'override',
    prompt: `You are a coding assistant. Output tool calls as JSON.
CRITICAL DISAMBIGUATION:
- "Search for" / "Find text" => search_files (searches FILE CONTENTS)
- "Run typecheck" / "Check types" => run_typecheck (RUNS TypeScript)
Output format: {"tool":"<name>","args":{...}}
If tools disabled (TOOLS=OFF): output FALLBACK`
  }
};
```

### Registered Model Configs
| Pattern | Known Issue |
|---|---|
| `smollm2*` | Very small, needs strict JSON contract |
| `smollm3*` | Reasoning model, needs `num_predict` limit |
| `qwen2.5*` | Needs TOOLS=ON/OFF prefix detection |
| `qwen3:1.7b*` | Heavy reasoning overhead |
| `gemma3*` | High context capacity (32k safe) |
| `llama3.2*` | Safe for 1b/3b sizes |
| `phi4*` | Microsoft model, 16k context |
| `granite3*` | IBM Granite, 32k context |
| `mistral-tiny*` | Confuses search with run commands |
| `mistral-small*` | Sometimes skips tool calls |
| `open-mistral-nemo*` | Same as `mistral-tiny` |
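The patterns above are matched against the model name glob-style. Assuming a trailing `*` matches any suffix (a guess at the semantics, not the library's actual matcher), the lookup can be sketched as:

```typescript
// Minimal glob-matcher sketch: only a trailing '*' wildcard is supported,
// which is all the registered patterns above appear to use.
function matchesPattern(pattern: string, modelName: string): boolean {
  if (pattern.endsWith('*')) {
    // 'mistral-tiny*' matches any name starting with 'mistral-tiny'
    return modelName.startsWith(pattern.slice(0, -1));
  }
  return modelName === pattern;
}

console.log(matchesPattern('mistral-tiny*', 'mistral-tiny-latest')); // true
console.log(matchesPattern('qwen3:1.7b*', 'qwen3:1.7b-instruct'));   // true
console.log(matchesPattern('llama3.2*', 'llama3.1'));                // false
```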
### Using Prompt Configs
```typescript
import { applyPromptConfig, getModelOptions } from '@meloqui/assistant-core';

// Get modified prompts for a specific model
const { systemPrompt, userMessage, appliedConfig } = applyPromptConfig(
  'mistral-tiny-latest',
  baseSystemPrompt,
  rawUserMessage
);

// Get Ollama options if configured
const options = getModelOptions('qwen3:1.7b');
// => { num_predict: 2048, num_ctx: 8192 }
```

## Tool Calling Contract
For models that struggle with tool calling, use a strict JSON contract:
```text
## Tool calling contract (VERY IMPORTANT)
When you decide to use a tool, you MUST output EXACTLY ONE line of valid JSON:
{"tool":"<tool_name>","args":{...}}
No markdown. No code fences. No commentary. No extra keys. No surrounding text.
If you cannot or should not use a tool, output EXACTLY ONE line:
FALLBACK
```

## Ollama-Specific Options
Some models need runtime options for optimal performance:
```typescript
interface ModelPromptConfig {
  // ...
  options?: {
    num_ctx?: number;      // Context window (tokens)
    num_predict?: number;  // Max generation length
  };
}
```

### Context Size Recommendations
| Model | Recommended `num_ctx` | Maximum `num_ctx` | Notes |
|---|---|---|---|
| smollm2 | 2,048 | 2,048 | Limited by size |
| smollm3 | 16,384 | 65,536 | Reasoning model |
| gemma3 | 32,768 | 131,072 | High capacity |
| qwen2.5* | 8,192 | 32,768 | Includes coder |
| qwen3 | 8,192 | 32,768 | Reasoning model |
| llama3.2 | 8,192 | 131,072 | Safe for 1b/3b |
| phi4 | 16,384 | 16,384 | Phi-4 mini |
| granite3 | 32,768 | 131,072 | IBM Granite |
## Troubleshooting
### Model describes actions but doesn't call tools

**Symptom:** Logs show `toolCallCount: 0`, but the model says "I created the file."

**Cause:** A weak model loses the tool-calling pattern.

**Fix:** Use an `override` prompt with a strict JSON contract.

### Wrong tool called

**Symptom:** The user says "search for run_typecheck", but the model calls `run_typecheck` instead of `search_files`.

**Cause:** The query contains a tool name, confusing the model.

**Fix:** Add disambiguation examples to the prompt.

### Model outputs markdown instead of JSON

**Symptom:** Tool calls are wrapped in `` ```json `` code fences.

**Cause:** The model follows chat patterns instead of the tool contract.

**Fix:** Explicitly forbid markdown in the prompt.

### `toolCallCount` drops to 0 after initial success

**Symptom:** The first few turns use tools, then the model stops.

**Cause:** The model loses context or gets confused.

**Fix:** Keep prompts short, add tool reminders, and consider a larger model.
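When debugging these failures, it can help to validate raw model output against the tool-calling contract before dispatching. The sketch below is not part of Meloqui's API; the type and function names are illustrative:

```typescript
// Validate one line of model output against the strict contract:
// either {"tool":"<name>","args":{...}} on a single line, or FALLBACK.
// (The contract's "no extra keys" rule is omitted here for brevity.)
type ContractResult =
  | { kind: 'tool'; tool: string; args: Record<string, unknown> }
  | { kind: 'fallback' }
  | { kind: 'invalid'; reason: string };

function parseToolCall(output: string): ContractResult {
  const line = output.trim();
  if (line === 'FALLBACK') return { kind: 'fallback' };
  if (line.includes('\n')) return { kind: 'invalid', reason: 'multiple lines' };
  if (line.startsWith('```')) return { kind: 'invalid', reason: 'markdown fence' };
  try {
    const parsed = JSON.parse(line);
    if (typeof parsed.tool !== 'string' || typeof parsed.args !== 'object' || parsed.args === null) {
      return { kind: 'invalid', reason: 'missing tool/args keys' };
    }
    return { kind: 'tool', tool: parsed.tool, args: parsed.args };
  } catch {
    return { kind: 'invalid', reason: 'not valid JSON' };
  }
}

console.log(parseToolCall('{"tool":"read_file","args":{"path":"a.ts"}}').kind); // "tool"
console.log(parseToolCall('FALLBACK').kind);                                    // "fallback"
```

Logging the `invalid` reason per turn makes it obvious which failure mode from the list above a given model is exhibiting.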
## Custom Configurations
Register your own model config:
```typescript
import { registerPromptConfig } from '@meloqui/assistant-core';

registerPromptConfig({
  pattern: 'my-custom-model*',
  description: 'My custom model needs special handling',
  systemPrompt: {
    type: 'prepend',
    text: 'IMPORTANT: Always use tools when asked to perform actions.\n\n'
  },
  options: {
    num_ctx: 4096
  }
});
```

## API Reference
### createSystemPrompt(options)
Creates a system prompt with provider additions and tool descriptions.
### applyPromptConfig(modelName, systemPrompt, userMessage)
Applies model-specific modifications to prompts.
### getModelOptions(modelName)

Returns Ollama-specific options for a model (e.g., `num_ctx`, `num_predict`).
### registerPromptConfig(config)
Registers a custom model configuration.
### listPromptConfigs()
Returns all registered model configurations.
### buildSystemPrompt(options)
Builds a complete system prompt from configuration and runtime context.
```typescript
import { buildSystemPrompt } from '@meloqui/assistant-core';

const { prompt, include } = buildSystemPrompt({
  config: userConfig.systemPrompt,  // string or modifier object
  basePrompt: 'You are a helpful assistant.',
  toolDescriptions: '## Available Tools\n...',
  workingDirectory: '/path/to/project',
  currentTime: '2024-01-15 10:30:00',
  additionalContext: '## RAG Context\n...'
});
```

### resolveSystemPromptToString(config, basePrompt?)
Converts a `SystemPromptConfig` (string or modifier object) to a simple string. Useful for backwards compatibility with APIs that expect a string.
```typescript
import { resolveSystemPromptToString } from '@meloqui/assistant-core';

const prompt = resolveSystemPromptToString(config.systemPrompt);
// Returns string | undefined
```

### mergeSystemPromptConfigs(base, override)
Merges two system prompt configs, with the override taking precedence.
```typescript
import { mergeSystemPromptConfigs } from '@meloqui/assistant-core';

const merged = mergeSystemPromptConfigs(
  topLevelConfig.systemPrompt,
  profileConfig.systemPrompt
);
```

### getSystemPromptSummary(config, maxLength?)
Returns a display-friendly summary of a system prompt config, truncated to `maxLength` (default: 50).
```typescript
import { getSystemPromptSummary } from '@meloqui/assistant-core';

const summary = getSystemPromptSummary(config.systemPrompt);
// "You are a helpful..." or "[modifiers: prepend, 3 instructions]"
```