
# Codestral Assistant

A CLI-based coding assistant using Mistral AI with built-in code search (RAG).

## Features

- **Mistral Integration** - Uses `mistral-small-latest` for chat with tool support
- **Code Search (RAG)** - Index your codebase and ask questions about it
- **File Context** - Read local files into the chat context with `/read`
- **File Writing** - AI can create and modify files with your approval
- **Streaming** - Real-time code generation
- **Persistence** - Saves conversation history

## Quick Start

```bash
# Get key from https://console.mistral.ai/
export MISTRAL_API_KEY=your_key

# Run the assistant
npm run recipe:codestral
```

## How It Works

This recipe demonstrates how to use Meloqui's native Mistral provider with RAG (Retrieval-Augmented Generation) for code search. It uses bundled local embeddings that work without external services.
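The retrieval side of RAG can be sketched in a few lines. This is a toy illustration, not Meloqui's actual implementation: `embed` is a hashed bag-of-words stand-in for a real embedding model, and `TinyDocumentStore` only mimics the shape of a document store with `addDocuments` and `search`.

```typescript
// Toy embedding: hash each word into a fixed-size bag-of-words vector.
// A real embedding model captures semantics; this only captures word overlap.
function embed(text: string, dims = 64): number[] {
  const v = new Array(dims).fill(0);
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    let h = 0;
    for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
    v[h % dims] += 1;
  }
  return v;
}

// Cosine similarity between two vectors; guards against zero vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

interface Doc { source: string; content: string; vector: number[] }

class TinyDocumentStore {
  private docs: Doc[] = [];

  // Embed each document once at index time.
  addDocuments(items: { source: string; content: string }[]): void {
    for (const { source, content } of items) {
      this.docs.push({ source, content, vector: embed(content) });
    }
  }

  // Embed the query and return the closest documents.
  search(query: string, topK = 1): Doc[] {
    const q = embed(query);
    return [...this.docs]
      .sort((a, b) => cosine(q, b.vector) - cosine(q, a.vector))
      .slice(0, topK);
  }
}
```

At query time the assistant embeds the user's question, retrieves the nearest chunks, and prepends them to the prompt so the model answers with your code in context.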

### Connection Setup

We connect to Mistral's API using the native `mistral` provider, with a document store for RAG:

```typescript
// Create the Codestral assistant
export const assistant = createAssistant({
  name: 'Codestral',
  provider: 'mistral',
  model: MODEL,
  apiKeyEnvVar: 'MISTRAL_API_KEY',
  systemPrompt: SYSTEM_PROMPT,
  banner: generateBanner({
    name: 'Codestral',
    tagline: 'Powered by Mistral AI',
    lines: [
      "Type '/tree' to see files",
      "Type '/index [path]' to index codebase",
      "Type '/read <files>' to analyze files",
      "Type 'help' for all commands"
    ]
  })
  // Note: assistant-core provides sensible defaults for index/agent config
});
```

### The `/read` Command

The assistant intercepts user input to handle the `/read` command, loading file content before sending it to the model:

```typescript
if (trimmed.startsWith('/read ')) {
  const patterns = trimmed.substring(6).trim().split(/\s+/);
  const filesToRead = expandGlobs(patterns);
  const fileContents: { path: string; content: string }[] = [];

  for (const filePath of filesToRead) {
    const content = readFileContent(filePath);
    if (content) {
      fileContents.push({ path: filePath, content });
    }
  }

  const message = `I am sharing the content of ${fileContents.length} file(s)...`;
  // Send to LLM with file context...
}
```
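`expandGlobs` and `readFileContent` are helpers from the recipe's source. A simplified sketch of what they might look like (the real versions presumably support full glob syntax; this one handles only literal paths and a single `*` wildcard within one directory):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Read a file, returning null instead of throwing so the /read loop
// can silently skip unreadable paths. Simplified stand-in for the
// recipe's helper.
function readFileContent(filePath: string): string | null {
  try {
    return fs.readFileSync(filePath, 'utf8');
  } catch {
    return null;
  }
}

// Minimal stand-in for expandGlobs: literal paths pass through, and a
// single "prefix*suffix" wildcard is matched against one directory.
function expandGlobs(patterns: string[]): string[] {
  const files: string[] = [];
  for (const pattern of patterns) {
    const dir = path.dirname(pattern);
    const base = path.basename(pattern);
    if (base.includes('*')) {
      const [prefix, suffix] = base.split('*'); // only one '*' supported
      for (const name of fs.readdirSync(dir)) {
        if (name.startsWith(prefix) && name.endsWith(suffix)) {
          files.push(path.join(dir, name));
        }
      }
    } else {
      files.push(pattern);
    }
  }
  return files;
}
```

Returning `null` (rather than throwing) keeps the `/read` loop resilient: one bad path doesn't abort reading the rest of the files.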

### The `/index` Command

Index your codebase for semantic code search:

```typescript
if (trimmed === '/index' || trimmed.startsWith('/index ')) {
  const files = findFiles(indexPath);
  const docs = files
    .map(file => ({
      content: readFileContent(file),
      source: path.relative(process.cwd(), file)
    }))
    .filter(doc => doc.content); // skip unreadable files

  await documentStore.addDocuments(docs);
  // Files are now searchable via RAG
}
```
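`findFiles` is another recipe helper. A plausible minimal version walks the tree recursively, skips dependency and build directories, and keeps only source-like extensions (the extension and skip lists here are assumptions, not the recipe's actual configuration):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Assumed lists; the real recipe may index a different set of files.
const CODE_EXTENSIONS = new Set(['.ts', '.tsx', '.js', '.jsx', '.md']);
const SKIP_DIRS = new Set(['node_modules', '.git', 'dist']);

// Recursively collect indexable files under `root`.
function findFiles(root: string): string[] {
  const results: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) {
      if (!SKIP_DIRS.has(entry.name)) results.push(...findFiles(full));
    } else if (CODE_EXTENSIONS.has(path.extname(entry.name))) {
      results.push(full);
    }
  }
  return results;
}
```

Skipping `node_modules` matters for both speed and relevance: indexing third-party code would bloat the embedding step and pollute search results.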

## Architecture

### Commands

| Command | Description |
| --- | --- |
| `/index [path]` | Index files for code search (defaults to `src/` or `.`) |
| `/read <files>` | Read file(s) into context (supports globs) |
| `/tree` | Show project structure |
| `history` | View conversation history |
| `clear` | Clear conversation history |
| `help` | Show all commands |
| `exit` | Quit the assistant |

## System Requirements

| Resource | Requirement |
| --- | --- |
| Memory | ~500 MB during embedding generation |
| Disk | ~80 MB for the model (cached in `~/.cache/transformers`) |
| First run | 30-60 s model download (one-time) |

## Customization

### Change the System Prompt

You can customize the assistant's personality or coding style guidelines in `lib.ts`:

```typescript
export const SYSTEM_PROMPT = `You are a helpful coding assistant.
- Always use TypeScript
- Prefer functional programming patterns
- Be concise`;
```

### Use Ollama for Embeddings

For faster embeddings, use a local Ollama instance:

```bash
export OLLAMA_URL=http://localhost:11434
ollama pull nomic-embed-text
```
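Under the hood, an Ollama embedding call is a single HTTP request against the `/api/embeddings` endpoint. A conceptual sketch (the recipe handles this wiring for you once `OLLAMA_URL` is set; `embedWithOllama` is a hypothetical name, not a recipe export):

```typescript
const OLLAMA_URL = process.env.OLLAMA_URL ?? 'http://localhost:11434';

// Build the request for Ollama's embeddings endpoint:
// POST /api/embeddings with { model, prompt }.
function buildEmbeddingRequest(model: string, prompt: string) {
  return {
    url: `${OLLAMA_URL}/api/embeddings`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, prompt }),
    },
  };
}

// Hypothetical helper: send one text to Ollama, get back its vector.
async function embedWithOllama(text: string): Promise<number[]> {
  const { url, init } = buildEmbeddingRequest('nomic-embed-text', text);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Ollama error: ${res.status}`);
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}
```

Because Ollama runs natively on your machine (with GPU acceleration where available), it is typically faster than the bundled transformers.js embeddings.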

### Add More Commands

You could add commands like `/search` to query the indexed codebase, or `/review` to run a specific review prompt on a file.
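For example, a `/search` handler might look like this. The `documentStore.search(query, topK)` signature is an assumption here, mirroring how `/index` uses the store:

```typescript
interface SearchHit { source: string; content: string }
interface DocumentStore { search(query: string, topK: number): SearchHit[] }

// Hypothetical /search handler: returns formatted results for the
// command, or null so other handlers can try the input.
function handleSearch(trimmed: string, documentStore: DocumentStore): string | null {
  if (!trimmed.startsWith('/search ')) return null; // not our command
  const query = trimmed.substring('/search '.length).trim();
  const hits = documentStore.search(query, 3);
  if (hits.length === 0) return 'No matches in the index.';
  return hits.map(h => `- ${h.source}`).join('\n');
}
```

Returning `null` for non-matching input follows the same interception pattern the recipe uses for `/read` and `/index`, so new commands compose cleanly with the existing dispatch loop.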

## Full Source

View on GitHub

Released under the MIT License.