# Security Policy

## Supported Versions
We take security seriously and aim to provide security updates for the following versions:
| Version | Supported          |
| ------- | ------------------ |
| 0.1.x   | :white_check_mark: |
| < 0.1   | :x:                |
## Reporting a Vulnerability

**Please do not report security vulnerabilities through public GitHub issues.**
If you discover a security vulnerability in meloqui, please report it privately to help us address it before public disclosure.
### How to Report

**GitHub Security Advisories (Recommended):** Use GitHub’s private vulnerability reporting:

1. Go to the **Security** tab of this repository
2. Click **“Report a vulnerability”**
3. Fill out the advisory form with details
**Include in your report:**

- Description of the vulnerability
- Steps to reproduce the issue
- Potential impact
- Suggested fix (if any)
- Your contact information
**What to expect:**

- **Initial response:** Acknowledgment of receipt within 48 hours
- **Assessment:** We’ll investigate and assess the severity within 5 business days
- **Updates:** Regular updates on our progress
- **Resolution:** A timeline for the fix and public disclosure
- **Credit:** Recognition in the release notes (if desired)
## Responsible Disclosure

We follow responsible disclosure practices:

- **Private reporting** - Report vulnerabilities privately first
- **Coordinated disclosure** - Work with us on the timing of public disclosure
- **Credit** - We’ll credit you for the discovery (unless you prefer anonymity)
- **No retaliation** - We won’t take legal action over good-faith security research
## Security Best Practices
When using meloqui in your applications, follow these security best practices:
### API Key Management

**Never hardcode API keys in your source code.**
❌ **Bad:**

```typescript
const client = new ChatClient({
  provider: 'openai',
  apiKey: 'sk-1234567890abcdef' // Never do this!
});
```
✅ **Good:**

```typescript
// Use environment variables
const client = new ChatClient({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY
});
```
**Recommendations:**

- Store API keys in environment variables (a fail-fast resolution sketch follows this list)
- Use secret management services (AWS Secrets Manager, HashiCorp Vault, etc.)
- Never commit `.env` files to version control
- Rotate API keys regularly
- Use different keys for development, staging, and production
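If you resolve keys from environment variables, it also helps to fail fast at startup when a key is missing rather than failing on the first request. A minimal sketch; the `requireEnv` helper is hypothetical and not part of meloqui:

```typescript
// Hypothetical helper: resolve a required secret from the environment,
// failing fast at startup rather than at the first API call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const client = new ChatClient({
  provider: 'openai',
  apiKey: requireEnv('OPENAI_API_KEY')
});
```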
### Input Validation

**Always validate and sanitize user input before sending it to LLMs.**
```typescript
function sanitizeInput(userInput: string): string {
  // Remove potential injection attempts
  // Limit input length
  // Filter sensitive data
  return userInput.trim().slice(0, 1000);
}

const response = await client.chat(sanitizeInput(userInput));
```
**Considerations:**

- Implement rate limiting to prevent abuse
- Monitor for unusual patterns or potential attacks
- Log security-relevant events
- Set maximum token limits to control costs (a rough sketch follows this list)
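Exact token counts depend on each provider's tokenizer, so a precise limit is hard to enforce client-side; a rough character-based budget is often enough as a first guard. A sketch under that assumption (the constants and the `enforceTokenBudget` helper are illustrative):

```typescript
// Rough guard: reject oversized inputs before they reach the provider.
// ~4 characters per token is only a rule of thumb for English text.
const MAX_INPUT_TOKENS = 1000;
const APPROX_CHARS_PER_TOKEN = 4;

function enforceTokenBudget(input: string): string {
  const approxTokens = Math.ceil(input.length / APPROX_CHARS_PER_TOKEN);
  if (approxTokens > MAX_INPUT_TOKENS) {
    throw new Error('Input exceeds the maximum allowed size');
  }
  return input;
}

const response = await client.chat(enforceTokenBudget(sanitizeInput(userInput)));
```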
### Data Privacy

**Be cautious with sensitive data:**

- **PII (Personally Identifiable Information):** Avoid sending PII to LLM providers unless necessary (a coarse redaction sketch appears after the example below)
- **Confidential data:** Consider data residency and privacy requirements
- **Conversation history:** Be aware that history is stored and may be persisted
- **Compliance:** Ensure usage complies with GDPR, HIPAA, or other applicable regulations
```typescript
// For sensitive applications, avoid storing history
const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4'
  // No conversationId = no history stored
});
```
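If user text must be forwarded but you want to reduce incidental PII, a coarse redaction pass can run before the request. A best-effort sketch using simple regular expressions; the patterns are illustrative only and will not catch every form of PII:

```typescript
// Best-effort redaction of common PII patterns before sending text to a provider.
// These regexes are illustrative and will miss many real-world cases.
function redactPii(text: string): string {
  return text
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[redacted-email]')      // email addresses
    .replace(/\b(?:\d[ -]?){13,16}\b/g, '[redacted-card]')        // card-like digit runs
    .replace(/\b\d{3}[- ]?\d{2}[- ]?\d{4}\b/g, '[redacted-ssn]'); // US SSN-like patterns
}

const response = await client.chat(redactPii(userInput));
```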
### Dependencies

**Keep dependencies up to date.**
```bash
# Check for vulnerabilities
npm audit

# Update dependencies
npm update

# Check for outdated packages
npm outdated
```
**Monitoring:**
- Enable GitHub Dependabot alerts
- Regularly review and update dependencies
- Subscribe to security advisories for dependencies
### Rate Limiting and Costs

**Implement proper rate limiting to prevent abuse.**
```typescript
const client = new ChatClient({
  provider: 'openai',
  model: 'gpt-4',
  rateLimitConfig: {
    requestsPerMinute: 60,
    tokensPerMinute: 90000
  }
});
```
**Recommendations:**

- Set up budget alerts with your LLM provider
- Monitor usage and costs regularly
- Implement user-level rate limiting (sketched below)
- Set maximum token limits per request
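The `rateLimitConfig` above throttles the client as a whole; user-level limits have to live in your application layer. A minimal in-memory sketch of a fixed-window counter per user (illustrative names; a multi-instance deployment would need shared storage such as Redis):

```typescript
// Fixed-window, in-memory rate limiter keyed by user ID.
// Suitable for a single process only; use shared storage in production.
const WINDOW_MS = 60_000;
const MAX_REQUESTS_PER_WINDOW = 20;

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string): boolean {
  const now = Date.now();
  const window = windows.get(userId);
  if (!window || now - window.start >= WINDOW_MS) {
    windows.set(userId, { start: now, count: 1 }); // start a fresh window
    return true;
  }
  if (window.count >= MAX_REQUESTS_PER_WINDOW) {
    return false; // over the limit for this window
  }
  window.count += 1;
  return true;
}
```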
### Error Handling

**Don’t expose sensitive information in errors.**
❌ **Bad:**

```typescript
try {
  const response = await client.chat(message);
} catch (error) {
  // Don't expose API keys, internal details
  console.log(error); // May contain sensitive data
}
```
✅ **Good:**

```typescript
try {
  const response = await client.chat(message);
} catch (error) {
  // Log sanitized error information
  logger.error('Chat request failed', {
    provider: client.provider,
    timestamp: Date.now()
  });
  // Return generic error to user
  throw new Error('Service temporarily unavailable');
}
```
## Security Features

### Current Security Features

- ✅ **API Key Resolution:** Secure environment variable resolution
- ✅ **Rate Limiting:** Token bucket algorithm prevents abuse
- ✅ **Retry Logic:** Exponential backoff avoids the thundering-herd problem (an illustrative sketch follows this list)
- ✅ **Error Handling:** Structured errors without sensitive data exposure
- ✅ **TypeScript:** Type safety reduces runtime errors
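meloqui's retry logic is internal to the library; purely for illustration, exponential backoff with full jitter typically looks like the sketch below (this is not meloqui's actual implementation):

```typescript
// Generic exponential backoff with full jitter, shown for illustration only.
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxAttempts - 1) break;            // out of attempts
      const cap = Math.min(1000 * 2 ** attempt, 30_000); // exponential growth, capped
      await new Promise((resolve) => setTimeout(resolve, Math.random() * cap)); // full jitter
    }
  }
  throw lastError;
}
```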
### Planned Security Enhancements

## Known Security Considerations
### Third-Party Dependencies

meloqui relies on:

- **Vercel AI SDK** - For LLM provider integration
- **Provider SDKs** - Official OpenAI, Anthropic, and Google SDKs
**Actions:**

- We monitor these dependencies for security updates
- See THIRD-PARTY-NOTICES.md for the full dependency list
- Run `npm audit` regularly to check for vulnerabilities
### Data Transmission

**All data is transmitted to LLM providers:**
- Messages are sent to third-party LLM providers (OpenAI, Anthropic, Google)
- Review each provider’s privacy policy and data handling practices
- Consider data residency requirements for your use case
- Understand that providers may use data for model improvement (check provider policies)
### Conversation Storage

When using `FileStorage` or conversation history:

- Conversations are stored on disk by default
- Ensure proper file permissions on storage directories
- Consider encrypting stored conversations for sensitive data (see the sketch after this list)
- Implement data retention policies
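If stored conversations contain sensitive data, one option is to encrypt them at rest before writing them to disk. A minimal sketch using Node's built-in `crypto` module with AES-256-GCM; key management (where the 32-byte key comes from) is out of scope here and would need a secret store, not source code:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// Encrypt a serialized conversation with AES-256-GCM.
// The 32-byte key must come from a secure source (env var, secret manager, ...).
function encryptConversation(plaintext: string, key: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Store the IV and auth tag alongside the ciphertext so it can be decrypted later.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
}

function decryptConversation(data: Buffer, key: Buffer): string {
  const iv = data.subarray(0, 12);
  const tag = data.subarray(12, 28);
  const ciphertext = data.subarray(28);
  const decipher = createDecipheriv('aes-256-gcm', key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}
```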
## Compliance

### Regulatory Considerations
When using meloqui, consider:
- **GDPR** - EU data protection regulations
- **CCPA** - California consumer privacy
- **HIPAA** - Healthcare data (meloqui is not HIPAA-compliant by default)
- **SOC 2** - Security and availability controls
- **ISO 27001** - Information security standards
**Note:** meloqui is a library. Compliance is the responsibility of the application using it.
## Security Checklist for Production

Before deploying meloqui in production:

- [ ] API keys are stored in environment variables or a secret manager, never in source code
- [ ] `.env` files are excluded from version control
- [ ] User input is validated and sanitized before being sent to providers
- [ ] Rate limiting and maximum token limits are configured
- [ ] Error handling does not expose sensitive details to users
- [ ] Dependencies have been audited (`npm audit`) and are up to date
- [ ] Stored conversations have appropriate file permissions and retention policies
## Updates and Communication

**How we communicate security issues:**

- **GitHub Security Advisories** - For disclosed vulnerabilities
- **Release Notes** - Security fixes noted in CHANGELOG.md
- **Email** - Direct communication for critical issues (if a contact is provided)

**Stay informed:**
- Watch this repository for security updates
- Subscribe to release notifications
- Check CHANGELOG.md for security-related updates
For security-related questions or concerns:

- **Security vulnerabilities:** Use GitHub Security Advisories for private reporting
- **General security questions:** Open an issue with the `security` label
- **Email contact:** For urgent matters, contact the repository owner through their GitHub profile
**Last Updated:** 2026-01-01