# Production Environment
This guide covers working with MOOD MNKY API services in the production environment.
## Production Servers

| Service | URL | Purpose |
|---|---|---|
| Ollama | https://ollama.moodmnky.com | AI model management and inference |
| Flowise | https://flowise.moodmnky.com | Visual workflow automation |
| Langchain | https://langchain.moodmnky.com | Chain-based AI operations |
| n8n | https://mnky-mind-n8n.moodmnky.com | Workflow automation and integration |
## Authentication

### Production API Keys

- **Obtaining Production Keys**
  1. Log in to the Developer Portal.
  2. Navigate to the API Keys section.
  3. Request production key access.
  4. Complete the verification process.
- **Key Format**

  ```
  prod_xxxxxxxxxxxxxxxxxxxx
  ```

- **Key Permissions**
  - Model management
  - Workflow creation
  - Chain execution
  - Analytics access
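Before configuring a client, it can help to sanity-check that a key matches the documented production format. This is a minimal sketch: `isProductionKey` is a hypothetical helper (not part of the SDK), and the 20-character alphanumeric body is an assumption inferred from the sample format above.

```typescript
// Hypothetical helper: checks a key against the documented "prod_" prefix.
// The 20-character alphanumeric body is an assumption based on the sample.
const PROD_KEY_PATTERN = /^prod_[A-Za-z0-9]{20}$/;

function isProductionKey(key: string): boolean {
  return PROD_KEY_PATTERN.test(key);
}
```

A check like this catches the common mistake of shipping a development key to production before any request is made.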
## Rate Limits

All limits are measured in requests per hour.

| Service | Basic Tier | Standard Tier | Premium Tier | Enterprise |
|---|---|---|---|---|
| Ollama | 100/hour | 1,000/hour | 10,000/hour | Custom |
| Flowise | 100/hour | 1,000/hour | 10,000/hour | Custom |
| Langchain | 50/hour | 500/hour | 5,000/hour | Custom |
| n8n | 100/hour | 1,000/hour | 10,000/hour | Custom |
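A simple way to stay under an hourly quota is to space requests evenly across the hour. The helper below is a client-side sketch only; `minIntervalMs` and the pacing policy are illustrative, not part of the MOOD MNKY API.

```typescript
// Minimum spacing between requests to stay under an hourly quota.
// E.g. the Basic tier's 100 requests/hour works out to one request
// every 36 seconds.
function minIntervalMs(requestsPerHour: number): number {
  return Math.ceil(3_600_000 / requestsPerHour);
}
```

For the Langchain Basic tier (50/hour), this yields one request every 72 seconds.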
## Production Features

### High Availability

- Load-balanced endpoints
- Automatic failover
- Geographic distribution
- 99.9% uptime SLA
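As a quick sanity check on what the 99.9% SLA permits, the arithmetic can be sketched as follows (the helper is illustrative, not an API):

```typescript
// Maximum downtime consistent with an uptime SLA over a given window.
// 99.9% over a 30-day month (720 hours) allows roughly 43.2 minutes.
function maxDowntimeMinutes(slaPercent: number, windowHours: number): number {
  return (1 - slaPercent / 100) * windowHours * 60;
}
```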
### Security

- **SSL/TLS Encryption**
  - All endpoints use HTTPS
  - TLS 1.3 supported
  - Regular certificate rotation
- **Access Control**
  - IP whitelisting available
  - Role-based access control
  - Audit logging enabled
- **Data Protection**
  - Data encrypted at rest
  - Secure key storage
  - Regular security audits
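Since all endpoints are HTTPS-only, a client can guard against accidentally sending a production key over plain HTTP. A minimal sketch using the endpoints from the Production Servers table (the guard itself is illustrative):

```typescript
// Endpoints from the Production Servers table.
const ENDPOINTS = [
  'https://ollama.moodmnky.com',
  'https://flowise.moodmnky.com',
  'https://langchain.moodmnky.com',
  'https://mnky-mind-n8n.moodmnky.com',
];

// Refuse to proceed if any configured URL is not HTTPS.
function allHttps(urls: string[]): boolean {
  return urls.every(u => new URL(u).protocol === 'https:');
}
```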
### Monitoring

- **Service Health**
- **Usage Analytics**

  ```typescript
  interface UsageMetrics {
    requests: number;
    success_rate: number;
    average_latency: number;
    error_rate: number;
  }
  ```

- **Performance Monitoring**
  - Request latency tracking
  - Error rate monitoring
  - Resource utilization
  - Capacity planning
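The rate fields in `UsageMetrics` can be derived from raw request counts. A minimal sketch; the counter shape here is hypothetical and not an API response type:

```typescript
// Derive success_rate and error_rate (as fractions) from raw counts.
interface RequestCounts {
  total: number;
  failed: number;
}

function toRates(counts: RequestCounts): { success_rate: number; error_rate: number } {
  if (counts.total === 0) return { success_rate: 0, error_rate: 0 };
  const error_rate = counts.failed / counts.total;
  return { success_rate: 1 - error_rate, error_rate };
}
```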
## Integration Examples

### SDK Integration

```typescript
import { MoodMnkyClient } from '@moodmnky/sdk';

const client = new MoodMnkyClient({
  environment: 'production',
  apiKey: 'prod_your_api_key',
  options: {
    timeout: 30000, // request timeout in milliseconds
    retries: 3,
    backoff: {
      initial: 1000, // first retry delay in milliseconds
      max: 10000,
      factor: 2
    }
  }
});

// Small helper so the rate-limit branch below can pause before retrying
const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

// Error handling
try {
  const result = await client.ollama.generate({
    prompt: 'Hello, world!'
  });
} catch (error) {
  if (error.status === 429) {
    // Rate limited: wait for the server-suggested interval
    await sleep(error.retryAfter * 1000);
  } else {
    // Handle other errors
    console.error('API Error:', error);
  }
}
```
### HTTP Requests

```bash
# Ollama API
curl -X POST "https://ollama.moodmnky.com/api/generate" \
  -H "x-api-key: prod_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "prompt": "Hello, world!"
  }'

# Flowise API
curl -X POST "https://flowise.moodmnky.com/api/v1/prediction/flow_id" \
  -H "x-api-key: prod_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "How can I help?"
  }'

# Langchain API
curl -X POST "https://langchain.moodmnky.com/api/v1/chains/execute" \
  -H "x-api-key: prod_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "chain_id": "chain_xyz",
    "input": {"query": "What is AI?"}
  }'

# n8n API
curl -X POST "https://mnky-mind-n8n.moodmnky.com/api/v1/workflows/trigger" \
  -H "x-api-key: prod_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "workflow_id": "workflow_abc",
    "data": {"key": "value"}
  }'
```
## Best Practices

### Error Handling

- **Implement Retry Logic**

  ```typescript
  class RetryHandler {
    constructor(private maxRetries: number = 3) {}

    async execute<T>(operation: () => Promise<T>): Promise<T> {
      let lastError;
      for (let i = 0; i < this.maxRetries; i++) {
        try {
          return await operation();
        } catch (error) {
          lastError = error;
          if (!this.isRetryable(error)) throw error;
          await this.wait(this.getDelay(i));
        }
      }
      throw lastError;
    }

    // Retry only on rate limiting (429) and server errors (5xx)
    private isRetryable(error: any): boolean {
      return error.status === 429 || error.status >= 500;
    }

    // Exponential backoff: 1s, 2s, 4s, ... capped at 10s
    private getDelay(attempt: number): number {
      return Math.min(1000 * Math.pow(2, attempt), 10000);
    }

    private wait(ms: number): Promise<void> {
      return new Promise(resolve => setTimeout(resolve, ms));
    }
  }
  ```
- **Rate Limit Handling**

  ```typescript
  class RateLimitHandler {
    private limits: Map<string, number> = new Map();

    // Record the rate-limit state from the response headers
    updateLimits(response: Response): void {
      const remaining = parseInt(response.headers.get('x-ratelimit-remaining') || '0');
      const reset = parseInt(response.headers.get('x-ratelimit-reset') || '0');
      this.limits.set('remaining', remaining);
      this.limits.set('reset', reset);
    }

    // Sleep until the reset time once the quota is exhausted
    async waitIfNeeded(): Promise<void> {
      const remaining = this.limits.get('remaining');
      const reset = this.limits.get('reset');
      if (remaining === 0 && reset) {
        const now = Math.floor(Date.now() / 1000);
        const waitTime = Math.max(0, reset - now);
        await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
      }
    }
  }
  ```
- **Request Batching**

  ```typescript
  class RequestBatcher<T> {
    private queue: T[] = [];
    private processing = false;

    async add(item: T): Promise<void> {
      this.queue.push(item);
      // Flush once a full batch of 10 has accumulated.
      // Note: items in a partial batch wait for further arrivals; a
      // production implementation would also flush on a timer.
      if (this.queue.length >= 10 && !this.processing) {
        await this.processQueue();
      }
    }

    private async processQueue(): Promise<void> {
      this.processing = true;
      while (this.queue.length > 0) {
        const batch = this.queue.splice(0, 10);
        await this.processBatch(batch);
      }
      this.processing = false;
    }

    private async processBatch(items: T[]): Promise<void> {
      // Implementation
    }
  }
  ```
- **Caching Strategy**

  ```typescript
  class APICache {
    private cache = new Map<string, {
      value: any;
      expires: number;
    }>();

    // ttl is in seconds (default: one hour)
    set(key: string, value: any, ttl: number = 3600): void {
      this.cache.set(key, {
        value,
        expires: Date.now() + ttl * 1000
      });
    }

    // Returns null for missing or expired entries
    get(key: string): any {
      const item = this.cache.get(key);
      if (!item) return null;
      if (Date.now() > item.expires) {
        this.cache.delete(key);
        return null;
      }
      return item.value;
    }
  }
  ```
## Support & Resources

### Documentation

### Support Channels