Documentation Index
Fetch the complete documentation index at: https://docs.moodmnky.com/llms.txt
Use this file to discover all available pages before exploring further.
Flowise Service API
The MOOD MNKY Flowise service provides advanced chatflow management and AI workflow orchestration capabilities. This documentation covers all available endpoints, integration patterns, and best practices.
Base URL
https://mnky-mind-flowise.moodmnky.com
Available Endpoints
- Chatflow Management
- Prediction & Execution
- Analytics & Telemetry
- Service Health
Authentication
All requests to the Flowise service require authentication:
Authorization: Bearer your-api-key
To obtain API credentials:
- Access the developer portal
- Create a new API key with required permissions
- Follow our security guidelines for key management
Request & Response Format
All requests should use JSON:
Content-Type: application/json
Successful responses follow this format:
{
  "success": true,
  "data": {
    // Response data
  }
}
Error responses:
{
  "success": false,
  "error": {
    "code": "error_code",
    "message": "Error description"
  }
}
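A minimal sketch of unwrapping this envelope on the client, assuming a fetch-style Response whose body matches the formats above (the helper name is illustrative):

// Sketch: unwrap the { success, data } / { success, error } envelope shown above
async function unwrapResponse(response) {
  const body = await response.json()
  if (!body.success) {
    // Surface the service-provided error code and message to the caller
    throw new Error(`${body.error.code}: ${body.error.message}`)
  }
  return body.data
}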
Streaming Responses
For streaming endpoints, use socket.io-client:
import socketIOClient from 'socket.io-client'

const socket = socketIOClient("https://mnky-mind-flowise.moodmnky.com")

socket.on('connect', () => {
  console.log('Connected:', socket.id)
})

socket.on('start', () => {
  console.log('Stream started')
})

socket.on('token', (token) => {
  console.log('Token:', token)
})

socket.on('sourceDocuments', (docs) => {
  console.log('Source documents:', docs)
})

socket.on('end', () => {
  console.log('Stream ended')
})
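To route these events to a specific prediction, the connected socket's id is typically sent along with the prediction request. The socketIOClientId field in the sketch below follows Flowise's socket.io streaming convention and should be verified against your deployed version:

// Sketch: associate the open socket with a prediction so tokens stream back
// over the handlers above. Assumes the Flowise convention of passing the
// client id as socketIOClientId (verify for your deployment).
socket.on('connect', async () => {
  await fetch('https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/{chatflowid}', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer your-api-key',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      question: 'Hello!',
      socketIOClientId: socket.id
    })
  })
})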
Rate Limiting
The service implements tiered rate limits:
| Tier | Chat Requests | Management Requests |
|---|---|---|
| Basic | 60/min | 300/min |
| Pro | 300/min | 1000/min |
| Enterprise | Custom | Custom |
Rate limit headers:
- X-RateLimit-Limit: maximum requests allowed in the current window
- X-RateLimit-Remaining: requests remaining in the current window
- X-RateLimit-Reset: when the current window resets
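As a sketch of how a client might use these headers, the helper below waits for the window to reset before retrying once. It assumes X-RateLimit-Reset is a Unix timestamp in seconds, which should be confirmed for your tier:

// Sketch: retry once after the rate-limit window resets.
// Assumes X-RateLimit-Reset is a Unix timestamp in seconds (verify for your deployment).
async function requestWithBackoff(url, options) {
  const response = await fetch(url, options)
  if (response.status !== 429) return response
  const reset = Number(response.headers.get('X-RateLimit-Reset'))
  const waitMs = Math.max(0, reset * 1000 - Date.now())
  await new Promise((resolve) => setTimeout(resolve, waitMs))
  return fetch(url, options)
}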
Analytics Integration
The service supports multiple analytics providers:
- LangSmith
- Langfuse
- LLMonitor
Configure analytics in your requests:
{
  "question": "Hello!",
  "overrideConfig": {
    "analytics": {
      "langFuse": {
        "userId": "user123"
      }
    }
  }
}
Monitoring
Access Prometheus metrics at:
https://mnky-mind-flowise.moodmnky.com/metrics
Available metrics:
- Request latencies
- Chatflow execution stats
- Memory usage
- Error rates
- Custom business metrics
Best Practices
- Chatflow Management
  - Version control your chatflows
  - Test in a development environment
  - Use descriptive chatflow names
  - Document node configurations
- Production Usage
  - Implement proper error handling
  - Use streaming for long responses
  - Monitor response times
  - Set appropriate timeouts (a combined sketch follows this list)
- Analytics
  - Track user interactions
  - Monitor completion rates
  - Analyze error patterns
  - Measure response quality
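The error-handling and timeout practices above can be combined in a small wrapper. This is a minimal sketch using the standard fetch API with AbortController; the 30-second limit is an illustrative default, not a service requirement:

// Sketch: per-request timeout plus basic error handling for prediction calls
async function predictWithTimeout(chatflowId, question, timeoutMs = 30000) {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    const response = await fetch(
      `https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/${chatflowId}`,
      {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer your-api-key',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ question }),
        signal: controller.signal
      }
    )
    const body = await response.json()
    if (!body.success) {
      throw new Error(`${body.error.code}: ${body.error.message}`)
    }
    return body.data
  } finally {
    clearTimeout(timer)
  }
}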
Examples
Curl
# Create a chat prediction
curl -X POST https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/{chatflowid} \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello!",
    "overrideConfig": {
      "analytics": {
        "langFuse": {
          "userId": "user123"
        }
      }
    }
  }'
Python
import requests

API_KEY = "your-api-key"
BASE_URL = "https://mnky-mind-flowise.moodmnky.com"

def chat_prediction(chatflow_id, question):
    response = requests.post(
        f"{BASE_URL}/api/v1/prediction/{chatflow_id}",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "question": question
        }
    )
    return response.json()
JavaScript
const API_KEY = 'your-api-key';
const BASE_URL = 'https://mnky-mind-flowise.moodmnky.com';

async function chatPrediction(chatflowId, question) {
  const response = await fetch(
    `${BASE_URL}/api/v1/prediction/${chatflowId}`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        question
      })
    }
  );
  return await response.json();
}
Support