Flowise Service API
The MOOD MNKY Flowise service provides advanced chatflow management and AI workflow orchestration capabilities. This documentation covers all available endpoints, integration patterns, and best practices.
Base URL
Available Endpoints
Chatflow Management
Prediction & Execution
Analytics & Telemetry
Service Health
Authentication
All requests to the Flowise service require authentication:
- Access the developer portal
- Create a new API key with required permissions
- Follow our security guidelines for key management
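A minimal sketch of an authenticated request, assuming the service accepts the API key as a Bearer token; the base URL and the `/api/v1/chatflows` path are placeholders, not the documented routes:

```typescript
// Minimal authenticated request sketch.
// BASE_URL and the endpoint path below are illustrative placeholders.
const BASE_URL = "https://flowise.example.com";
const API_KEY = process.env.FLOWISE_API_KEY ?? "";

async function listChatflows(): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/api/v1/chatflows`, {
    headers: {
      Authorization: `Bearer ${API_KEY}`, // assumed Bearer-token auth
    },
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}
```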
Request Format
All requests should use JSON:
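For example, a prediction call might send a JSON body as in the sketch below; the `/api/v1/prediction/{chatflowId}` path follows the upstream Flowise convention and is an assumption here, and `BASE_URL`/`API_KEY` are reused from the authentication sketch above:

```typescript
// Sketch of a JSON prediction request; endpoint path and payload fields are assumptions.
async function runPrediction(chatflowId: string, question: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/api/v1/prediction/${chatflowId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ question }),
  });
  if (!res.ok) {
    throw new Error(`Prediction failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}
```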
Response Format
Successful responses follow this format:
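As a non-authoritative sketch, a Flowise-style prediction response typically carries the generated text alongside session identifiers; the field names below are assumptions rather than the documented contract:

```typescript
// Illustrative response shape only; confirm field names against the live API.
interface PredictionResponse {
  text: string;                // generated answer
  chatId?: string;             // conversation/session identifier
  chatMessageId?: string;      // id of the stored message
  sourceDocuments?: unknown[]; // retrieval citations, when the chatflow returns them
}
```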
Streaming Responses
For streaming endpoints, use socket.io-client:
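A minimal streaming sketch modeled on the upstream Flowise socket.io pattern: connect first, pass the socket id with the prediction request, and listen for token events. The event names ("start", "token", "end") and the `socketIOClientId` field are assumptions to verify against this service's streaming endpoints.

```typescript
import { io } from "socket.io-client";

// BASE_URL and API_KEY as in the authentication sketch; chatflowId is a placeholder.
const socket = io(BASE_URL);
const chatflowId = "<chatflow-id>";

socket.on("connect", () => {
  socket.on("start", () => console.log("stream started"));
  socket.on("token", (token: string) => process.stdout.write(token));
  socket.on("end", () => socket.disconnect());

  // Send the prediction request over HTTP; tokens are pushed back over the socket.
  fetch(`${BASE_URL}/api/v1/prediction/${chatflowId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      question: "Summarize today's orders",
      socketIOClientId: socket.id, // ties this HTTP request to the open socket
    }),
  }).catch((err) => console.error("prediction request failed", err));
});
```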
Rate Limiting
The service implements tiered rate limits:

| Tier | Chat Requests | Management Requests |
|---|---|---|
| Basic | 60/min | 300/min |
| Pro | 300/min | 1000/min |
| Enterprise | Custom | Custom |
Rate limit status is reported in the X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset response headers.
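A client can watch these headers and back off once the quota is exhausted. The sketch below assumes X-RateLimit-Reset is a Unix timestamp in seconds, which should be confirmed against the service:

```typescript
// Sketch: read rate-limit headers and pause until the reset time when quota runs out.
async function fetchWithRateLimitAwareness(url: string, init?: RequestInit): Promise<Response> {
  const res = await fetch(url, init);
  const remaining = Number(res.headers.get("X-RateLimit-Remaining") ?? "1");
  const reset = Number(res.headers.get("X-RateLimit-Reset") ?? "0"); // assumed epoch seconds
  if (res.status === 429 || remaining <= 0) {
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  return res;
}
```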
Analytics Integration
The service supports multiple analytics providers:
- Langsmith
- Langfuse
- LLMonitor
Monitoring
Access Prometheus metrics at the service's metrics endpoint. Available metrics include:
- Request latencies
- Chatflow execution stats
- Memory usage
- Error rates
- Custom business metrics
Best Practices
- Chatflow Management
  - Version control your chatflows
  - Test in a development environment
  - Use descriptive chatflow names
  - Document node configurations
- Production Usage
  - Implement proper error handling (see the sketch after this list)
  - Use streaming for long responses
  - Monitor response times
  - Set appropriate timeouts
- Analytics
  - Track user interactions
  - Monitor completion rates
  - Analyze error patterns
  - Measure response quality
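To illustrate the error handling and timeout points under Production Usage, here is a hedged sketch that wraps a prediction call with an AbortController-based timeout; the endpoint path and field names carry over the assumptions from the earlier examples:

```typescript
// Sketch: prediction call with a hard timeout and basic error handling.
async function predictWithTimeout(
  chatflowId: string,
  question: string,
  timeoutMs = 30_000,
): Promise<unknown> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${BASE_URL}/api/v1/prediction/${chatflowId}`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ question }),
      signal: controller.signal, // aborts the request if it exceeds timeoutMs
    });
    if (!res.ok) {
      throw new Error(`Prediction failed with status ${res.status}`);
    }
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}
```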