Flowise Service API

The MOOD MNKY Flowise service provides chatflow management and AI workflow orchestration. This documentation covers the available endpoints, integration patterns, and best practices.

Base URL

https://mnky-mind-flowise.moodmnky.com

Available Endpoints

  • Chatflow Management
  • Prediction & Execution
  • Analytics & Telemetry
  • Service Health

Authentication

All requests to the Flowise service require authentication:
Authorization: Bearer your-api-key
To obtain API credentials:
  1. Access the developer portal
  2. Create a new API key with required permissions
  3. Follow our security guidelines for key management
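As one way to follow those guidelines, keep the key out of source code; a minimal sketch, assuming a Node.js environment and an illustrative FLOWISE_API_KEY environment variable:
// Read the API key from the environment instead of hard-coding it.
// FLOWISE_API_KEY is an illustrative name, not a service requirement.
const API_KEY = process.env.FLOWISE_API_KEY;
if (!API_KEY) {
  throw new Error('FLOWISE_API_KEY is not set');
}

// Headers reused by the request examples below.
const AUTH_HEADERS = {
  'Authorization': `Bearer ${API_KEY}`,
  'Content-Type': 'application/json'
};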

Request Format

All requests should use JSON:
Content-Type: application/json

Response Format

Successful responses follow this format:
{
  "success": true,
  "data": {
    // Response data
  }
}
Error responses:
{
  "success": false,
  "error": {
    "code": "error_code",
    "message": "Error description"
  }
}
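Because both shapes carry a success flag, responses can be unwrapped in one place; a minimal sketch, with handleResponse as a hypothetical helper name:
// Unwrap the { success, data } / { success, error } envelope shown above.
async function handleResponse(response) {
  const body = await response.json();
  if (!body.success) {
    // Surface the service's error code and message to the caller.
    throw new Error(`${body.error.code}: ${body.error.message}`);
  }
  return body.data;
}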

Streaming Responses

For streaming endpoints, use socket.io-client:
import socketIOClient from 'socket.io-client'

// Connect to the Flowise service over socket.io
const socket = socketIOClient("https://mnky-mind-flowise.moodmnky.com")

socket.on('connect', () => {
  console.log('Connected:', socket.id)
})

// Emitted once when the server begins streaming a response
socket.on('start', () => {
  console.log('Stream started')
})

// Emitted for each generated token
socket.on('token', (token) => {
  console.log('Token:', token)
})

// Emitted with any retrieved source documents
socket.on('sourceDocuments', (docs) => {
  console.log('Source documents:', docs)
})

// Emitted once when streaming completes
socket.on('end', () => {
  console.log('Stream ended')
})
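The socket only delivers events; the prediction itself is still started over HTTP. Continuing from the connection above, a sketch of tying the two together, assuming Flowise's socketIOClientId convention (verify the field name against your Flowise version):
const chatflowId = 'your-chatflow-id'

socket.on('connect', () => {
  fetch(`https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/${chatflowId}`, {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer your-api-key',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      question: 'Hello!',
      socketIOClientId: socket.id // tells the server which socket receives the stream
    })
  })
})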

Rate Limiting

The service implements tiered rate limits:
Tier        Chat Requests   Management Requests
Basic       60/min          300/min
Pro         300/min         1000/min
Enterprise  Custom          Custom
Rate limit headers:
  • X-RateLimit-Limit
  • X-RateLimit-Remaining
  • X-RateLimit-Reset
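Clients can read these headers to back off before hitting the limit; a minimal sketch, assuming X-RateLimit-Reset is a Unix timestamp in seconds (confirm the unit against live responses):
// Pause until the window resets when the remaining budget is nearly gone.
async function respectRateLimit(response) {
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const reset = Number(response.headers.get('X-RateLimit-Reset'));
  if (remaining <= 1) {
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}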

Analytics Integration

The service supports multiple analytics providers:
  • LangSmith
  • Langfuse
  • LLMonitor
Configure analytics in your requests:
{
  "question": "Hello!",
  "overrideConfig": {
    "analytics": {
      "langFuse": {
        "userId": "user123"
      }
    }
  }
}

Monitoring

Access Prometheus metrics at:
https://mnky-mind-flowise.moodmnky.com/metrics
Available metrics:
  • Request latencies
  • Chatflow execution stats
  • Memory usage
  • Error rates
  • Custom business metrics
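For a quick look outside a Prometheus scraper, the endpoint can also be read directly; a sketch that prints matching lines of the text exposition (the filter string is illustrative; inspect the full output for exact metric names):
// Fetch the Prometheus text format and print lines matching a substring.
async function printMetrics(filter) {
  const res = await fetch('https://mnky-mind-flowise.moodmnky.com/metrics');
  const text = await res.text();
  text.split('\n')
    .filter((line) => line.includes(filter))
    .forEach((line) => console.log(line));
}

printMetrics('error');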

Best Practices

  1. Chatflow Management
    • Version control your chatflows
    • Test in development environment
    • Use descriptive chatflow names
    • Document node configurations
  2. Production Usage
    • Implement proper error handling
    • Use streaming for long responses
    • Monitor response times
    • Set appropriate timeouts (see the sketch after this list)
  3. Analytics
    • Track user interactions
    • Monitor completion rates
    • Analyze error patterns
    • Measure response quality
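As one way to combine the error-handling and timeout items above, a sketch using AbortController (the 30-second budget is illustrative):
// Abort a prediction request that exceeds a time budget.
async function predictWithTimeout(chatflowId, question, timeoutMs = 30000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(
      `https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/${chatflowId}`,
      {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer your-api-key',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ question }),
        signal: controller.signal
      }
    );
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return await response.json();
  } finally {
    clearTimeout(timer);
  }
}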

Examples

Curl

# Create a chat prediction
curl -X POST https://mnky-mind-flowise.moodmnky.com/api/v1/prediction/{chatflowid} \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello!",
    "overrideConfig": {
      "analytics": {
        "langFuse": {
          "userId": "user123"
        }
      }
    }
  }'

Python

import requests

API_KEY = "your-api-key"
BASE_URL = "https://mnky-mind-flowise.moodmnky.com"

def chat_prediction(chatflow_id, question):
    """Send a question to a chatflow and return the parsed JSON response."""
    response = requests.post(
        f"{BASE_URL}/api/v1/prediction/{chatflow_id}",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "question": question
        }
    )
    response.raise_for_status()  # raise on HTTP errors per the best practices above
    return response.json()

JavaScript

const API_KEY = 'your-api-key';
const BASE_URL = 'https://mnky-mind-flowise.moodmnky.com';

async function chatPrediction(chatflowId, question) {
  const response = await fetch(
    `${BASE_URL}/api/v1/prediction/${chatflowId}`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        question
      })
    }
  );
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return await response.json();
}

Support