Chains

Chains in Langchain represent sequences of operations that combine LLMs, tools, memory, and other components into cohesive workflows. They allow you to build complex reasoning systems, agents, and multi-step processes with reusable components.

Overview

Chains connect multiple components together to solve specific tasks. A chain typically consists of:
  • One or more language models (LLMs)
  • Optional tools or APIs the chain can use
  • Memory systems for persistence
  • Specific prompt templates and logic
The Chains API allows you to:
  • Create predefined or custom chains
  • Execute chains with various inputs
  • Manage chain configurations
  • Track chain usage and performance
  • Connect chains to memories and other components

Chain Types

Langchain offers several pre-built chain types to address common use cases:
| Chain Type | Description | Use Case |
| --- | --- | --- |
| llm | Simple chain that passes input to an LLM | Question answering, text generation |
| sequential | Executes multiple steps in sequence | Multi-stage processing |
| router | Routes inputs to different sub-chains | Task classification and delegation |
| retrieval | Retrieves relevant documents before generating responses | Knowledge base Q&A |
| conversation | Maintains conversation history and context | Chat applications |
| agent | Uses an LLM to determine which tools to use | Dynamic problem-solving |
| summarization | Specialized for text summarization | Content condensation |
| extraction | Extracts structured data from text | Parsing information into structured format |
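
The API examples below focus on the llm type. Configs for the other types follow the same general shape but add type-specific fields. As an illustration, a sequential chain might be configured with an ordered list of steps; note that the exact schema for non-llm types is not documented above, so the "steps" field here is an assumption, not a confirmed part of the API:

```
{
  "name": "Summarize Then Translate",
  "type": "sequential",
  "config": {
    "steps": [
      {
        "prompt": "Summarize the following text: {text}",
        "outputKey": "summary"
      },
      {
        "prompt": "Translate this summary into French: {summary}",
        "outputKey": "translation"
      }
    ]
  }
}
```

Each step's outputKey becomes available as an input variable to later steps, mirroring how {question} and {answer} are wired in the llm examples below.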

API Reference

Create Chain

POST https://langchain.moodmnky.com/api/chains
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
Request body:
{
  "name": "My Custom Chain",
  "description": "A chain for answering product questions",
  "type": "llm",
  "config": {
    "llm": {
      "provider": "openai",
      "model": "gpt-4",
      "temperature": 0.7
    },
    "prompt": "Answer the following question about our products: {question}",
    "outputKey": "answer"
  }
}
Response:
{
  "chainId": "chn_01h9f5zj3q8r6k2y7t1x",
  "name": "My Custom Chain",
  "description": "A chain for answering product questions",
  "type": "llm",
  "created": "2023-10-15T14:30:00Z",
  "lastUsed": "2023-10-15T14:30:00Z",
  "config": {
    "llm": {
      "provider": "openai",
      "model": "gpt-4",
      "temperature": 0.7
    },
    "prompt": "Answer the following question about our products: {question}",
    "outputKey": "answer"
  }
}

Get Chain

GET https://langchain.moodmnky.com/api/chains/{chainId}
Authorization: Bearer YOUR_API_KEY
Response:
{
  "chainId": "chn_01h9f5zj3q8r6k2y7t1x",
  "name": "My Custom Chain",
  "description": "A chain for answering product questions",
  "type": "llm",
  "created": "2023-10-15T14:30:00Z",
  "lastUsed": "2023-10-15T14:45:12Z",
  "config": {
    "llm": {
      "provider": "openai",
      "model": "gpt-4",
      "temperature": 0.7
    },
    "prompt": "Answer the following question about our products: {question}",
    "outputKey": "answer"
  }
}

List Chains

GET https://langchain.moodmnky.com/api/chains
Authorization: Bearer YOUR_API_KEY
Response:
{
  "chains": [
    {
      "chainId": "chn_01h9f5zj3q8r6k2y7t1x",
      "name": "My Custom Chain",
      "description": "A chain for answering product questions",
      "type": "llm",
      "created": "2023-10-15T14:30:00Z",
      "lastUsed": "2023-10-15T14:45:12Z"
    },
    {
      "chainId": "chn_02h9g6ak4r9s7l3z8u2y",
      "name": "Customer Support Chain",
      "description": "Handles customer support inquiries",
      "type": "conversation",
      "created": "2023-10-14T09:15:23Z",
      "lastUsed": "2023-10-15T11:22:45Z"
    }
  ],
  "pagination": {
    "total": 12,
    "limit": 10,
    "offset": 0,
    "nextOffset": 10
  }
}
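
The pagination object suggests you can page through results with limit and offset, advancing by nextOffset until no further page exists. The query parameter names and the behavior of nextOffset on the last page are assumptions inferred from the response shape above, not confirmed by the docs. A sketch, with the page fetcher injected so the paging logic is testable:

```javascript
// Walk every page of GET /api/chains and collect all results.
// `fetchPage` is injected so this logic can run against a stub;
// a real fetcher is sketched below.
async function listAllChains(fetchPage, limit = 10) {
  const all = [];
  let offset = 0;
  while (offset !== null && offset !== undefined) {
    const page = await fetchPage(limit, offset);
    all.push(...page.chains);
    // Assumed: nextOffset is null (or absent) on the final page.
    offset = page.pagination.nextOffset;
  }
  return all;
}

// A real page fetcher (query parameter names are assumed):
// const fetchPage = (limit, offset) =>
//   fetch(`https://langchain.moodmnky.com/api/chains?limit=${limit}&offset=${offset}`, {
//     headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
//   }).then(res => res.json());
```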

Run Chain

POST https://langchain.moodmnky.com/api/chains/{chainId}/run
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
Request body:
{
  "inputs": {
    "question": "What sizes does the premium t-shirt come in?"
  },
  "stream": false
}
Response:
{
  "output": {
    "answer": "Our premium t-shirt comes in five sizes: XS, S, M, L, and XL. Each size has slightly different measurements, which you can find in our size guide on the product page."
  },
  "executionMetadata": {
    "startTime": "2023-10-15T14:47:32Z",
    "endTime": "2023-10-15T14:47:34Z",
    "totalTokens": 128,
    "inputTokens": 32,
    "outputTokens": 96,
    "cost": 0.00256
  }
}
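
The executionMetadata block returned by each run is what makes usage and cost tracking possible. A minimal sketch that aggregates metadata from several runs (the field names match the response above; anything beyond summing them is up to you):

```javascript
// Aggregate executionMetadata objects collected from /run responses
// into a simple usage summary for cost monitoring.
function summarizeUsage(metadataList) {
  return metadataList.reduce(
    (acc, m) => ({
      runs: acc.runs + 1,
      totalTokens: acc.totalTokens + m.totalTokens,
      cost: acc.cost + m.cost
    }),
    { runs: 0, totalTokens: 0, cost: 0 }
  );
}
```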

Update Chain

PATCH https://langchain.moodmnky.com/api/chains/{chainId}
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
Request body:
{
  "name": "Updated Chain Name",
  "config": {
    "llm": {
      "temperature": 0.5
    }
  }
}
Response:
{
  "chainId": "chn_01h9f5zj3q8r6k2y7t1x",
  "name": "Updated Chain Name",
  "description": "A chain for answering product questions",
  "type": "llm",
  "created": "2023-10-15T14:30:00Z",
  "lastUsed": "2023-10-15T14:45:12Z",
  "config": {
    "llm": {
      "provider": "openai",
      "model": "gpt-4",
      "temperature": 0.5
    },
    "prompt": "Answer the following question about our products: {question}",
    "outputKey": "answer"
  }
}

Delete Chain

DELETE https://langchain.moodmnky.com/api/chains/{chainId}
Authorization: Bearer YOUR_API_KEY
Response:
{
  "success": true,
  "message": "Chain deleted successfully"
}

Connect Memory to Chain

POST https://langchain.moodmnky.com/api/chains/{chainId}/memories
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY
Request body:
{
  "memoryId": "mem_04k2l8m6n4p5q3r1s9t",
  "key": "chat_history"
}
Response:
{
  "chainId": "chn_01h9f5zj3q8r6k2y7t1x",
  "memoryId": "mem_04k2l8m6n4p5q3r1s9t",
  "key": "chat_history",
  "connected": "2023-10-15T15:10:22Z"
}

Implementation Examples

Basic LLM Chain

// Creating a simple LLM chain for question answering
async function createBasicLLMChain() {
  const response = await fetch('https://langchain.moodmnky.com/api/chains', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'Basic Q&A Chain',
      type: 'llm',
      config: {
        llm: {
          provider: 'openai',
          model: 'gpt-3.5-turbo',
          temperature: 0.7
        },
        prompt: 'Answer the following question concisely and accurately: {question}',
        outputKey: 'answer'
      }
    })
  });
  
  return await response.json();
}

// Using the LLM chain
async function askQuestion(chainId, question) {
  const response = await fetch(`https://langchain.moodmnky.com/api/chains/${chainId}/run`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      inputs: {
        question: question
      }
    })
  });
  
  const result = await response.json();
  return result.output.answer;
}

// Example usage
async function example() {
  const chain = await createBasicLLMChain();
  console.log('Created chain:', chain.chainId);
  
  const answer = await askQuestion(chain.chainId, 'What is the capital of France?');
  console.log('Answer:', answer);
}

Retrieval Chain

// Creating a retrieval-based QA chain
async function createRetrievalChain() {
  // First, create a vector store
  const vectorStoreResponse = await fetch('https://langchain.moodmnky.com/api/vectorstores', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'Product Knowledge Base',
      description: 'Vector store for product documentation',
      embeddingModel: 'openai/text-embedding-ada-002'
    })
  });
  
  const vectorStore = await vectorStoreResponse.json();
  
  // Upload documents to the vector store
  // ... (document upload code omitted for brevity)
  
  // Create the retrieval chain
  const chainResponse = await fetch('https://langchain.moodmnky.com/api/chains', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'Product QA Chain',
      type: 'retrieval',
      config: {
        llm: {
          provider: 'openai',
          model: 'gpt-4',
          temperature: 0.3
        },
        retriever: {
          vectorStoreId: vectorStore.vectorStoreId,
          k: 3,
          searchType: 'similarity'
        },
        prompt: `Based on the following context, please answer the question.

Context: {context}

Question: {question}`,
        outputKey: 'answer'
      }
    })
  });
  
  return await chainResponse.json();
}

// Using the retrieval chain
async function askProductQuestion(chainId, question) {
  const response = await fetch(`https://langchain.moodmnky.com/api/chains/${chainId}/run`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      inputs: {
        question: question
      }
    })
  });
  
  const result = await response.json();
  return {
    answer: result.output.answer,
    metadata: result.executionMetadata
  };
}

Conversational Chain with Memory

// Creating a conversational chain with memory
async function createConversationalChain() {
  // First, create a conversation memory
  const memoryResponse = await fetch('https://langchain.moodmnky.com/api/memories', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'Customer Chat Memory',
      type: 'conversation',
      config: {
        maxMessages: 10,
        returnMessages: true
      }
    })
  });
  
  const memory = await memoryResponse.json();
  
  // Create the conversation chain
  const chainResponse = await fetch('https://langchain.moodmnky.com/api/chains', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'Customer Support Assistant',
      type: 'conversation',
      config: {
        llm: {
          provider: 'openai',
          model: 'gpt-4',
          temperature: 0.7
        },
        prompt: `You are a customer support agent for MOOD MNKY, a company that sells premium self-care products.
Your goal is to be helpful, friendly, and knowledgeable about our products.

Current conversation:
{chat_history}

Customer: {input}
Agent:`,
        outputKey: 'response'
      }
    })
  });
  
  const chain = await chainResponse.json();
  
  // Connect the memory to the chain
  await fetch(`https://langchain.moodmnky.com/api/chains/${chain.chainId}/memories`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      memoryId: memory.memoryId,
      key: 'chat_history'
    })
  });
  
  return {
    chain,
    memory
  };
}

// Chatting with the conversational chain
async function chat(chainId, message) {
  const response = await fetch(`https://langchain.moodmnky.com/api/chains/${chainId}/run`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      inputs: {
        input: message
      }
    })
  });
  
  const result = await response.json();
  return result.output.response;
}

// Example conversation
async function conversationExample() {
  const { chain, memory } = await createConversationalChain();
  console.log('Created conversational chain with memory');
  
  const responses = [];
  
  // First message
  responses.push(await chat(chain.chainId, "Hi! I'm interested in your premium candles. Can you tell me about the different scents available?"));
  
  // Follow-up question
  responses.push(await chat(chain.chainId, "The Lavender Dream sounds nice. How long does it typically last?"));
  
  // Another follow-up
  responses.push(await chat(chain.chainId, "Great! And does it come in different sizes?"));
  
  console.log('Conversation:', responses);
  
  // Check memory contents
  const memoryContents = await fetch(`https://langchain.moodmnky.com/api/memories/${memory.memoryId}/contents`, {
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY'
    }
  }).then(res => res.json());
  
  console.log('Memory contents:', memoryContents);
}

Agent Chain with Tools

// Creating an agent chain with tools
async function createAgentChain() {
  const response = await fetch('https://langchain.moodmnky.com/api/chains', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      name: 'E-commerce Agent',
      type: 'agent',
      config: {
        llm: {
          provider: 'openai',
          model: 'gpt-4',
          temperature: 0.2
        },
        tools: [
          {
            name: 'search_products',
            description: 'Search for products in the catalog',
            parameters: {
              type: 'object',
              properties: {
                query: {
                  type: 'string',
                  description: 'Search query for products'
                },
                category: {
                  type: 'string',
                  description: 'Optional category to filter by'
                }
              },
              required: ['query']
            }
          },
          {
            name: 'get_product_details',
            description: 'Get detailed information about a specific product',
            parameters: {
              type: 'object',
              properties: {
                productId: {
                  type: 'string',
                  description: 'The ID of the product to get details for'
                }
              },
              required: ['productId']
            }
          },
          {
            name: 'check_inventory',
            description: 'Check if a product is in stock',
            parameters: {
              type: 'object',
              properties: {
                productId: {
                  type: 'string',
                  description: 'The ID of the product to check inventory for'
                }
              },
              required: ['productId']
            }
          }
        ],
        agentType: 'react',
        maxIterations: 5,
        outputKey: 'output'
      }
    })
  });
  
  return await response.json();
}

// Using the agent chain
async function askAgent(chainId, question) {
  const response = await fetch(`https://langchain.moodmnky.com/api/chains/${chainId}/run`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      inputs: {
        input: question
      },
      includeAgentSteps: true  // Return the agent's thinking and tool usage
    })
  });
  
  return await response.json();
}

// Example of using the agent
async function agentExample() {
  const agent = await createAgentChain();
  console.log('Created agent chain:', agent.chainId);
  
  const result = await askAgent(agent.chainId, "I'm looking for a lavender-scented candle that's currently in stock. Can you help me find one?");
  
  console.log('Final answer:', result.output.output);
  console.log('Agent steps:', result.agentSteps);
}

Best Practices

Chain Design

  • Start simple and iterate: Begin with basic chain types before moving to more complex chains
  • Modularize chains: Build smaller, focused chains that can be composed together
  • Choose appropriate chain types: Select the chain type that best matches your use case
  • Balance prompt engineering and chain complexity: Sometimes a better prompt is more effective than a complex chain
  • Plan for chain composition: Design chains that can be connected to create more complex workflows

Performance Optimization

  • Use streaming for responsive user experiences with longer outputs
  • Monitor token usage to manage costs and optimize performance
  • Cache results when appropriate for frequently asked questions
  • Set appropriate temperature values based on the task (lower for factual tasks, higher for creative ones)
  • Optimize retrieval parameters like chunk size and overlap for retrieval chains
  • Set reasonable maximum tokens to prevent unnecessary computation
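
To make the caching point concrete, here is a minimal sketch of an in-memory, TTL-based cache wrapped around a chain-running function. The runner signature and TTL are illustrative; a production deployment would more likely use a shared cache such as Redis so results survive restarts and are shared across instances:

```javascript
// Wrap a chain runner with an in-memory cache keyed by chainId plus
// serialized inputs. Entries expire after ttlMs milliseconds.
function cachedRunner(runner, ttlMs = 5 * 60 * 1000) {
  const cache = new Map();
  return async (chainId, inputs) => {
    const key = `${chainId}:${JSON.stringify(inputs)}`;
    const hit = cache.get(key);
    if (hit && Date.now() - hit.at < ttlMs) {
      return hit.value; // cache hit: skip the API call entirely
    }
    const value = await runner(chainId, inputs);
    cache.set(key, { value, at: Date.now() });
    return value;
  };
}
```

This suits frequently repeated, deterministic queries (FAQ-style questions at low temperature); avoid caching conversational chains, where identical inputs legitimately produce different outputs as memory accumulates.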

Error Handling

  • Implement comprehensive error handling:
    async function runChainWithErrorHandling(chainId, inputs, attempt = 0) {
      const maxRetries = 3;
      try {
        const response = await fetch(`https://langchain.moodmnky.com/api/chains/${chainId}/run`, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer YOUR_API_KEY'
          },
          body: JSON.stringify({ inputs })
        });
        
        if (!response.ok) {
          const errorData = await response.json();
          if (response.status === 429 && attempt < maxRetries) {
            // Handle rate limiting with exponential backoff and a retry cap
            const delay = 1000 * 2 ** attempt;
            console.log(`Rate limited. Retrying in ${delay}ms...`);
            await new Promise(resolve => setTimeout(resolve, delay));
            return runChainWithErrorHandling(chainId, inputs, attempt + 1);
          } else if (response.status === 400) {
            // Handle validation errors
            console.error('Validation error:', errorData.message);
            throw new Error(`Validation error: ${errorData.message}`);
          } else {
            // Handle other errors, including exhausted retries
            console.error('Chain execution error:', errorData);
            throw new Error(`Chain execution failed: ${errorData.message}`);
          }
        }
        
        return await response.json();
      } catch (error) {
        console.error('Error executing chain:', error);
        // Implement appropriate fallback behavior
        return {
          error: true,
          message: error.message,
          fallbackOutput: 'I apologize, but I encountered an error processing your request.'
        };
      }
    }
    
  • Set up monitoring and alerts for chain failures
  • Implement retry logic with exponential backoff for transient errors
  • Have fallback options for critical chains
  • Validate inputs before sending to avoid preventable errors

Security and Compliance

  • Never embed API keys in client-side code
  • Validate and sanitize user inputs before passing to chains
  • Implement rate limiting to prevent abuse
  • Monitor for PII in inputs and outputs
  • Set up audit logging for chain executions in sensitive contexts
  • Consider content filtering for user-facing applications
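
A minimal sketch of server-side input validation before a chain call. The length limit and the characters stripped are illustrative choices, not API requirements; tailor them to your application:

```javascript
// Validate and lightly sanitize user input before passing it to a chain.
// Throws on invalid input; returns the cleaned string otherwise.
function validateChainInput(text, maxLength = 2000) {
  if (typeof text !== 'string') {
    throw new Error('Input must be a string');
  }
  const trimmed = text.trim();
  if (trimmed.length === 0) {
    throw new Error('Input must not be empty');
  }
  if (trimmed.length > maxLength) {
    throw new Error(`Input exceeds ${maxLength} characters`);
  }
  // Strip control characters that have no place in a prompt.
  return trimmed.replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, '');
}
```

Validation like this catches malformed requests before they cost tokens; it is a complement to, not a substitute for, content filtering and PII monitoring.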

Support & Resources

For additional support: