Documentation Index
Fetch the complete documentation index at: https://docs.moodmnky.com/llms.txt
Use this file to discover all available pages before exploring further.
# Memory Systems
Memory in LangChain allows your applications to maintain context across interactions, enabling more natural conversations and coherent responses over time. The Memory API provides various ways to store, retrieve, and manage conversation history and contextual information.

## Overview

LangChain's memory systems serve several important purposes:

- Maintaining conversation history between users and AI assistants
- Storing key-value information for later retrieval
- Persisting important context across multiple interactions
- Enabling summarization of past interactions
- Supporting personalization based on user history
## Memory Types

LangChain offers several memory types to address different use cases:

| Memory Type | Description | Best For |
|---|---|---|
| `conversation` | Stores complete conversation history | Chat applications |
| `buffer` | Maintains a simple list of recent messages | Limited-context applications |
| `summary` | Keeps a compressed summary of conversation history | Long-running conversations |
| `token_buffer` | Manages conversation history within token limits | Token-sensitive applications |
| `entity` | Tracks and stores entity information | Entity-focused conversations |
| `vector` | Stores embeddings of the conversation for semantic retrieval | Knowledge-intensive applications |
| `combined` | Combines multiple memory types | Complex applications with varied needs |
## API Reference

- Create Memory
- Get Memory
- List Memories
- Get Memory Contents
- Add Memory
- Clear Memory
- Delete Memory
- Update Memory Configuration
## Implementation Examples
### Conversation Memory
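A minimal sketch of the conversation memory pattern: the full message history is stored and replayed into every prompt. The class and method names here are illustrative, not the actual Memory API.

```python
# Illustrative sketch of conversation memory: the complete history is kept
# and returned for every prompt. Names are hypothetical, not the Memory API.
class ConversationMemory:
    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def load(self):
        # Return the entire history for inclusion in the next prompt.
        return list(self.messages)

memory = ConversationMemory()
memory.add("user", "My name is Alice.")
memory.add("assistant", "Nice to meet you, Alice.")
memory.add("user", "What's my name?")
# The model now sees all three turns, so it can answer from context.
```

Because nothing is ever discarded, this type fits chat applications where conversations stay short enough for the model's context window.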
### Summary Memory
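A sketch of the summary pattern: older turns are folded into a running summary while recent turns stay verbatim. In a real implementation the summarization step would be an LLM call; a plain string join stands in here, and all names are illustrative.

```python
# Illustrative sketch of summary memory: compress old turns, keep recent ones.
class SummaryMemory:
    def __init__(self, keep_recent=2):
        self.summary = ""
        self.recent = []
        self.keep_recent = keep_recent

    def _summarize(self, turns):
        # Stand-in for an LLM summarization call (assumption of this sketch).
        return " ".join(t["content"] for t in turns)

    def add(self, role, content):
        self.recent.append({"role": role, "content": content})
        if len(self.recent) > self.keep_recent:
            old = self.recent[: -self.keep_recent]
            self.recent = self.recent[-self.keep_recent:]
            self.summary = (self.summary + " " + self._summarize(old)).strip()

    def load(self):
        return {"summary": self.summary, "recent": list(self.recent)}

memory = SummaryMemory(keep_recent=2)
for i, text in enumerate(["Order #123 is late.", "Let me check.", "It ships Friday."]):
    memory.add("user" if i % 2 == 0 else "assistant", text)
```

This keeps the prompt size roughly constant, which is why the table above recommends it for long-running conversations.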
### Entity Memory
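A sketch of the entity pattern: facts are indexed by the entities they mention, so later turns can retrieve everything known about a person or thing. Real implementations extract entities with an LLM or NER model; here the caller supplies them explicitly, which is an assumption of this sketch.

```python
# Illustrative sketch of entity memory: facts indexed by mentioned entities.
class EntityMemory:
    def __init__(self):
        self.entities = {}  # entity name -> list of facts mentioning it

    def add(self, fact, mentioned_entities):
        for name in mentioned_entities:
            self.entities.setdefault(name, []).append(fact)

    def load(self, name):
        return self.entities.get(name, [])

memory = EntityMemory()
memory.add("Alice is the project lead.", ["Alice"])
memory.add("Alice prefers weekly syncs with Bob.", ["Alice", "Bob"])
```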
### Combined Memory
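A sketch of the combined pattern: several memories receive every update, and their views are merged at load time. All class names here are hypothetical; the second memory is a toy so the merge has something to combine.

```python
# Illustrative sketch of combined memory: fan updates out, merge views in.
class BufferMemory:
    def __init__(self, max_messages=4):
        self.messages = []
        self.max_messages = max_messages

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        self.messages = self.messages[-self.max_messages:]  # keep the tail

    def load(self):
        return {"history": list(self.messages)}

class TurnCounterMemory:
    # Toy second memory so the combination has something to merge.
    def __init__(self):
        self.turns = 0

    def add(self, role, content):
        self.turns += 1

    def load(self):
        return {"turn_count": self.turns}

class CombinedMemory:
    def __init__(self, memories):
        self.memories = memories

    def add(self, role, content):
        for m in self.memories:  # fan every update out to all members
            m.add(role, content)

    def load(self):
        merged = {}
        for m in self.memories:  # merge each member's view into one dict
            merged.update(m.load())
        return merged

memory = CombinedMemory([BufferMemory(max_messages=2), TurnCounterMemory()])
for text in ["hi", "hello", "how are you?"]:
    memory.add("user", text)
```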
## Best Practices

### Memory Selection

- Choose the right memory type for your application:
  - Use `conversation` for simple chat applications
  - Use `summary` for long-running conversations that might exceed context windows
  - Use `entity` for tracking specific information mentioned by users
  - Use `token_buffer` for applications with strict token limits
  - Use `combined` for complex applications that need multiple memory capabilities
- Configure memory size appropriately:
  - Balance context retention against token usage
  - Consider the typical conversation length in your application
  - For models with smaller context windows, use summary memory to compress history
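As a sketch of the size trade-off, a bounded buffer keeps token usage flat by retaining only the most recent turns; the `max_messages` knob is hypothetical.

```python
from collections import deque

# Illustrative size-bounded buffer: older turns are evicted automatically,
# capping prompt growth at the cost of losing early context.
class BoundedBuffer:
    def __init__(self, max_messages=4):
        self.messages = deque(maxlen=max_messages)  # deque evicts the oldest

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def load(self):
        return list(self.messages)

buffer = BoundedBuffer(max_messages=4)
for i in range(6):
    buffer.add("user", f"message {i}")
# Only messages 2..5 remain; 0 and 1 were evicted.
```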
### Memory Usage

- Connect memory to chains for persistent context across interactions.
- Clear memory when appropriate:
  - At the end of a support session
  - When the user explicitly requests to start over
  - When the conversation context significantly changes
  - When hitting token limits with critical new information
- Implement memory management strategies.
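One management strategy is a single clearing policy that combines the conditions above; the flags and limits here are hypothetical inputs a chat loop would track.

```python
# Illustrative memory-management policy combining the clearing conditions
# above. All parameters are hypothetical inputs the application would track.
def should_clear_memory(session_ended, user_reset, token_count, token_limit):
    return session_ended or user_reset or token_count >= token_limit

# Clear on an explicit user reset even though token usage is fine:
needs_clear = should_clear_memory(
    session_ended=False, user_reset=True, token_count=1200, token_limit=4000
)
```

Centralizing the policy in one function makes the clearing rules easy to audit and test.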
### Performance and Cost Optimization

- Monitor token usage to manage costs:
  - Use token-aware memories such as `token_buffer` to stay within limits
  - Implement automatic summarization when conversations get long
  - Clear unnecessary memory when it is no longer needed
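The token-aware trimming a `token_buffer` performs can be sketched as follows; a real system would count tokens with the model's tokenizer, so the whitespace word count below is an assumption.

```python
# Illustrative token_buffer-style trim: drop the oldest messages until the
# history fits the budget. A whitespace word count stands in for a real
# tokenizer (an assumption of this sketch).
def trim_to_budget(messages, max_tokens, count=lambda text: len(text.split())):
    trimmed = list(messages)
    total = sum(count(m["content"]) for m in trimmed)
    while trimmed and total > max_tokens:
        total -= count(trimmed.pop(0)["content"])  # evict the oldest first
    return trimmed

history = [
    {"role": "user", "content": "first question about shipping"},
    {"role": "assistant", "content": "a long detailed answer"},
    {"role": "user", "content": "thanks"},
]
kept = trim_to_budget(history, max_tokens=5)
```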
- Use memory selectively:
  - Not all chains need memory; use it only when context matters
  - For stateless operations, avoid memory to reduce overhead
  - Consider the cost-benefit of maintaining extensive memory
- Optimize for latency:
  - Pre-fetch memory contents for critical paths
  - Use caching where appropriate
  - Consider async memory updates for non-critical information
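A read-through cache for a hot path can be as simple as the sketch below; the backing store and session key are hypothetical stand-ins for a real memory-service call.

```python
from functools import lru_cache

# Illustrative read-through cache on the memory-fetch hot path. The store
# and session key are hypothetical stand-ins for a real service call.
_slow_store = {"session-1": ("hello", "hi there")}

@lru_cache(maxsize=128)
def fetch_memory(session_id):
    # Stand-in for a slow network or database lookup.
    return _slow_store.get(session_id, ())

first = fetch_memory("session-1")   # miss: hits the store
second = fetch_memory("session-1")  # hit: served from cache
```

Note that `lru_cache` never invalidates on writes, so in practice the cache entry must be dropped whenever the session's memory is updated.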
### Error Handling

- Implement robust error handling around memory reads and writes.
- Have fallback strategies:
  - Continue with empty memory if retrieval fails
  - Create a new memory if the original is corrupted
  - Use simplified memory types as fallbacks
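The first fallback, continuing with empty memory when retrieval fails, can be sketched as a small wrapper; the loader function is a hypothetical store call.

```python
# Illustrative fallback: if memory retrieval fails, continue with empty
# memory so the conversation can proceed. The loader is a hypothetical call.
def load_memory_safely(loader):
    try:
        return loader()
    except Exception:
        # Degrade gracefully instead of failing the whole request.
        return []

def broken_loader():
    raise ConnectionError("memory store unreachable")

history = load_memory_safely(broken_loader)  # falls back to []
```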
## Security Considerations

- Never store sensitive information in memories:
  - Implement content filtering for PII
  - Have clear data retention policies
  - Provide users with options to clear their memory
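PII filtering can run as a redaction pass before anything is written to memory; the two patterns below are examples, not an exhaustive PII policy.

```python
import re

# Illustrative PII filter applied before writing to memory. These two
# patterns are examples only, not a complete PII policy.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text):
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

safe = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
```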
- Implement proper access controls:
  - Ensure memories are accessible only to authorized users
  - Use session-based memory access
  - Implement timeout policies for inactive memories
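Session-scoped access and an inactivity timeout can be combined in one store, as in the sketch below; the key scheme and timeout value are hypothetical, and a real service would also authenticate the caller.

```python
import time

# Illustrative session-scoped store with an inactivity timeout. Keys and
# timeout are hypothetical; a real service would also authenticate callers.
class SessionMemoryStore:
    def __init__(self, timeout_seconds=1800):
        self.timeout = timeout_seconds
        self._data = {}  # (user_id, session_id) -> (last_access, messages)

    def append(self, user_id, session_id, message, now=None):
        now = time.time() if now is None else now
        key = (user_id, session_id)
        _, messages = self._data.get(key, (now, []))
        messages.append(message)
        self._data[key] = (now, messages)

    def get(self, user_id, session_id, now=None):
        now = time.time() if now is None else now
        key = (user_id, session_id)
        entry = self._data.get(key)
        if entry is None or now - entry[0] > self.timeout:
            self._data.pop(key, None)  # expire inactive memory
            return []
        self._data[key] = (now, entry[1])  # refresh last-access time
        return list(entry[1])

store = SessionMemoryStore(timeout_seconds=1800)
store.append("user-1", "sess-a", "hello", now=0)
active = store.get("user-1", "sess-a", now=60)          # within timeout
expired = store.get("user-1", "sess-a", now=60 + 3600)  # past timeout
```

Keying by `(user_id, session_id)` means one user can never read another user's memory through this interface.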
## Support & Resources

For additional support:

- Email: [email protected]
- Discord: MOOD MNKY Developer Community
- GitHub: Issue Tracker