Mem0 provides a smart, self-improving memory layer for Large Language Models, enabling personalized AI experiences across applications.
```bash
pip install mem0ai
```
```python
from mem0 import Memory

# Initialize Mem0
m = Memory()

# Store a memory from any unstructured text
result = m.add(
    "I am working on improving my tennis skills. Suggest some online courses.",
    user_id="alice",
    metadata={"category": "hobbies"},
)
print(result)
# Created memory: Improving her tennis skills. Looking for online suggestions.

# Retrieve memories
all_memories = m.get_all()
print(all_memories)

# Search memories
related_memories = m.search(query="What are Alice's hobbies?", user_id="alice")
print(related_memories)

# Update a memory (use the id returned when the memory was created)
result = m.update(memory_id="m1", data="Likes to play tennis on weekends")
print(result)

# Get the change history of a memory
history = m.history(memory_id="m1")
print(history)
```
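Conceptually, `update` does not overwrite a memory's past: each memory keeps a change history that `history` can replay. A minimal sketch of that idea, using a toy in-memory store (the `ToyMemoryStore` class and its method names are illustrative, not mem0's internals):

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemoryStore:
    """Illustrative append-only store: updates record, never erase, history."""
    _data: dict = field(default_factory=dict)      # memory_id -> current text
    _history: dict = field(default_factory=dict)   # memory_id -> [(old, new), ...]

    def add(self, memory_id: str, text: str) -> None:
        self._data[memory_id] = text
        self._history[memory_id] = [(None, text)]

    def update(self, memory_id: str, text: str) -> None:
        old = self._data[memory_id]
        self._data[memory_id] = text
        self._history[memory_id].append((old, text))

    def history(self, memory_id: str) -> list:
        return self._history[memory_id]

store = ToyMemoryStore()
store.add("m1", "Improving her tennis skills")
store.update("m1", "Likes to play tennis on weekends")
print(store.history("m1"))
```

Each call to `update` appends an `(old, new)` pair, so the full evolution of a memory remains queryable.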
- Multi-Level Memory: User, Session, and AI Agent memory retention
- Adaptive Personalization: Continuous improvement based on interactions
- Developer-Friendly API: Simple integration into various applications
- Cross-Platform Consistency: Uniform behavior across devices
- Managed Service: Hassle-free hosted solution
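To make "multi-level memory" concrete: memories can be scoped to a user, a session within that user, or an agent, and retrieval only sees the matching scope. A toy illustration of that scoping idea (the parameter names echo the `add()` call above, but this is a simplified sketch, not mem0's implementation):

```python
from collections import defaultdict

class ScopedMemory:
    """Toy multi-level store: each (user, session, agent) scope is isolated."""

    def __init__(self):
        self._store = defaultdict(list)  # scope key -> list of memory texts

    @staticmethod
    def _key(user_id=None, session_id=None, agent_id=None):
        return (user_id, session_id, agent_id)

    def add(self, text, **scope):
        self._store[self._key(**scope)].append(text)

    def get_all(self, **scope):
        return list(self._store[self._key(**scope)])

mem = ScopedMemory()
mem.add("prefers clay courts", user_id="alice")                    # user-level
mem.add("asked about rackets", user_id="alice", session_id="s42")  # session-level
print(mem.get_all(user_id="alice"))
print(mem.get_all(user_id="alice", session_id="s42"))
```

Session-level memories stay invisible to user-level queries and vice versa, which is what keeps short-lived conversational context from polluting long-term user memory.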
For detailed usage instructions and API reference, visit our documentation at docs.mem0.ai.
For production environments, you can use Qdrant as a vector store:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        },
    },
}

m = Memory.from_config(config)
```
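One common way a `from_config`-style factory can turn the `"provider"` string into a concrete backend is a registry of provider classes. The sketch below is hypothetical (the registry, decorator, and `QdrantStore` stub are invented for illustration; they are not mem0's internals):

```python
# Hypothetical provider registry mapping a "provider" string to a backend class.
VECTOR_STORE_PROVIDERS = {}

def register(name):
    """Class decorator that records a backend under its provider name."""
    def decorator(cls):
        VECTOR_STORE_PROVIDERS[name] = cls
        return cls
    return decorator

@register("qdrant")
class QdrantStore:
    """Stub standing in for a real Qdrant-backed vector store."""
    def __init__(self, host="localhost", port=6333):
        self.host, self.port = host, port

def vector_store_from_config(config):
    vs = config["vector_store"]
    cls = VECTOR_STORE_PROVIDERS[vs["provider"]]   # dispatch on provider name
    return cls(**vs.get("config", {}))             # forward backend options

store = vector_store_from_config(
    {"vector_store": {"provider": "qdrant",
                      "config": {"host": "localhost", "port": 6333}}}
)
print(type(store).__name__, store.host, store.port)
```

The payoff of this pattern is that swapping vector stores is a one-line config change rather than a code change.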
- Integration with various LLM providers
- Support for LLM frameworks
- Integration with AI Agents frameworks
- Customizable memory creation/update rules
- Hosted platform support
Join our Slack or Discord community for support and discussions. If you have any questions, feel free to reach out to us there.