An experimental open reimplementation of LangMem using Claude 3's new function calling and MongoDB Atlas Vector Search, built for the Memory Hackathon.

Every feature is faithfully reimplemented against the original messages and schemas of https://langchain-ai.github.io/long-term-memory/quick_start/, both to understand/explain how they work and to see whether Claude's function calling and MongoDB's vector storage can fully substitute for OpenAI structured outputs.
Features covered:
- ✅ LangMem's 4 core memory types:
- ✅ User State: extracts entities into a specified schema
- ✅ User Append State: extracts Core Beliefs and Formative Events from a user's life
- ✅ User Semantic Memory: runs user reflection analysis and scores memories by recency, importance, and relevance
- ✅ Thread Summary: summarizes a conversation into a specified schema
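
Extractions like User State hinge on Claude's tool use (function calling) returning data that fits a fixed schema. Below is a minimal sketch of how that wiring might look; the tool name, schema fields, and model string are illustrative assumptions, not this repo's actual definitions:

```python
# Sketch of schema-constrained extraction via Claude tool use.
# The tool name and fields here are hypothetical examples.

USER_STATE_TOOL = {
    "name": "save_user_state",
    "description": "Extract entities about the user into a fixed schema.",
    "input_schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "location": {"type": "string"},
            "interests": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["name"],
    },
}


def extract_tool_input(content_blocks, tool_name):
    """Return the structured input of the first matching tool_use block."""
    for block in content_blocks:
        if block.get("type") == "tool_use" and block.get("name") == tool_name:
            return block["input"]
    return None


# With the Anthropic SDK, the call would look roughly like:
#   client = anthropic.Anthropic()
#   resp = client.messages.create(
#       model="claude-3-opus-20240229",
#       max_tokens=1024,
#       tools=[USER_STATE_TOOL],
#       messages=[{"role": "user", "content": "Hi, I'm Ada from Berlin."}],
#   )
#   state = extract_tool_input(
#       [b.model_dump() for b in resp.content], "save_user_state")
```

The extracted dict can then be validated against the schema and upserted as the user's current state.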
- LangMem retrieval APIs with MongoDB Atlas / local `mongod`:
  - `add_messages`
  - `list_messages`
  - `query_user_memory`
  - `trigger_all_for_thread` (or for a user) -> runs the 4 core memories
  - `memory_function` -> CRUDL abstractions of core memory
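
`query_user_memory` presumably resolves to an Atlas Vector Search aggregation. A hedged sketch of what that query could look like with pymongo; the index name, field names, and projection are assumptions, not the repo's actual values:

```python
# Sketch of a $vectorSearch aggregation over stored memories.
# Index name, vector field, and filter field are hypothetical.

def build_memory_pipeline(query_vector, user_id, limit=5):
    """Build an Atlas Vector Search pipeline scoped to one user."""
    return [
        {
            "$vectorSearch": {
                "index": "memory_index",         # hypothetical index name
                "path": "embedding",             # hypothetical vector field
                "queryVector": query_vector,
                "numCandidates": limit * 10,     # oversample, then trim
                "limit": limit,
                "filter": {"user_id": user_id},  # scope to one user
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]


# Against a real deployment this would run as roughly:
#   results = db.memories.aggregate(build_memory_pipeline(vec, "user-123"))
```
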
Todo (aka "too boring to do in a hackathon"):
- make async
- pluggable function calling (Fireworks, Mistral, etc.)
- pluggable persistence (Pinecone, Chroma, etc.)
- pluggable triggers :)
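
For the "pluggable persistence" item, one plausible shape is a small storage interface that the MongoDB-backed store already satisfies, so Pinecone or Chroma backends could drop in later. The names and signatures below are speculative, not part of this repo:

```python
# Speculative sketch of a pluggable persistence interface.
from typing import Protocol


class MemoryStore(Protocol):
    def add(self, user_id: str, text: str, embedding: list[float]) -> None: ...
    def query(self, user_id: str, embedding: list[float], k: int) -> list[str]: ...


class InMemoryStore:
    """Trivial reference backend; real backends would index the vectors."""

    def __init__(self) -> None:
        self._rows: list[tuple[str, str, list[float]]] = []

    def add(self, user_id: str, text: str, embedding: list[float]) -> None:
        self._rows.append((user_id, text, embedding))

    def query(self, user_id: str, embedding: list[float], k: int) -> list[str]:
        # naive nearest-neighbour by dot product, fine for a sketch
        scored = [
            (sum(a * b for a, b in zip(embedding, vec)), text)
            for uid, text, vec in self._rows
            if uid == user_id
        ]
        return [text for _, text in sorted(scored, reverse=True)[:k]]
```
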
Setup:
- install the Anthropic SDK and pymongo as needed: `pip install anthropic pymongo`
- have MongoDB installed: `brew install mongodb-atlas`
- run a local `mongod` against a data path: `mongod --dbpath mongodb`
- or set up a local Atlas deployment (grab the connection string it prints): `atlas deployments setup --type local`
- launch the notebook: `jupyter notebook` (`which -a jupyter` in case multiple instances are installed)
- pymongo tutorial: https://pymongo.readthedocs.io/en/stable/tutorial.html