Burr makes it easy to develop applications that make decisions (chatbots, agents, simulations, etc...) from simple python building blocks.
Burr works well for any application that uses LLMs, and can integrate with any of your favorite frameworks. Burr includes a UI that can track/monitor/trace your system in real time, along with pluggable persisters (e.g. for memory) to save & load application state.
See the documentation. There's a quick (<3 min) video intro, a longer video intro & walkthrough, and a blog post. Join the Discord for help/questions.
Install from PyPI:

```bash
pip install "burr[start]"
```
(see the docs if you're using poetry)
Then run the UI server:
```bash
burr
```
This will open up Burr's telemetry UI. It comes loaded with some default data so you can click around. It also has a demo chat application to help demonstrate what the UI captures, enabling you to see things change in real time. Hit the "Demos" sidebar on the left and select `chatbot`. Chatting requires the `OPENAI_API_KEY` environment variable to be set, but you can still see how it works if you don't have an API key.
Next, start coding / running examples:
```bash
git clone https://github.com/dagworks-inc/burr && cd burr/examples/hello-world-counter
python application.py
```
You'll see the counter example running in the terminal, along with the trace being tracked in the UI. See if you can find it.
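Under the hood, an application shows up in the UI because a tracker is attached at build time. Here's a minimal self-contained sketch, assuming the `"local"` tracker from the docs (the `increment` action and the `demo_counter` project name are ours, not the example's):

```python
from burr.core import action, State, ApplicationBuilder

@action(reads=["count"], writes=["count"])
def increment(state: State) -> State:
    # trivial action so the sketch runs on its own
    return state.update(count=state["count"] + 1)

app = (
    ApplicationBuilder()
    .with_actions(increment)
    .with_transitions(("increment", "increment"))  # self-loop
    .with_state(count=0)
    .with_entrypoint("increment")
    .with_tracker("local", project="demo_counter")  # logs locally so the burr UI picks it up
    .build()
)
app.run(halt_after=["increment"])  # halts after the first pass through increment
```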
For more details see the getting started guide.
With Burr you express your application as a state machine (i.e. a graph/flowchart). You can (and should!) use it for anything in which you have to manage state, track complex decisions, add human feedback, or dictate an idempotent, self-persisting workflow.
The core API is simple -- the Burr hello-world looks like this (plug in your own LLM, or copy from the docs for gpt-X)
```python
from burr.core import action, State, ApplicationBuilder

@action(reads=[], writes=["prompt", "chat_history"])
def human_input(state: State, prompt: str) -> State:
    # your code -- write what you want here!
    chat_item = {"role": "user", "content": prompt}
    return state.update(prompt=prompt).append(chat_history=chat_item)

@action(reads=["chat_history"], writes=["response", "chat_history"])
def ai_response(state: State) -> State:
    content = _query_llm(state["chat_history"])  # Burr doesn't care how you use LLMs!
    chat_item = {"role": "assistant", "content": content}
    return state.update(response=content).append(chat_history=chat_item)

app = (
    ApplicationBuilder()
    .with_actions(human_input, ai_response)
    .with_transitions(
        ("human_input", "ai_response"),
        ("ai_response", "human_input"),
    )
    .with_state(chat_history=[])
    .with_entrypoint("human_input")
    .build()
)
*_, state = app.run(halt_after=["ai_response"], inputs={"prompt": "Who was Aaron Burr, sir?"})
print("answer:", app.state["response"])
```
Burr includes:
- A (dependency-free) low-abstraction python library that enables you to build and manage state machines with simple python functions
- A UI you can use to view execution telemetry for introspection and debugging
- A set of integrations to make it easier to persist state, connect to telemetry, and integrate with other systems
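For example, persistence is a builder concern rather than something you wire into your actions. A rough sketch using the bundled SQLite persister (the `SQLLitePersister` name and `initialize_from` usage follow the docs at time of writing; treat the exact API as an assumption and check the reference for your version):

```python
from burr.core import action, State, ApplicationBuilder
from burr.core.persistence import SQLLitePersister

@action(reads=["count"], writes=["count"])
def increment(state: State) -> State:
    return state.update(count=state["count"] + 1)

persister = SQLLitePersister(db_path="./burr.db", table_name="burr_state")
persister.initialize()  # creates the backing table if it doesn't exist

app = (
    ApplicationBuilder()
    .with_actions(increment)
    .with_transitions(("increment", "increment"))
    .initialize_from(
        persister,
        resume_at_next_action=True,          # resume where the last run stopped
        default_state={"count": 0},          # used only when no prior state exists
        default_entrypoint="increment",
    )
    .with_state_persister(persister)               # save state after each step
    .with_identifiers(app_id="my-app-instance")    # hypothetical stable ID to resume later
    .build()
)
app.run(halt_after=["increment"])
print(app.state["count"])  # grows by one each time you rerun the script
```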
Burr can be used to power a variety of applications, including:
- A simple gpt-like chatbot
- A stateful RAG-based chatbot
- An LLM-based adventure game
- An interactive assistant for writing emails
As well as a variety of (non-LLM) use cases, including a time-series forecasting simulation and hyperparameter tuning.
And a lot more!
Using hooks and other integrations you can (a) integrate with any of your favorite vendors (LLM observability, storage, etc...), and (b) build custom actions that delegate to your favorite libraries (like Hamilton).
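As a sketch of what a custom hook looks like, modeled on the lifecycle API in the docs (the print-based logging is purely illustrative; swap in your observability client of choice):

```python
from burr.core import State
from burr.lifecycle import PostRunStepHook, PreRunStepHook

class PrintLnHook(PreRunStepHook, PostRunStepHook):
    """Called around every step the application runs."""

    def pre_run_step(self, *, state: State, action, **future_kwargs):
        print(f"starting action: {action.name}")

    def post_run_step(self, *, state: State, action, **future_kwargs):
        print(f"finished action: {action.name}")

# register it at build time, e.g.:
# app = ApplicationBuilder()....with_hooks(PrintLnHook())....build()
```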
Burr will not tell you how to build your models, how to query APIs, or how to manage your data. It will help you tie all these together in a way that scales with your needs and makes following the logic of your system easy. Burr comes out of the box with a host of integrations, including tooling to build a UI in Streamlit and watch your state machine execute.
See the documentation for getting started, and follow the example. Then read through some of the concepts and write your own application!
While Burr is attempting something (somewhat) unique, there are a variety of tools that occupy similar spaces:
| Criteria | Burr | Langgraph | temporal | Langchain | Superagent | Hamilton |
|---|---|---|---|---|---|---|
| Explicitly models a state machine | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Framework-agnostic | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ |
| Asynchronous event-based orchestration | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |
| Built for core web-service logic | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ |
| Open-source user-interface for monitoring/tracing | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Works with non-LLM use-cases | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ |
Burr is named after Aaron Burr, founding father, third VP of the United States, and murderer/arch-nemesis of Alexander Hamilton. What's the connection with Hamilton? This is DAGWorks' second open-source library release after the Hamilton library. We imagine a world in which Burr and Hamilton lived in harmony and saw through their differences to better the union. We originally built Burr as a harness to handle state between executions of Hamilton DAGs (because DAGs don't have cycles), but realized that it has a wide array of applications and decided to release it more broadly.
While Burr is stable and well-tested, we have quite a few tools/features on our roadmap!
- Recursive state machines. Run Burr within Burr to get hierarchical agents/parallelism + track through to the UI.
- Testing & eval curation. Curating data with annotations and being able to export these annotations to create unit & integration tests.
- Various efficiency/usability improvements for the core library (see planned capabilities for more details). This includes:
  - Fully typed state with validation
  - First-class support for retries + exception management
  - More integration with popular frameworks (LCEL, LlamaIndex, Hamilton, etc...)
- Capturing & surfacing extra metadata, e.g. annotations for a particular point in time, that you can then pull out for fine-tuning, etc.
- Tooling for hosted execution of state machines, integrating with your infrastructure (Ray, Modal, FastAPI + EC2, etc...)
- Storage integrations. More integrations with technologies like Redis, MongoDB, MySQL, etc. so you can run Burr on top of what you have available.
- More out of the box plugins for fine-grained tracing, e.g. decorators for your functions, LLM clients, etc.
If you want to avoid self-hosting the above solutions, we're building Burr Cloud. To let us know you're interested, sign up here for the waitlist to get access.
We welcome contributors! To get started on developing, see the developer-facing docs.
Thank you to the users who have contributed core functionality, integrations, or examples, and to those who have contributed small docs fixes, design suggestions, and bug reports.