Add traceloop / OpenLLMetry integration docs #51
Conversation
Hi @nirga - Thanks for your contribution to the Haystack integrations. Could I ask how this integrates with Haystack? To add this to the Haystack integrations page, I think it's best to have a bit more information on how people can use it with Haystack pipelines, what steps they need to take to start monitoring, etc. Any additional info would be helpful!
Thanks @TuanaCelik for your response. That's the magic of it! We're monkey-patching Haystack internally, so devs don't need to do anything and they get visibility in their existing observability platform.
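For readers unfamiliar with the approach: "monkey-patching" here means replacing a library's methods at import time with wrappers that record telemetry and then call the original code. Below is a minimal, self-contained sketch of the pattern; the `Pipeline` class and `instrument` helper are stand-ins for illustration, not the actual traceloop-sdk or Haystack internals.

```python
import functools
import time

class Pipeline:
    """Stand-in for haystack.pipelines.Pipeline (illustration only)."""
    def run(self, query):
        return {"answers": [query.upper()]}

spans = []  # collected "spans", standing in for an OpenTelemetry exporter

def instrument(cls, method_name):
    """Replace cls.method_name with a wrapper that records a span-like dict."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.time()
        result = original(self, *args, **kwargs)
        spans.append({
            "name": f"{cls.__name__}.{method_name}",
            "duration_s": time.time() - start,
        })
        return result

    setattr(cls, method_name, wrapper)

# Patch once at startup; user code calls pipeline.run() as usual.
instrument(Pipeline, "run")
result = Pipeline().run("hello")
# result is unchanged; a span was recorded as a side effect
```

Because the patch is applied inside the SDK's init call, user pipelines run unmodified while every `run()` call produces a trace span.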
@nirga it's still not fully clear to me. Here's a colab where you can see me trying to use this: https://colab.research.google.com/drive/1AIfIQBUZsSzeETRipBJtXOx2Wnld2QXJ?usp=sharing Here are some things I've been able to figure out; if it makes sense to you, adding them to the integration page would be super useful 🙏
Thanks in advance for all the info :)
More experiments :)
I looked at the source code of openllmetry and then ran the new `run()` function, but I still can't see any dashboard.
@TuanaCelik excited to update that I ended up building a custom OpenTelemetry instrumentation for Haystack. It's all happening behind the scenes, and I was able to auto-instrument OpenAI with just one line of code (which is really exciting!). So for example this is what I ran:

```python
import os

from haystack.nodes import PromptNode, PromptTemplate, AnswerParser
from haystack.pipelines import Pipeline
from traceloop.sdk import Traceloop

# One line to initialize tracing; everything else is instrumented automatically.
Traceloop.init(app_name="haystack_app")

prompt = PromptTemplate(
    prompt="Tell me a joke about {query}\n",
    output_parser=AnswerParser(),
)

prompt_node = PromptNode(
    model_name_or_path="gpt-4",
    api_key=os.getenv("OPENAI_API_KEY"),
    default_prompt_template=prompt,
)

pipeline = Pipeline()
pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["Query"])

query = "OpenTelemetry"
result = pipeline.run(query=query)
print(result["answers"][0].answer)
```

And here are the results on our dashboard (we have similar results on Honeycomb and Datadog). And here are the results from your colab (you can track the trace from getting the data from Weaviate all the way to OpenAI, which is nice). Let me know your thoughts and whether it makes sense to merge this.
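The "one line of code" works because the init call can detect which libraries are installed and apply the matching instrumentation for each. Here's a toy, self-contained sketch of that dispatch pattern; the names (`register`, `init`, `INSTRUMENTORS`) are illustrative and not the actual traceloop-sdk API, and it probes a stdlib module (`json`) so the example runs anywhere.

```python
import importlib.util

INSTRUMENTORS = {}  # module name -> function that patches that library

def register(module_name):
    """Decorator that registers an instrumentor for a target library."""
    def deco(fn):
        INSTRUMENTORS[module_name] = fn
        return fn
    return deco

@register("json")  # stands in for a real target like "openai" or "haystack"
def instrument_json():
    return "json instrumented"

@register("definitely_not_installed_xyz")
def instrument_missing():
    return "never runs"  # skipped: library is not importable

def init(app_name):
    """Apply every instrumentor whose target library is importable."""
    applied = []
    for module_name, fn in INSTRUMENTORS.items():
        if importlib.util.find_spec(module_name) is not None:
            applied.append(fn())
    return applied

print(init("haystack_app"))  # → ['json instrumented']
```

Instrumentors for libraries that aren't installed are simply skipped, which is why a single `init()` call is safe to ship regardless of the user's environment.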
@nirga this looks great, definitely makes sense to merge this. I think we could add the screenshots, code examples, and info you provide here to the .md file too. I'm happy to do that later today and ask you to have a look. Excited for the community to see this :) PS, some more info: Haystack is currently going through a major update to 2.0, which will still take some months. So we would likely ping you to make sure the integration still works when that happens, or help you figure out how to update it when the time comes.
Sounds good! 😃 thanks!
That was a bug specific to OpenTelemetry + Colab (it occurred because the spans weren't flushed after running a cell). Fixed now in v0.0.42 of the SDK.
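The flushing issue is worth understanding: tracing SDKs typically batch finished spans in memory and export them periodically or at clean process exit. A notebook cell finishes without the process exiting, so a batch buffer can sit unexported and nothing ever reaches the dashboard. Below is a toy model of that failure mode (not the OpenTelemetry SDK's actual classes; names like `BatchProcessor` are illustrative):

```python
class BatchProcessor:
    """Toy model of a batching span processor (illustration only)."""
    def __init__(self, batch_size=10):
        self.buffer = []     # spans waiting to be exported
        self.exported = []   # spans that reached the "backend"
        self.batch_size = batch_size

    def on_end(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.batch_size:
            self.force_flush()

    def force_flush(self):
        """Export everything in the buffer immediately."""
        self.exported.extend(self.buffer)
        self.buffer.clear()

proc = BatchProcessor()
for i in range(3):
    proc.on_end(f"span-{i}")

# Only 3 of 10 spans are buffered, so nothing has been exported yet.
# In a notebook, the process never exits, so this buffer never drains:
assert proc.exported == []

proc.force_flush()  # the kind of explicit flush the SDK fix performs per cell
assert proc.exported == ["span-0", "span-1", "span-2"]
```

Forcing a flush after each cell (or lowering the batch window) is the standard remedy for span loss in interactive environments.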
Not on Colab, but I got it to work in my local notebook! Making some suggestions and merging today 🎉