- traceloop-sdk: default value for metrics endpoint (#711)
- instrumentation deps without the SDK (#707)
- langchain: support custom models (#706)
- openai: enrich assistant data if not available (#705)
- openai: support pre-created assistants (#701)
- openai: assistants API (#673)
- pinecone: instrument pinecone query embeddings (#368)
- traceloop-sdk: custom span processor's on_start is honored (#695)
- openai: do not import tiktoken if not used
- sdk: exclude api.traceloop.com from requests
- openai: support reporting token usage in streaming mode (#661)
- anthropic: support messages API (#671)
- auto-instrumentation support (#662)
- sample: poetry issues; litellm sample
- sdk: better logging for otel metrics
- sdk: error for manually providing instrumentation list
- support python 3.12 (#639)
- traceloop-sdk: log error message when a wrong API key is provided (#638)
- openai: support tool syntax (#630)
- sdk: protect against unserializable inputs/outputs (#626)
- watsonx: metric support (#593)
- instrumentations: add entry points to support auto-instrumentation (#592)
- llamaindex: backport to support v0.9.x (#590)
- openai: is_streaming attribute (#589)
- openai: span events on completion chunks in streaming (#586)
- openai: streaming metrics (#585)
- watsonx: stream generate support (#552)
- watsonx: init OTEL_EXPORTER_OTLP_INSECURE before importing watsonx models (#549)
- link back to repo in pyproject.toml (#548)
- basic support for OpenTelemetry metrics and token usage metrics in OpenAI v1 (#369)
- weaviate: implement weaviate instrumentation (#394)
- watsonx: exclude HTTP request; add span for model initialization (#543)
- llamaindex: instrument agents & tools (#533)
- openai: fix with_raw_response redirect crashing span (#536)
- openai: track client attributes for v1 SDK of OpenAI (#522)
- sdk: replaced MySQL instrumentor with SQLAlchemy (#531)
- sdk: fail gracefully if input/output is not json serializable (#525)
- new PR template (#524)
- cohere: enrich rerank attributes (#476)
- llamaindex: support query pipeline (#475)
- Qdrant instrumentation (#364)
- langchain: support LCEL (#473)
- sdk: fail gracefully in case of input/output serialization failure (#472)
- llamaindex: support both new and legacy llama_index versions (#422)
- sdk: url for getting API key (#424)
- openai: handle async streaming responses for openai v1 client (#421)
- support both new and legacy llama_index versions (#420)
- sdk: support input/output of tasks & workflows (#419)
- langchain: backport to 0.0.346 (#418)
- openai: handle OpenAI async completion streaming responses (#409)
- README
- re-enabled haystack instrumentation (#77)
- resource_attributes always being None (#359)
- watsonx support for traceloop (#341)
- sdk: support arbitrary resources (#338)
- bug in managed prompts (#337)
- support langchain v0.1 (#320)
- otel deps (#336)
- openai: instrument embeddings APIs (#335)
- google-vertexai-instrumentation (#289)
- version bump error with replicate (#318)
- replicate release (#316)
- semconv: added top-k (#291)
- support anthropic v0.8.1 (#301)
- ci: fix replicate release (#285)
- replicate support (#248)
- support pydantic v1 (#282)
- broken tests (#281)
- sdk: user feedback scores (#247)
- openai: async streaming instrumentation (#245)
- send SDK version on fetch requests (#239)
- support async workflows in llama-index and openai (#233)
- sdk: support vision api for prompt management (#234)
- openai: langchain streaming bug (#225)
- traceloop-sdk: support explicit prompt versioning in prompt management (#221)
- bedrock support (#218)
- lint issues
- openai: attributes for functions in request (#211)
- llama-index: support ollama completion (#212)
- sdk: flag for dashboard auto-creation (#210)
- new logo
- python 3.8 compatibility (#198)
- cohere: cohere chat token usage (#196)
- disable telemetry in tests (#171)
- sdk telemetry data (#168)
- make the auto-create path persistent (#170)
- openai: yield chunks for streaming (#166)
- llamaindex auto instrumentation (#157)
- openai: new OpenAI API v1 (#154)
- sdk: max_tokens are now optional from the backend (#153)
- errors on logging openai streaming completion calls (#144)
- langchain: improved support for agents and tools (#143)
- support streaming API for OpenAI (#142)
- prompt-registry: remove redundant variables print
- tracing: add missing prompt manager template variables to span attributes (#140)
- sdk: allow overriding processor & propagator (#139)
- proper propagation of api key to fetcher (#138)
- ci-cd: release workflow fetches an outdated commit in release package jobs
- disable syncing when no API key is defined (#135)
- ci-cd: finalize release flow (#133)
- ci-cd: fix release workflow publish step
- ci-cd: add release workflow (#132)
- release workflow credentials
- disable content tracing for privacy reasons (#118)
- add prompt version hash (#119)
- propagate prompt management attributes to llm spans (#109)
- support association IDs as objects (#111)
- hugging-face transformers pipeline instrumentation (#104)
- add chromadb instrumentation + fix langchain instrumentation (#103)
- export to Grafana tempo (#95)
- langchain instrumentation (#88)
- cohere: support for chat and rerank (#84)
- cohere instrumentation (#82)
- Anthropic instrumentation (#71)
- basic prompt management (#69)
- Pinecone Instrumentation (#3)
- basic testing framework (#70)
- haystack instrumentations (#55)
- auto-create link to traceloop dashboard
- setting headers for exporting traces
- sdk code + openai instrumentation (#4)
- sdk: disable sync when using external exporter
- disable content tracing when not overridden (#121)
- langchain: add retrieval_qa workflow span (#112)
- traceloop-sdk: logging of service name in traces (#99)
- do not trigger dashboard auto-creation if exporter is set (#96)
- docs: clarification on getting API key
- chore: spaces and nits on README
- docs: bad link for python SDK
- docs: updated TRACELOOP_BASE_URL (#81)
- add openai function call data to telemetry (#80)
- sdk: disabled prompt registry by default (#78)
- support pinecone non-grpc (#76)
- support python 3.12
- docs: upgrades; docs about prompt management (#74)
- traceloop-sdk: missing lockfile (#72)
- traceloop-sdk: flushing in notebooks (#66)
- py security issue
- docs: update exporting.mdx to include nr instrumentation (#12)
- sdk: async decorators not awaited
- sdk: missing dependency
- warn if Traceloop wasn't initialized properly (#11)
- match new dashboard API
- traceloop-sdk: duplicate spans reporting (#10)
- moved api key to /tmp
- /v1/traces is always appended to endpoint
- parse headers correctly
- traceloop-sdk: replace context variables with otel context + refactor (#8)
- traceloop sdk initialization and initial versions release for instrumentations (#7)
- wrong imports and missing code components (#6)
- gitignore
- README