All notable changes to this project will be documented in this file. See standard-version for commit guidelines.
### Features

- add anthropic claude llms (1e09f67)
- add api rate-limiter support (e4a8863)
- add azure openai (4478ed1)
- add built-in function for embeddings (37dfa99)
- add business information extraction prompt (cb2dca5)
- add caching proxy (ba57bf9)
- add caching proxy (335dcbd)
- add google gemini safety controls (7783bbe)
- add google palm models (cf6bf11)
- add gpt-4 support (405b87e)
- add ollama (c1181e0)
- add semantic router (2096158)
- add streaming support for gemini and cohere (a946d52)
- add streaming support to proxy (5edd33c)
- add support for embeddings to use with vector search (4bd4ba4)
- add support for json type and other fixes (3249746)
- add together compute llm api support (37fc9cf)
- add vector db and embeddings example (141ad9f)
- added a new llm alephalpha (bff5f51)
- added agent tracing (4bc9ae7)
- added claude3, gemini and fixed openai azure (1c902d1)
- added dsp (fc9c292)
- added in memory vector db with serialize to disk (5036933)
- added mistral support (67747a0)
- added more tests (fe33451)
- added multi-modal support to anthropic api and other fixes (95a0680)
- added openai chat-gpt api support (eb4b151)
- added prefix Ax across type namespace (3a9daf0)
- added proxy bin (65adabe)
- added the dsp-style bootstrap few-shot optimizer (eab69c8)
- agent framework, agents can use other agents (93cbfb3)
- agent framework, agents can use other agents (a7420e7)
- ai balancer to pick the cheapest llm with automatic fallback (a8f7b7b)
- auto-fix json syntax errors (dc27812)
- automatic long term memory (e94ffd5)
- automatic vectordb retrieval augmented generation in proxy (d081c18)
- aws bedrock support (acdc89b)
- big [breaking] refactor (c97395d)
- convert any document format to text, plus a built-in full RAG engine (7f3f28c)
- dbmanager handles chunking, embedding and search for text (5c125f8)
- deepseek and cohere function calling (5341700)
- enable remote logging with LLMC_APIKEY env var (e04b257)
- gpt-4o added (a388219)
- huggingface data loader (47c6c0e)
- improve trace logging (12776be)
- improved debug logs (38e869e)
- include reasoning in tracing (e11f665)
- iterate to completion if max token length reached (f9d0f50)
- JS code interpreter function (b3de309)
- library is now down to a single dependency (efaaa03)
- llm converts meeting notes to trello tasks example (633cb95)
- lots of api improvements and many bug fixes (036cd06)
- major refactor to enable tracing and usage tracking (a7c980c)
- make it easier to customize prompts (1870ee7)
- migrated from commonjs to ES2022 (c8ad44b)
- multi-modal dsp, use images in input fields (a170b55)
- new agent framework (1e7040c)
- new agent prompt (6c4df2c)
- new llm proxy for tracing usage (ab36530)
- new llmclient command and crawler to embed and vectorize websites (47a61f2)
- new llmclient command and crawler to embed and vectorize websites (ede47c5)
- openai whisper support (50e864f)
- parsing and processing output fields and functions while streaming (95a7a93)
- proxy support for all llms (1e1edc3)
- req tracing added to ai classes (dad8c3c)
- spider to embed website (56f4598)
- streaming support (65839a9)
- support for google vertex (49ee383)
- support for Hugging Face and other updates (8f64432)
- support for remote tracing (033a67a)
- track token usage (bd4f798)
- true realtime output validation while streaming (4308b99)
- updates to agent framework api (472efbf)
- vector db query rewriting (52fad9c)
- vector db query rewriting (bce6d19)
- vector db support (0ea1c7f)
- welcome llm-client (225ec5a)
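Two of the feature entries above (the ai balancer and the cheapest-llm routing with automatic fallback) describe the same technique: order providers by token price and fall back down the list on error. Below is a minimal, generic TypeScript sketch of that idea; the `Provider` and `Balancer` names are illustrative only and are not the library's actual API.

```typescript
// Hypothetical provider shape; the real library's types will differ.
interface Provider {
  name: string;
  costPer1kTokens: number;
  chat(prompt: string): Promise<string>;
}

// Try the cheapest provider first; on error, fall back to the next cheapest.
class Balancer {
  private providers: Provider[];

  constructor(providers: Provider[]) {
    // Sort ascending by price so index 0 is the cheapest option.
    this.providers = [...providers].sort(
      (a, b) => a.costPer1kTokens - b.costPer1kTokens
    );
  }

  async chat(prompt: string): Promise<string> {
    let lastError: unknown;
    for (const p of this.providers) {
      try {
        return await p.chat(prompt);
      } catch (err) {
        lastError = err; // provider failed; try the next cheapest one
      }
    }
    throw lastError; // every provider failed
  }
}
```

The key design point is that price ordering and error handling live in one place, so callers see a single `chat` call regardless of which backend ultimately served it.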
### Bug Fixes

- add opentelemetry support and other fixes (685fe80)
- Add Polyfill for TextDecoderStream to Ensure Compatibility with Bun #21 (540348d)
- add tools support to anthropic (1cc96b7)
- added missing typescript docs (2459a28)
- added system prompt to trace (335bb40)
- anthropic header issue (40dbce0)
- anthropic proxy endpoint (#7) (cf7c793)
- anthropic, updated other models (6e83d34)
- Azure OpenAI chat/completion call failed (#19) (#20) (0fad4f9)
- banner fixes (fdf453b)
- blank response with stream (ab7c62d)
- bug in new open ai chat model (654282d)
- bug with exporting enums (4fc0a3b)
- bug with exporting enums (4ca8c5d)
- build error (e4d29c3)
- build fixes (d8d4a47)
- build issue (7733463)
- build issue (071f476)
- build issue with previous version (cfe92fd)
- build issues (dd346c4)
- build issues (33394df)
- change name in package (9697d19)
- cleanup gitignore (6627ff3)
- common llm model config to ensure more deterministic outputs (0dce599)
- examples (973eb51)
- extra comma in package.json (16b36f5)
- fix in proxy (f778c9d)
- fix in proxy (f007d26)
- fix in proxy (ab87118)
- google gemini function calling (a9214f3)
- google gemini now works great (a6d6528)
- import issues (ce87294)
- improve examples (6e656cb)
- improved the customer support example (103b072)
- issue with cspell (806cba1)
- issue with rate limiter (0648ad7)
- json5 build issue (fe6de9c)
- major fixes to function calling (d44a373)
- make mistral function calling work (6bd9cf7)
- make stop sequence optional (d18addc)
- migrate npm package to llmclient (97a4a07)
- minor build fix (32f04b3)
- minor fix (62fbf38)
- minor fixes (3792700)
- minor fixes (574f73f)
- minor fixes (5ebf283)
- minor fixes (5bb612d)
- minor fixes (0fa4687)
- minor fixes (47eef31)
- minor fixes (0da9daf)
- minor fixes (15bd72f)
- minor issue with incorrect error (65e7ea8)
- missing imports (9071c9e)
- new ai balancer to route based on token pricing in case of error (7ea79a9)
- openai function tracing fixes (d623f03)
- opentelemetry tracing added to ai and vectordb (1918410)
- prettier config (8f4ca77)
- proxy command (9eaf6e7)
- proxy port can be set using the env var PORT (c08af8d)
- refactored request, response tracing (1ac7023)
- refactored usage reporting and other fixes (84bf661)
- remove comments (73e3a30)
- remove index folder committed by mistake (4d327ab)
- removed crawler (ab4ac6e)
- removed extra packages (faae6d2)
- renamed emailprompt to messageprompt (3063a90)
- renamed responseSchema to resultSchema for clarity (b238b88)
- resolved all typescript strict mode errors and warnings (4f08fac)
- result error-correction fixes (eeaa12e)
- rewrite of error-correction code and other fixes (37b620e)
- separated embed usage data from completion usage (feb5619)
- set default model for openai to gpt-3.5-turbo (14ba73c)
- signature parser (a149361)
- streamlined the llm apis (0bbc8b0)
- system prompt fixes (b56201c)
- tests breaking (4c047ce)
- ts cleanup (be645d9)
- tuning and optimization fixes (d0b892c)
- type validation logic (88eec1e)
- update model defaults (1c70dd7)
- update model defaults (cc7e82f)
- update model defaults (1251372)
- update package name (ed5346c)
- updated Cohere reqValue to include input_type (required since v3), updated the Groq model to llama3-70b-8192 (llama2-70b-4096 was deprecated), and added a temporary fix to the marketing.py example (messageGuidelines expects a string, not a string array) (#22) (5885877)
- updates to debugging proxy (6476a23)
- valid healthcheck path for proxy (3adea76)
- various fixes (a0099a8)
- various fixes (7b75342)
- various fixes (440cb84)
- various fixes (516fbcb)
- various fixes (0978b29)
- various proxy fixes (61b693a)
- version update (c4794a3)
- zprompt type spec (db1f7e7)
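The "auto-fix json syntax errors" feature above names a technique worth illustrating: attempt a strict parse first, and only fall back to lightweight textual repairs when it fails. This is a generic TypeScript sketch of that pattern, assuming common LLM output mistakes (markdown code fences, trailing commas); the `parseWithRepair` name is hypothetical and not the library's API.

```typescript
// Parse model output as JSON, repairing common syntax slips on failure.
// Not the library's actual implementation; a sketch of the technique.
function parseWithRepair(text: string): unknown {
  try {
    return JSON.parse(text); // strict parse first
  } catch {
    const fixed = text
      // strip markdown code fences a model may wrap around the JSON
      .replace(/^```(?:json)?\s*|\s*```$/g, "")
      // drop trailing commas before a closing brace or bracket
      // (note: a naive regex like this can touch commas inside strings)
      .replace(/,\s*([}\]])/g, "$1");
    return JSON.parse(fixed); // reparse; throws if still invalid
  }
}
```

A production version would typically layer more repairs (unquoted keys, single quotes) behind the same try-then-fix structure, keeping the strict parse as the fast path.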