
feat: implementing openai wrapper #114

Conversation

Contributor

@noble-varghese commented Apr 3, 2024

Problem

Implementing the OpenAI wrapper for langfuse-js.

Changes

  • Added the OpenAI wrapper in the langfuse package, which implements tracing out of the box for all OpenAI chat and completions calls (with/without streaming).
  • Added test cases to support and validate the changes that were made.
  • Added a sample implementation below.
import OpenAI from "openai";
import { OpenAIWrapper } from "langfuse";


const client = OpenAIWrapper(new OpenAI());

const main = async () => {
    const completion = await client.chat.completions.create({
        messages: [{ role: "system", content: "Say this is a test!" }],
        model: "gpt-3.5-turbo",
        stream: true,
        user: "[email protected]",
        max_tokens: 300
    }, {});

    for await (const chunk of completion) {
        console.log(chunk.choices[0]?.delta?.content || "");
    }

    await client.flushAsync();
};

main();
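The out-of-the-box tracing described above can be pictured with a small self-contained sketch. Everything in it (the stub client, the `stored` array, `withTracing`) is hypothetical and only illustrates the interception pattern; it is not the langfuse implementation.

```typescript
type StoredGeneration = { model: string; input: unknown; output: unknown };
const stored: StoredGeneration[] = [];

// Hypothetical stand-in for the OpenAI client surface used above.
const stubClient: any = {
  chat: {
    completions: {
      create: async (_params: any) => ({
        choices: [{ message: { role: "assistant", content: "This is a test!" } }],
      }),
    },
  },
};

// Wrap create() so every call records a generation before returning.
function withTracing(c: any): any {
  const original = c.chat.completions.create.bind(c.chat.completions);
  c.chat.completions.create = async (params: any) => {
    const res = await original(params);
    // In the real wrapper this would be sent to Langfuse instead.
    stored.push({ model: params.model, input: params.messages, output: res.choices[0]?.message });
    return res;
  };
  return c;
}

const traced = withTracing(stubClient);
traced.chat.completions
  .create({ model: "gpt-3.5-turbo", messages: [{ role: "system", content: "Say this is a test!" }] })
  .then((res: any) => {
    console.log(stored.length);                  // 1
    console.log(res.choices[0].message.content); // This is a test!
  });
```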

Release info

Bump level

  • Major
  • Minor
  • Patch

Libraries affected

  • All of them
  • langfuse
  • langfuse-node

Changelog notes

  • Added support for X

@noble-varghese noble-varghese requested a review from a team as a code owner April 3, 2024 10:37
@noble-varghese noble-varghese marked this pull request as draft April 3, 2024 10:37
noble-varghese and others added 14 commits April 3, 2024 22:40
- Added docs for the wrapper
- Modified the model params to fetch all the parameters of the OpenAI chat and completions APIs
- Added the config params for the Langfuse tracer like metadata, trace_name, session_id, release and version
Added the following tests
- Chat completion streaming
- Chat completion without streaming
- Completion with streaming
- Completion without streaming
- Few-shot learning with streaming
- Few-shot learning without streaming
…the input structure for all the calls and output structure to send the entire message in chat calls
…se/langfuse-js into noble-varghese/openai-integration
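The config params mentioned in the commits above (metadata, trace_name, session_id, release, version) have to be separated from the OpenAI request parameters before the call is forwarded. A rough sketch of that separation, with hypothetical (camelCase) key names:

```typescript
// Hypothetical list of Langfuse-specific config keys; everything else is
// treated as an OpenAI request parameter.
const LANGFUSE_KEYS = ["metadata", "sessionId", "release", "version", "traceName"] as const;

function splitOptions(options: Record<string, unknown>) {
  const traceConfig: Record<string, unknown> = {};
  const openaiParams: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(options)) {
    if ((LANGFUSE_KEYS as readonly string[]).includes(key)) {
      traceConfig[key] = value;
    } else {
      openaiParams[key] = value;
    }
  }
  return { traceConfig, openaiParams };
}

const { traceConfig, openaiParams } = splitOptions({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "hi" }],
  sessionId: "session-123",
  metadata: { team: "search" },
});
console.log(Object.keys(traceConfig).sort().join(","));  // metadata,sessionId
console.log(Object.keys(openaiParams).sort().join(",")); // messages,model
```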
expect(generation.totalTokens).toBeDefined()
expect(generation.promptTokens).toBeDefined()
expect(generation.completionTokens).toBeDefined()
expect(generation.input).toBeDefined()
Member

add message array to test here?

Contributor Author

You mean statusMessage?

Contributor Author

I have added statusMessage. Did not find anything called message. Input and output are anyway being validated.

Member

I meant checking whether the input is correct.

langfuse/test/langfuse-openai.spec.ts
expect(generation.output).toMatch(content)
}, 10000);

it("Few-shot learning with streaming", async () => {
Member

what does this test add?

Contributor Author

This tests the scenario where multiple messages are sent in the messages array of the chat API, like a conversational exchange, and checks that the integration captures them in the trace.
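The kind of conversational input this test covers could look like the following (message contents are made up for illustration); the point is that the trace input should contain the whole array, not just the first message:

```typescript
// Hypothetical few-shot style message array: several role-alternating
// messages sent in a single chat call.
const fewShotMessages = [
  { role: "system", content: "You translate English to French." },
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Bonjour" },
  { role: "user", content: "Good night" },
];

// The trace input should capture every message, not only the first one.
const tracedInput = [...fewShotMessages];
console.log(tracedInput.length); // 4
```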

langfuse/test/langfuse-openai.spec.ts
package-lock.json
modelParams: Record<string, any>
}

const parseInputArgs = (args: Record<string, any>): InputArgsData => {
Member

could map tools / tool_choice etc. to metadata

Contributor Author

functions, function_call, tools and tool_choice are being sent as part of the input right now. Should that remain the same, with these additionally sent in the metadata as well? Or should they be sent only in metadata?

Contributor Author

The UI doesn't display tools and tool_choice in the regular display, but they are visible in JSON mode.

Member

The input is correct; we need to fix the front-end rendering when tools are at the top level, since currently they only render well when they are within a single message.

Makes sense to not move them to metadata, agree!
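Following the decision to keep tools in the input rather than metadata, a parseInputArgs along the lines discussed might split the arguments like this (the body is an illustrative sketch, not the merged code):

```typescript
interface InputArgsData {
  input: Record<string, any>;
  modelParams: Record<string, any>;
}

// Keys that stay together as the generation input, per the discussion above.
const INPUT_KEYS = ["messages", "prompt", "functions", "function_call", "tools", "tool_choice"];

const parseInputArgs = (args: Record<string, any>): InputArgsData => {
  const input: Record<string, any> = {};
  const modelParams: Record<string, any> = {};
  for (const [key, value] of Object.entries(args)) {
    // Messages/prompt and tool definitions go to input; everything else
    // (model, temperature, ...) is treated as a model parameter.
    (INPUT_KEYS.includes(key) ? input : modelParams)[key] = value;
  }
  return { input, modelParams };
};

const parsed = parseInputArgs({
  messages: [{ role: "user", content: "hi" }],
  tools: [{ type: "function", function: { name: "lookup" } }],
  model: "gpt-3.5-turbo",
  temperature: 0.2,
});
console.log(Object.keys(parsed.input).sort().join(","));       // messages,tools
console.log(Object.keys(parsed.modelParams).sort().join(",")); // model,temperature
```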


createGeneration(
output?: string,
usage?: Record<string, any>,
Member

does it make sense to type usage more strictly? promptTokens, completionTokens

Contributor Author

It does make sense to have typing there. Realised that Langfuse doesn't take the OpenAI usage details directly. Have updated the code and added tests for the same :)
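The stricter typing discussed here amounts to translating OpenAI's snake_case usage fields into camelCase fields; a sketch, with the camelCase names assumed from the test assertions earlier in this thread:

```typescript
// Shape of the usage object returned by the OpenAI API.
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Assumed camelCase shape on the Langfuse side.
interface LangfuseUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

const parseUsage = (usage: OpenAIUsage): LangfuseUsage => ({
  promptTokens: usage.prompt_tokens,
  completionTokens: usage.completion_tokens,
  totalTokens: usage.total_tokens,
});

const u = parseUsage({ prompt_tokens: 12, completion_tokens: 30, total_tokens: 42 });
console.log(u.promptTokens, u.completionTokens, u.totalTokens); // 12 30 42
```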

Member

What kind of translation do you need to do here? Generally we tried to take the OpenAI object as it is.

Contributor Author

This is more of a UI test case, which checks whether the UI renders the chats properly; a backend test doesn't make a lot of sense here.
Additionally, we can always confirm that the wrapper is not just taking the first element in the array.

Contributor Author

Removing this test from the list.

langfuse/test/langfuse-openai.spec.ts
}
}

interface WrapperConfig {
Member

I think going for camelCase here would be a better fit with the rest of the JS SDK (even though the OpenAI SDK uses snake_case). What do you think?

Contributor Author

I agree, this makes sense. We use camel casing everywhere else, and since this is our wrapper it makes sense to keep it camel cased.
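For illustration, the camelCase convention means a snake_case key like session_id would appear as sessionId in the wrapper config; a tiny hypothetical converter:

```typescript
// Convert a snake_case key (OpenAI SDK style) to camelCase (JS SDK style).
const toCamelCase = (key: string): string =>
  key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());

console.log(toCamelCase("session_id")); // sessionId
console.log(toCamelCase("trace_name")); // traceName
```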

@noble-varghese noble-varghese marked this pull request as ready for review April 5, 2024 06:53
@noble-varghese noble-varghese changed the title WIP: implementing openai wrapper feat: implementing openai wrapper Apr 5, 2024
@hassiebp hassiebp changed the base branch from main to hassieb/lfe-784-feat-add-openai-integration April 12, 2024 07:09
@hassiebp hassiebp merged commit b88776b into hassieb/lfe-784-feat-add-openai-integration Apr 12, 2024
9 checks passed
@hassiebp hassiebp deleted the noble-varghese/openai-integration branch April 12, 2024 07:09