
Store prompt in Answer and eval dataframe #4338

Closed
tstadel opened this issue Mar 6, 2023 · 0 comments · Fixed by #4341
Labels
topic:eval, type:feature (New feature or request)

Comments

@tstadel
Member

tstadel commented Mar 6, 2023

Is your feature request related to a problem? Please describe.
For evaluating and optimizing generative pipelines, it's essential to know which prompt (incl. the full context) was used for each result, e.g. whether the relevant document was part of the prompt.
Currently, this information is only available for PromptNode in the per-run debug output. However, it is missing at the result (i.e. Answer) level, and OpenAIAnswerGenerator doesn't provide it at all. Pipeline.eval does not store the PromptNode prompt either, so any analysis of the EvaluationResult lacks the prompt.
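
For reference, a minimal sketch of how the prompt can currently be dug out of the per-run debug output in Haystack v1; the pipeline YAML file name and the exact structure of the PromptNode's debug payload are illustrative:

```python
from pathlib import Path
from haystack.pipelines import Pipeline

# Load a generative QA pipeline that contains a PromptNode
# ("qa_pipeline.yaml" is just a placeholder here).
pipeline = Pipeline.load_from_yaml(Path("qa_pipeline.yaml"))

# Today the prompt only shows up in the per-run debug output:
result = pipeline.run(query="Who wrote Faust?", debug=True)

# "_debug" records input/output per node; the prompt has to be dug out
# of the PromptNode's debug payload (keys shown here are illustrative).
print(result["_debug"]["PromptNode"])

# The Answer objects themselves carry no prompt information:
print(result["answers"][0].meta)
```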

Describe the solution you'd like

  • store the prompt used by PromptNode and OpenAIAnswerGenerator in the returned Answer objects (e.g. in meta or a dedicated field)
  • store the prompt in the EvaluationResult dataframes (see the sketch after this list)
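
A rough sketch of what the requested behavior could look like from the user's side, continuing from the pipeline above; the meta["prompt"] key and the "prompt" dataframe column are illustrative names for fields that don't exist yet:

```python
# Hypothetical once this feature lands; field/column names are illustrative.
result = pipeline.run(query="Who wrote Faust?")
answer = result["answers"][0]
print(answer.meta["prompt"])  # full prompt (incl. retrieved context) behind this answer

# eval_labels is assumed to be a prepared list of MultiLabel objects.
eval_result = pipeline.eval(labels=eval_labels)
prompt_node_df = eval_result["PromptNode"]  # per-node dataframe from Pipeline.eval
print(prompt_node_df["prompt"].head())      # prompt column next to the existing eval columns
```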

Describe alternatives you've considered

Additional context
Add any other context or screenshots about the feature request here.
