Replies: 1 comment 1 reply
- It's being worked on here: #1083
-
- Related to this ticket: #977 and the problem of chat models repeating parts of the prompt instead of returning the answer alone. One way to encourage chat models to adhere to the format is to build the chat history with few-shot examples that interleave the assistant role, so each example's answer appears on its own as an assistant message (example below). Using the original poster's example as a single shot, GPT-3.5 consistently returns the answer alone as desired when we follow this pattern, though I'm not sure this is currently possible with DSPy. Compared with simply embedding the example inside the user message, where only 32/50 responses followed the desired format, putting the answer in as an assistant message yields 50/50 responses following the format.
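  A minimal sketch of the interleaving idea, independent of DSPy. All names here (`build_messages`, the example strings) are illustrative assumptions, not DSPy APIs: each few-shot example becomes a user turn followed by an assistant turn holding only the bare answer, so the model sees a demonstration of replying without echoing the prompt.

  ```python
  def build_messages(system_prompt, examples, question):
      """Build a chat history where each few-shot example is a user turn
      (the formatted prompt) followed by an assistant turn containing
      only the answer, teaching the model to reply with the answer alone."""
      messages = [{"role": "system", "content": system_prompt}]
      for ex_question, ex_answer in examples:
          messages.append({"role": "user", "content": ex_question})
          # The assistant turn holds the bare answer, with no prompt echo.
          messages.append({"role": "assistant", "content": ex_answer})
      # The real question goes last, in the same shape as the examples.
      messages.append({"role": "user", "content": question})
      return messages

  msgs = build_messages(
      "Answer with the final answer only.",
      [("Question: What is 2 + 2?\nAnswer:", "4")],
      "Question: What is 3 + 5?\nAnswer:",
  )
  ```

  The resulting `msgs` list can be passed directly as the `messages` argument of a chat-completion call; the key point is that the example answer lives in its own `assistant` message rather than inside the user prompt.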