I have a task that I want to add. The task is about answering binary questions. The only difference from other tasks is that the second question requires knowing the response to the first one. I use the loglikelihood strategy, so I pass two possible continuations, and for question 1 I decide the model's answer based on the output log-probs of these continuations. Then I put this question and the model's answer into the prompt for the next question, effectively storing a history of the model's answers. Each time, I pass the model the current question together with all previously answered questions and the answers the model considered more probable.
Is there a way to implement such a task without adding any code outside task.py or task.yaml?
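For reference, the chained flow described above can be sketched in plain Python. This is a minimal, model-free illustration: `score_continuation` is a hypothetical stub standing in for the harness's actual loglikelihood request (a real task would score the continuations with the model), and the stub is biased toward `" Yes"` so the example is deterministic.

```python
# Sketch of the chained loglikelihood strategy described in the question.
# `score_continuation` is a hypothetical stand-in for the model's
# loglikelihood call; here it is stubbed for illustration.

def score_continuation(prompt: str, continuation: str) -> float:
    """Hypothetical log-prob scorer. A real task would query the model;
    this stub simply prefers ' Yes' so the flow is reproducible."""
    return 0.0 if continuation == " Yes" else -1.0

def answer_questions(questions, choices=(" Yes", " No")):
    """Answer each binary question, feeding earlier Q/A pairs back
    into the prompt as conversation history."""
    history = ""
    answers = []
    for question in questions:
        prompt = history + question
        # Pick the continuation with the higher log-probability.
        best = max(choices, key=lambda c: score_continuation(prompt, c))
        answers.append(best.strip())
        # Append the question and the chosen answer to the history
        # so the next prompt contains all previous rounds.
        history = prompt + best + "\n"
    return answers

print(answer_questions(["Q1: Is water wet?",
                        "Q2: Was the previous answer correct?"]))
# → ['Yes', 'Yes']
```

The key point is that each request depends on the *decoded result* of the previous one, which is exactly what makes this hard to express as independent loglikelihood requests in a static task.yaml.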
I have a similar problem implementing GPQA with CoT. In the paper, CoT first obtains the model's output for the initial step and then adds it to the prompt to form the final prompt — the same dependent, multi-turn pattern described in the question above.