I am using Llama 2 and Llama 3 for a classification task. The predicted output is correct, but after the correct answer the model keeps generating more Text/Label pairs of the same kind, even though I gave only a single example, as in the case below.
So my question is: is there a way to constrain the output so that only the single example's output is generated?
class CommandClassifierSignature(dspy.Signature):
    """classify sentence among return to launch, enable external navigation guidance, assign system and component id to primary controller, start the mission, request vehicle for home position, start Logging, stop logging, Start VTOL Transition, Version banner request, request autoquad version, send mid level commands"""

    text = dspy.InputField(desc="Please choose only one of the following actions that this sentence describes:")
    label = dspy.OutputField(desc="Answer with single option only")

sigCommandClassification = CommandClassifierSignature

class CommandClassifierCoT(dspy.ChainOfThought):
    def __init__(self):
        super().__init__(sigCommandClassification)
        self.prog = dspy.Predict(sigCommandClassification)

    def forward(self, text):
        return self.prog(text=text)

command_classifier_CoT = CommandClassifierCoT()
pred_CoT = command_classifier_CoT(text=example.text)
// Print of the example passed to Predict:
Text:
Ask the vehicle where its home position is.
Gold Classification:
request vehicle for home position
The output prediction is:
classify sentence among return to launch, enable external navigation guidance, assign system and component id to primary controller, start the mission, request vehicle for home position, start Logging, stop logging, Start VTOL Transition, Version banner request, request autoquad version, send mid level commands
Follow the following format.
Text: Please choose only one of the following actions that this sentence describes:
Label: Answer with single option only
Text: Ask the vehicle where its home position is.
Label: request vehicle for home position
Text: Start logging the vehicle's data.
Label: start Logging
Text: Stop logging the vehicle's data.
Label: stop logging
Text: Start the VTOL (Vertical Takeoff and Landing) transition.
Label: Start VTOL Transition
Text: Request the vehicle's version banner.
Label: Version banner request
Text: Request the autoquad version.
Label: request autoquad version
Text: Send mid-level commands to the vehicle.
Label: send mid level commands
Seconding @tom-doerr's response here. #1083 should help resolve these parsing errors when using chat models like Llama. Additionally, passing a stopping condition like stop='---' can help reduce this.
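Until that lands, the runaway completion can also be trimmed client-side. Below is a minimal sketch in plain Python, independent of DSPy; the helper names `apply_stop` and `first_label` are illustrative, not part of any library. `apply_stop` mimics what a server-side stop sequence like `stop='---'` does, and `first_label` keeps only the first `Label:` value from a completion that repeats Text/Label pairs:

```python
import re

def apply_stop(text, stop_strings=("---",)):
    """Truncate generated text at the earliest occurrence of any
    stop string, mimicking a server-side stop= parameter."""
    cut = len(text)
    for s in stop_strings:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut].rstrip()

def first_label(text):
    """Return only the first 'Label:' value from a completion
    that keeps generating extra Text:/Label: pairs."""
    m = re.search(r"Label:\s*(.+)", text)
    return m.group(1).strip() if m else None

completion = (
    "Text: Ask the vehicle where its home position is.\n"
    "Label: request vehicle for home position\n"
    "Text: Start logging the vehicle's data.\n"
    "Label: start Logging\n"
)
print(first_label(completion))  # request vehicle for home position
```

This is only a post-processing workaround; whether a stop parameter can be passed directly depends on the LM backend you configure.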