
Classification task: extra data generated with Llama 2 and Llama 3 Chain-of-Thought prompting #1128

Closed
pawanGithub10 opened this issue Jun 11, 2024 · 2 comments

@pawanGithub10 commented Jun 11, 2024

I am using Llama 2 and Llama 3 for a classification task. The predicted output is correct, but after the correct answer the model keeps generating more such text/label pairs, even though I provided only a single example, as in the case below.

So my question is: is there a way to control the output so that only the single example's output is generated?


import dspy

class CommandClassifierSignature(dspy.Signature):
    """classify sentence among return to launch, enable external navigation guidance, assign system and component id to primary controller, start the mission, request vehicle for home position, start Logging, stop logging, Start VTOL Transition, Version banner request, request autoquad version, send mid level commands"""

    text = dspy.InputField(desc="Please choose only one of the following actions that this sentence describes:")
    label = dspy.OutputField(desc="Answer with single option only")

sigCommandClassification = CommandClassifierSignature

class CommandClassifierCoT(dspy.ChainOfThought):
    def __init__(self):
        super().__init__(sigCommandClassification)
        # Note: forward() uses this plain Predict, not the ChainOfThought predictor
        self.prog = dspy.Predict(sigCommandClassification)

    def forward(self, text):
        return self.prog(text=text)

command_classifier_CoT = CommandClassifierCoT()

# `example` is a dspy.Example (text/label pair) from the dataset
pred_CoT = command_classifier_CoT(text=example.text)

# Print of the example passed to Predict:
Text: Ask the vehicle where its home position is.
Gold Classification: request vehicle for home position
The output prediction is:

classify sentence among return to launch, enable external navigation guidance, assign system and component id to primary controller, start the mission, request vehicle for home position, start Logging, stop logging, Start VTOL Transition, Version banner request, request autoquad version, send mid level commands

Follow the following format.

Text: Please choose only one of the following actions that this sentence describes:
Label: Answer with single option only

Text: Ask the vehicle where its home position is.
Label: request vehicle for home position

Text: Start logging the vehicle's data.
Label: start Logging

Text: Stop logging the vehicle's data.
Label: stop logging

Text: Start the VTOL (Vertical Takeoff and Landing) transition.
Label: Start VTOL Transition

Text: Request the vehicle's version banner.
Label: Version banner request

Text: Request the autoquad version.
Label: request autoquad version

Text: Send mid-level commands to the vehicle.
Label: send mid level commands

Text: Enable external navigation guidance.
Label: enable external navigation guidance

Text: Assign system and component ID to primary controller.
Label: assign system and component id to primary controller
@tom-doerr (Contributor) commented:
This is the same issue as #977 and should hopefully be fixed by #1083.

My personal temporary fix for this problem is to do:

    tweet_text = result.output.split('---')[0].strip()
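
Adapted to the classifier in this issue, a minimal sketch of the same trick might look like the following (this assumes the extra pairs are separated by '---' or newlines; pred_CoT comes from the snippet above):

    # Keep only the first completion: cut at the '---' separator (if any),
    # then at the first newline, and strip surrounding whitespace.
    raw_label = pred_CoT.label
    clean_label = raw_label.split('---')[0].split('\n')[0].strip()
    print(clean_label)  # e.g. "request vehicle for home position"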

@arnavsinghvi11 (Collaborator) commented:
Seconding @tom-doerr's response here. #1083 should help resolve these parsing errors when using chat models like Llama. Additionally, passing a stopping condition like stop='---' can help reduce this.
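
For reference, a minimal sketch of wiring such a stop condition into the LM configuration (the client class and its stop parameter here are assumptions; check which options your backend actually supports):

    import dspy

    # Assumption: this client forwards `stop` to the Llama backend so that
    # generation halts at the first '---' separator between demo pairs.
    lm = dspy.OllamaLocal(model='llama3', stop='---')
    dspy.settings.configure(lm=lm)

    pred = command_classifier_CoT(text="Ask the vehicle where its home position is.")
    print(pred.label)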
