
inner monologue example #610

Merged: 6 commits merged into evals_refactor on Apr 10, 2023
Conversation

@andrew-openai (Contributor) commented Apr 8, 2023

Inner monologue CoT increases 3.5 accuracy on born-first from 63% -> 93%

Comment on lines 19 to 20
self.cot_template = "\nBefore answering, I will reason in a step-by-step manner as to get the right answer, then conclude with the answer."
self.extract_answer_template = (
Reviewer (Contributor):

These two variables could come from the constructor, which would make it much more customizable.

self.extract_answer_template = (
"\nGiven the above reasoning, the answer in the format requested by the question is:"
)
self.openai_completion_fn = OpenAICompletionFn(**kwargs)
Reviewer (Contributor):

Should we allow using any CompletionFn from the registry here?

return [self.response.strip()]


class OpenAIInnerMonologueCompletionFn:
Reviewer (Contributor):

Should we name this something like OpenAIChainOfThoughtCompletionFn? Usually see this technique referred to as simply "chain of thought".

Another reviewer (Collaborator):

+1 but also lol

@jasonwei20 (Collaborator):

How do you parse the answer?

@andrew-openai (Contributor, Author):

> How do you parse the answer?

I use the model with an extraction prompt:

        self.extract_answer_template = (
            "Given the above reasoning, the answer in the format requested by the question is:"
        )
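The two-step flow described above (elicit a chain of thought, then ask the model to extract the final answer) can be sketched as follows. This is a minimal illustration, not the merged evals code: `complete` stands in for any completion function, and `fake_complete` is a toy stub so the sketch runs without an API call.

```python
# Templates quoted from the PR discussion.
COT_TEMPLATE = (
    "\nBefore answering, I will reason in a step-by-step manner as to get "
    "the right answer, then conclude with the answer."
)
EXTRACT_ANSWER_TEMPLATE = (
    "\nGiven the above reasoning, the answer in the format requested by "
    "the question is:"
)


def inner_monologue(prompt: str, complete) -> str:
    # Step 1: append the CoT template and let the model reason step by step.
    reasoning = complete(prompt + COT_TEMPLATE)
    # Step 2: feed the reasoning back with the extraction prompt so the
    # model states just the final answer.
    answer = complete(prompt + COT_TEMPLATE + reasoning + EXTRACT_ANSWER_TEMPLATE)
    return answer.strip()


# Toy completion function (assumption, for illustration only): returns
# canned reasoning, and a bare answer when the extraction prompt is seen.
def fake_complete(text: str) -> str:
    if text.endswith(EXTRACT_ANSWER_TEMPLATE):
        return " Mozart"
    return " Mozart was born in 1756; Beethoven in 1770. So Mozart was born first."


print(inner_monologue("Who was born first, Mozart or Beethoven?", fake_complete))
```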

cot_completion_fn=None,
extract_completion_fn=None,
registry_path=None,
extra_options=None,
Reviewer (Contributor):

Add type hints for these args
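The suggested type hints might look like the sketch below. The `Optional[str]`/`Dict[str, Any]` annotations are my assumptions about what these args hold (registry names and an options mapping), not the merged signature.

```python
from typing import Any, Dict, Optional


class OpenAIInnerMonologueCompletionFn:
    """Sketch only: illustrates the suggested type hints, not the merged code."""

    def __init__(
        self,
        cot_completion_fn: Optional[str] = None,      # assumed: registry name of the CoT fn
        extract_completion_fn: Optional[str] = None,  # assumed: registry name of the extract fn
        registry_path: Optional[str] = None,          # assumed: extra registry search path
        extra_options: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> None:
        self.extra_options = extra_options or {}
```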


# This model will extract the answer from the chain of thought
self.extract_answer_template = extract_answer_template
self.extract_completion_fn_instance = registry.make_completion_fn(extract_completion_fn)
Reviewer (Contributor):

For simplicity, should we just have one completion_fn variable for both cot and extract? (until there's a need for using different instances)


self.extra_options = extra_options

registry = Registry()
Reviewer (Contributor):

Ideally this would be the same instance that is created by oaieval rather than having to reconstruct it here. One way to do it could be to pass the registry through to the CompletionFn constructor as a kwarg via Registry.make_completion_fn. But then we would have to rely on correct naming, which seems fragile.

Any other ideas?

@andrew-openai andrew-openai merged commit e621b6f into evals_refactor Apr 10, 2023
@andrew-openai andrew-openai deleted the andrew/inner_monologue_example branch April 10, 2023 22:33
3 participants