
Question: Can the Haystack have variations? #44

Open
BradKML opened this issue Apr 1, 2024 · 3 comments

Comments


BradKML commented Apr 1, 2024

Since most needle-in-a-haystack tests inject a line into a pre-defined book text (which may even be part of the model's training data), it can be hypothesized that the LLM is simply "smelling" for something that does not fit the context.
So, is it possible to create a "haystack" that is a mix of multiple articles, or just a list of one-liners, such that the model cannot guess?

gkamradt (Owner) commented Apr 2, 2024

Hey! You are able to specify the needle, the question, and the background context, so you can choose whatever you want.

We did this so others could supply their own context.

BradKML (Author) commented Apr 3, 2024

What would you consider "fair" conditions for ad-hoc generation of haystack vs. needle? Are there tools to help with randomized construction of the haystack (and maybe with averaging performance across multiple runs)?

  • Word length of the needle (Single sentence vs single paragraph)
  • The size of the haystack (relative to the needle, singular book or single whole anthology)
  • The variance of the haystack (single source vs multiple source shuffled into a collection)
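As a rough illustration of the third variation (everything below is a hypothetical sketch, not a tool from this repo), shuffling sentences from multiple sources into one haystack and inserting the needle at a random depth might look like:

```python
import random

def build_haystack(sources, needle, seed=0):
    """Build a multi-source haystack with a needle at a random depth.

    sources: list of documents, each a list of sentences (hypothetical input shape).
    Returns the haystack text and the index where the needle was inserted.
    """
    rng = random.Random(seed)  # fixed seed so runs are reproducible/averageable
    # Flatten all sources into one sentence pool, then shuffle across
    # sources so no single document's narrative flow survives.
    sentences = [s for src in sources for s in src]
    rng.shuffle(sentences)
    # Insert the needle at a random position (depth) in the shuffled pool.
    depth = rng.randrange(len(sentences) + 1)
    sentences.insert(depth, needle)
    return " ".join(sentences), depth

# Toy example with two unrelated "sources":
sources = [
    ["The ship sailed at dawn.", "Storms gathered by noon."],
    ["Quarterly revenue rose.", "Margins held steady."],
]
haystack, depth = build_haystack(sources, "The secret code is 7421.")
```

Running the same seed over many needles (or many seeds per needle) would let you average retrieval accuracy by depth, as the multi-run averaging question above suggests.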

Bonus question: can this be used to evaluate FOSS models as well (esp. those without OpenAI APIs)? Would Ollama or similar do the job?

gkamradt (Owner) commented Apr 3, 2024

All those questions you asked are great research questions and I haven't seen anyone dig into them rigorously yet

Yep you can definitely test out other models
