Question: Can the Haystack have variations? #44
Comments
Hey! You are able to specify the needle, the question, and the background context, so you can choose whatever you want. We did this to enable others to supply their own context.
What would you consider "fair" conditions for ad-hoc generation of haystack vs needle? Would there be tools to help with the randomized construction of the haystack (and maybe averaged performance of multiple tests)?
Bonus question: can this be used to evaluate FOSS models as well (esp. those without OpenAI APIs)? Would Ollama or similar do the job?
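One way to think about "fair" conditions is to randomize the filler ordering and the needle position on every trial, then average a score over many trials. A minimal sketch of that idea, assuming nothing about this repo's actual API (the function names, the `depth` parameter, and the `score_fn` hook are all hypothetical):

```python
import random

def build_haystack(paragraphs, needle, depth=None, rng=None):
    """Shuffle filler paragraphs and insert the needle at a given
    (or random) relative depth. All names here are hypothetical."""
    rng = rng or random.Random()
    filler = paragraphs[:]
    rng.shuffle(filler)  # fresh ordering per trial
    if depth is None:
        pos = rng.randrange(len(filler) + 1)   # random insertion point
    else:
        pos = int(depth * len(filler))         # depth in [0, 1]
    filler.insert(pos, needle)
    return "\n\n".join(filler), pos

def averaged_score(trials, paragraphs, needle, score_fn, seed=0):
    """Average a caller-supplied score (e.g. 1.0 if the model
    recalled the needle) over several randomized haystacks."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        haystack, _ = build_haystack(paragraphs, needle, rng=rng)
        scores.append(score_fn(haystack))
    return sum(scores) / trials
```

In a real run, `score_fn` would wrap the model call plus whatever grader checks the answer against the needle.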
All those questions you asked are great research questions, and I haven't seen anyone dig into them rigorously yet. Yep, you can definitely test out other models.
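For local FOSS models, Ollama does expose a REST endpoint (`POST /api/generate` on port 11434 by default) that takes a model name and a prompt. A rough sketch of driving a haystack test through it; the prompt wording is my own invention, and this assumes `ollama serve` is running with a pulled model:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def make_payload(model, haystack, question):
    """Build a single-shot prompt: haystack context followed by the question."""
    prompt = f"{haystack}\n\nBased only on the text above: {question}"
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model, haystack, question, url=OLLAMA_URL):
    """POST the payload to a locally running Ollama server and
    return the generated text from the non-streaming response."""
    data = json.dumps(make_payload(model, haystack, question)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same shape works for any model behind an HTTP endpoint; only the payload fields and response key change.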
Since most needle-in-a-haystack tests inject a line into a pre-defined book text (which may be part of the model's training data), it can be hypothesized that the LLM is simply "smelling" for something that does not fit the context.
So, is it possible to create a haystack that is a mix of multiple articles, or just a list of one-liners, so that the model cannot guess this way?
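The mixed-haystack idea above could be sketched like this: shuffle one-liners drawn from several unrelated articles together so there is no single coherent narrative for the needle to stick out of. Names and the input shape are hypothetical, not something the repo provides:

```python
import random

def mixed_haystack(articles, needle, rng=None):
    """Interleave one-liners from several articles so the needle is
    just another out-of-context line among many.

    `articles` is a list of lists of one-line facts (hypothetical input).
    """
    rng = rng or random.Random()
    lines = [line for article in articles for line in article]
    lines.append(needle)  # the needle is just another one-liner
    rng.shuffle(lines)    # shuffling destroys per-article coherence
    return "\n".join(lines)
```

Whether this actually removes the "smells wrong" signal is exactly the open research question; it only equalizes surface coherence, not topical novelty.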