Is there a direct API to test out questions for LLM RAG without using Giskard Hub? #1856
Closed
Labels: question (further information is requested)
Comments
Hey @aldrinjenson we've got plenty of options to execute your tests in code instead of on the visual interface. The main purpose of the Hub is to:
Thank you. I was able to integrate my code with pytest. Thanks again!
Great, happy to hear that!
❓ Question
Hi, I noticed that in order to add domain-specific tests, we currently make heavy use of Giskard Hub.
Is it possible to do it without the hub?
Currently, in order to add new questions, I have to build a DataFrame out of them in code, upload it to Giskard Hub, then go to Testing -> Test Suite -> Add Test Suite -> choose the newly uploaded dataset -> click Add to Test Suite.
Then I'd need to fetch this test suite from Giskard Hub dynamically in code by its ID and run it.
Personally, this seems a bit too long-winded to me. Could this be simplified by optionally removing the Giskard Hub part?
e.g. maybe I could read questions from a YAML file, with some optional configuration parameters (similar to how promptfoo does it), and have my LLM chain be evaluated by Giskard on those questions?
My exact use case is that I'd prefer the test cases to be bundled together with my application codebase, similar to how it's done in a traditional software engineering project.
In fact, it'd be really great if we could also export the questions generated by Giskard during the scan to a YAML file as well!
Then I could just add more questions for my custom domain-specific use cases in other YAML files and have them tested from the CLI using Giskard/pytest, without depending on the Hub running all the time.
Please do let me know if something like this is possible.
Thank you!
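For reference, the file-driven workflow described above can be approximated today without any Hub involvement. The sketch below is purely illustrative: `load_questions`, `contains_keywords`, and `run_suite` are hypothetical helper names, not part of the giskard API, and it uses JSON instead of YAML only to stay dependency-free (swapping in `yaml.safe_load` is a one-line change).

```python
# Hypothetical sketch: keep domain-specific test questions in a plain file
# next to the application code and evaluate them in-process, with no Hub.
# All names here are illustrative assumptions, not part of the giskard API.
import json
from pathlib import Path


def load_questions(path):
    """Load a list of {"question": ..., "expected_keywords": [...]} records.

    JSON keeps this sketch free of third-party dependencies; a
    promptfoo-style YAML file would use yaml.safe_load instead.
    """
    return json.loads(Path(path).read_text())


def contains_keywords(answer, keywords):
    """Trivial stand-in evaluator: every expected keyword must appear."""
    lowered = answer.lower()
    return all(k.lower() in lowered for k in keywords)


def run_suite(questions, answer_fn):
    """Feed each question to the LLM chain and collect pass/fail results.

    answer_fn stands in for the user's chain, e.g. a function wrapping
    a RAG pipeline that returns an answer string for a question.
    """
    return {
        q["question"]: contains_keywords(answer_fn(q["question"]),
                                         q["expected_keywords"])
        for q in questions
    }
```

From here, each record could also be turned into an individual pytest case with `@pytest.mark.parametrize`, which fits the "run from CLI with pytest" goal mentioned above.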