Is there a Direct API to test out questions for LLM RAG without using giskard Hub #1856

Closed
aldrinjenson opened this issue Mar 20, 2024 · 3 comments
Labels
question Further information is requested

Comments


aldrinjenson commented Mar 20, 2024

Checklist

  • I've searched the project's issues.

❓ Question

Hi, I noticed that adding domain-specific tests currently relies heavily on the Giskard Hub.
Is it possible to do this without the Hub?

Currently, to add new questions I have to build a dataframe from them in code, upload it to the Giskard Hub, then go to Testing -> Test Suite -> Add Test Suite -> choose the newly uploaded dataset -> click "Add to test suite".
Then I need to fetch that test suite from the Giskard Hub dynamically in code by its ID and run it.
This feels a bit too long-winded to me. Can we simplify it by making the Giskard Hub part optional?

For example, could I read the questions from a YAML file, perhaps with some optional configuration parameters (similar to how promptfoo does it), and have my LLM chain evaluated by Giskard on those questions?

My exact use case is that I'd prefer the test cases to be bundled with my application codebase, similar to a traditional software engineering project.
In fact, it would be really great if the questions generated by Giskard during the scan could also be exported and stored in a YAML file!

Then I could just add more questions for my custom domain-specific use cases in additional YAML files and have them tested from the CLI using Giskard/pytest, without depending on the Hub running all the time.

Please do let me know if something like this is possible.

Thank You

aldrinjenson added the question label on Mar 20, 2024
@luca-martial
Contributor

Hey @aldrinjenson, we've got plenty of options to execute your tests in code instead of on the visual interface. The main purpose of the Hub is to:

  • Collaborate with several people from different backgrounds on your test suites
  • Get smart assistance for adding additional tests that you may have missed
  • Visualize executions and compare them in a more visually-friendly way
  • Access debugging-related features
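
For reference, here's a rough sketch of what a fully in-code flow can look like (the `rag_chain` stub, the names, and the sample question are placeholders, and exact arguments may vary with your Giskard version):

```python
import pandas as pd
import giskard


def rag_chain(question: str) -> str:
    # Stand-in for your real RAG pipeline (retriever + LLM).
    return "stub answer"


def answer_fn(df: pd.DataFrame) -> list:
    # Giskard passes a DataFrame of inputs; return one answer per row.
    return [rag_chain(q) for q in df["question"]]


model = giskard.Model(
    model=answer_fn,
    model_type="text_generation",
    name="My RAG agent",
    description="Answers domain-specific questions over internal documents.",
    feature_names=["question"],
)

dataset = giskard.Dataset(
    pd.DataFrame({"question": ["How do I reset my password?"]})
)

# Run the scan locally (no Hub involved). The LLM scan calls an LLM under the
# hood, so by default an API key (e.g. OPENAI_API_KEY) needs to be configured.
scan_report = giskard.scan(model, dataset)

# Turn the findings into a test suite you can keep and re-run in code.
suite = scan_report.generate_test_suite("RAG test suite")
results = suite.run()
print(results.passed)
```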

@aldrinjenson
Author

Thank You.

I was able to integrate my code with pytest.
Now I can read questions from YAML or JSON and use Giskard to evaluate them.
Works great for my use case.
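
For anyone with a similar need, here's a minimal sketch of that kind of setup (the YAML schema, file paths, and the stub chain below are just illustrative, not anything prescribed by Giskard):

```python
# tests/test_rag.py
import pandas as pd
import yaml
import giskard


def rag_chain(question: str) -> str:
    # Stand-in for the real application chain imported from the codebase.
    return "stub answer"


def predict(df: pd.DataFrame) -> list:
    return [rag_chain(q) for q in df["question"]]


giskard_model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="RAG agent",
    description="Answers domain-specific questions.",
    feature_names=["question"],
)


def load_questions(path: str) -> pd.DataFrame:
    # questions.yaml is assumed to be a list of {"question": "..."} entries.
    with open(path) as f:
        return pd.DataFrame(yaml.safe_load(f))


def test_domain_questions():
    dataset = giskard.Dataset(load_questions("tests/questions.yaml"))
    report = giskard.scan(giskard_model, dataset)
    suite_results = report.generate_test_suite("domain questions").run()
    assert suite_results.passed
```

It runs from the CLI with plain `pytest`, so the question files live in the repo next to the rest of the test code.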

Thanks again!

@luca-martial
Contributor

Great, happy to hear that!
