Looking for a free, local, open-source RAG solution for running a reference library with thousands of technical PDFs and Word docs. Tried Ollama + Open WebUI and Ollama + AnythingLLM with open-source models such as Llama 3.2. As expected, the more documents we feed it, the lower the accuracy. Doing it for a bunch of senior citizens who still love geeking out.
I am a medical student with thousands of PDFs, various Anki databases, video conference recordings, audio recordings, markdown notes, etc. The tool I'm building can query all of them and return extremely high-quality output, with sources linking back to each original document.
It's still in alpha, though, and there's only about 0.5 users besides me that I know of, so there are bugs that have yet to be found!
You can use BerryDB for this use case at scale. BerryDB is a JSON-native database that can ingest PDFs, images, etc., and it has a built-in semantic layer (for labeling), so you can build your knowledge database with entities and relationships. This grounds your knowledge in entities, and accuracy scales well with a large number of documents.
It provides APIs to extract paragraphs or tables from your PDFs in bulk. You can also separately do bulk labeling (say, classification, NER, and other labeling types). Once you have a knowledge database, it creates four indexes on top of your JSON data layer: a DB index for metadata search, a full-text search index, an annotation index, and a vector index. That way you can perform any search operation, including hybrid search.
Because your data layer is JSON, you have a lot of flexibility to add new snippets of knowledge or new labels and improve accuracy over time.
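To make the hybrid-search idea concrete: it blends a keyword (full-text) score with a vector-similarity score into one ranking. This is a minimal, generic sketch, not BerryDB's actual API; the `alpha` weight and the toy term-overlap scorer are assumptions for illustration only.

```python
import math

def keyword_score(query, doc_text):
    # Toy keyword score: fraction of query terms appearing in the document.
    terms = query.lower().split()
    text = doc_text.lower()
    return sum(t in text for t in terms) / len(terms) if terms else 0.0

def cosine(a, b):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, query_vec, docs, alpha=0.5):
    # docs: list of (text, embedding) pairs.
    # Blend the two scores; alpha controls the keyword/vector balance.
    scored = []
    for text, vec in docs:
        score = alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, vec)
        scored.append((score, text))
    return [text for _, text in sorted(scored, reverse=True)]
```

In practice the keyword side would be a real full-text index (BM25 or similar) and the vectors would come from an embedding model, but the blending step works the same way.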
> expected the more documents we feed the lower the accuracy
Not surprising!
The LLM itself is the least important bit as long as it’s serviceable.
Depending on your goal you need to have a specific RAG strategy.
How are you breaking up the documents? Are the documents consistently formatted to make breaking them up uniform?
Do you need to do some preprocessing to make them uniform?
When you retrieve documents how many do you stuff into your prompt as context?
Do you stuff the same top-N chunks into a single prompt, or do you have a tailored prompt chain retrieving different resources based on the prompt and desired output?
> How are you breaking up the documents? Are the documents consistently formatted to make breaking them up uniform? Do you need to do some preprocessing to make them uniform?
> When you retrieve documents how many do you stuff into your prompt as context?
> Do you stuff the same top-N chunks into a single prompt, or do you have a tailored prompt chain retrieving different resources based on the prompt and desired output?
Wouldn't these questions be answered by the RAG solution the OP is asking for?
Yes! We can definitely help with this. Khoj lets you chat with your documents, indexing your private knowledge base for local RAG with any open-source (or foundation) model.
You can make it as 'fancy' as you want, with speech-to-text, image generation, web scraping, and custom agents.
Let me know if you run into any issues; I'd love to get this set up for senior citizens! You can reach me at saba at khoj.dev.
I would look at articles on building an open-source RAG pipeline. Generation (the model) is the last in a series of important steps; you have options to choose from in each component step (retrieval, storage, etc.), and those decisions will affect the accuracy you mention.
LangChain and LlamaIndex had good resources on building such a pipeline, last I checked.
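The series of component steps above (chunk, embed, store, retrieve, generate) can be sketched end to end. Everything here is a deliberate stand-in: the bag-of-words `embed` and the stub `generate` are assumptions meant only to show how the pieces compose; a real pipeline would swap in a sentence-embedding model, a vector store, and an LLM call.

```python
from collections import Counter
import math

def embed(text):
    # Stand-in embedding: a bag-of-words term-count vector.
    return Counter(text.lower().split())

def similarity(a, b):
    # Cosine similarity between two sparse Counter "vectors".
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyStore:
    # Stand-in vector store: a list of (chunk_text, embedding) pairs.
    def __init__(self):
        self.items = []

    def add(self, chunk):
        self.items.append((chunk, embed(chunk)))

    def retrieve(self, query, top_n=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: similarity(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:top_n]]

def generate(query, context_chunks):
    # Stand-in for the LLM call: just assemble the prompt it would receive.
    context = "\n".join(context_chunks)
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The point of the comment holds in the sketch too: `embed`, `TinyStore`, and `retrieve` are where accuracy is won or lost before `generate` ever runs.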
My sentiments exactly, and given how widespread a need RAG is, I'm extremely surprised that we don't have something solid and a clear leader in the space yet. We don't even seem to have two or three! It's "pick one of these million side projects".
I have a few tabs open that I haven't had a chance to try:
https://github.com/Mintplex-Labs/anything-llm
https://github.com/Bin-Huang/chatbox
https://github.com/saeedezzati/superpower-chatgpt