You can think of the DocumentStore as a "database" that:
- stores your texts and metadata
- provides them to the retriever at query time
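The role described above can be sketched in a few lines of plain Python. This is a toy illustration only, not Haystack's actual implementation — `ToyDocumentStore` and its word-overlap matching are made up for this sketch:

```python
# A toy sketch of what a DocumentStore does: hold texts plus metadata
# and hand matching documents to a retriever at query time.
class ToyDocumentStore:
    def __init__(self):
        self.documents = []

    def write_documents(self, dicts):
        # Each document is a dict with a 'text' and an optional 'meta' field
        self.documents.extend(dicts)

    def get_candidates(self, query):
        # A real retriever would use BM25 or embeddings; here we just
        # return documents that share at least one word with the query.
        query_words = set(query.lower().split())
        return [d for d in self.documents
                if query_words & set(d["text"].lower().split())]

store = ToyDocumentStore()
store.write_documents([
    {"text": "Haystack builds search systems", "meta": {"name": "doc1"}},
    {"text": "FAISS indexes dense vectors", "meta": {"name": "doc2"}},
])
print([d["meta"]["name"] for d in store.get_candidates("dense vectors")])
# → ['doc2']
```

The real DocumentStores below expose the same basic contract (`write_documents()` plus retrieval access), backed by an actual database or index.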
There are different DocumentStores in Haystack to fit different use cases and tech stacks.
Initialising a new DocumentStore within Haystack is straightforward.
Elasticsearch
If you have Docker set up, we recommend pulling the Docker image and running it:

```
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.2
docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.9.2
```

Next you can initialize the Haystack object that will connect to this instance:

```python
from haystack.document_store import ElasticsearchDocumentStore

document_store = ElasticsearchDocumentStore()
```
If you are using Open Distro for Elasticsearch, use the `OpenDistroElasticsearchDocumentStore` class instead.
You can initialize the Haystack object that will connect to a running Milvus instance as follows:

```python
from haystack.document_store import MilvusDocumentStore

document_store = MilvusDocumentStore()
```
```python
from haystack.document_store import FAISSDocumentStore

document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")
```

FAISS document stores can be saved to disk and reloaded:

```python
from haystack.document_store import FAISSDocumentStore

document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")

# Generates two files: my_faiss_index.faiss and my_faiss_index.json
document_store.save("my_faiss_index.faiss")

# Looks for the two files generated above
new_document_store = FAISSDocumentStore.load("my_faiss_index.faiss")

assert new_document_store.faiss_index_factory_str == "Flat"
```

While `my_faiss_index.faiss` contains the index, `my_faiss_index.json` contains the parameters used to initialize it (like `faiss_index_factory_str`). This configuration file is necessary for `load()` to work. It simply contains the initial parameters in JSON format. For example, a hand-written configuration file for the above FAISS index could look like:

```json
{
    "faiss_index_factory_str": "Flat"
}
```
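To illustrate the round trip, the configuration file can be written and read back with the standard `json` module. This is only a sketch of the file format — in practice the file is generated by `save()`, not by hand; the file name follows the example above:

```python
import json

# Sketch of the configuration file that sits next to the FAISS index;
# it records the initial parameters so load() can restore them.
config = {"faiss_index_factory_str": "Flat"}

with open("my_faiss_index.json", "w") as f:
    json.dump(config, f)

# load() reads this file back to recover the initial parameters
with open("my_faiss_index.json") as f:
    loaded = json.load(f)

print(loaded["faiss_index_factory_str"])  # → Flat
```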
```python
from haystack.document_store import InMemoryDocumentStore

document_store = InMemoryDocumentStore()
```
```python
from haystack.document_store import SQLDocumentStore

document_store = SQLDocumentStore()
```
You can start a Weaviate instance via Docker:

```
docker run -d -p 8080:8080 --env AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED='true' --env PERSISTENCE_DATA_PATH='/var/lib/weaviate' semitechnologies/weaviate:1.4.0
```

Afterwards, you can initialize the Haystack object that will connect to this instance:

```python
from haystack.document_store import WeaviateDocumentStore

document_store = WeaviateDocumentStore()
```

Each DocumentStore constructor allows for arguments specifying how to connect to existing databases and the names of indexes. See the API documentation for more info.
DocumentStores expect Documents in dictionary form, like the example below. They are loaded using the `DocumentStore.write_documents()` method. See Preprocessing for more information on the cleaning and splitting steps that will help you maximize Haystack's performance.
```python
from haystack.document_store import ElasticsearchDocumentStore

document_store = ElasticsearchDocumentStore()

dicts = [
    {
        'text': DOCUMENT_TEXT_HERE,
        'meta': {'name': DOCUMENT_NAME, ...}
    }, ...
]

document_store.write_documents(dicts)
```
Haystack allows you to write documents to the DocumentStore in an optimised fashion so that query times can be kept low.
For sparse, keyword-based retrievers such as BM25 and TF-IDF, you simply have to call `DocumentStore.write_documents()`. The creation of the inverted index which optimises querying speed is handled automatically.

```python
document_store.write_documents(dicts)
```
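To see what such an inverted index does, here is a minimal sketch in plain Python — a toy illustration of the idea, not Elasticsearch's actual data structure:

```python
from collections import defaultdict

def build_inverted_index(docs):
    # Map each term to the ids of the documents that contain it --
    # this is what makes keyword lookups fast at query time: instead of
    # scanning every document, the retriever looks up each query term.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

docs = {
    1: "Haystack scales search",
    2: "Sparse retrieval uses keyword search",
}
index = build_inverted_index(docs)
print(sorted(index["search"]))   # → [1, 2]
print(sorted(index["keyword"]))  # → [2]
```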
For dense neural network based retrievers like Dense Passage Retrieval or Embedding Retrieval, indexing involves computing the Document embeddings which will be compared against the Query embedding. The storing of the text is handled by `DocumentStore.write_documents()` and the computation of the embeddings is started by `DocumentStore.update_embeddings()`.

```python
document_store.write_documents(dicts)
document_store.update_embeddings(retriever)
```
This step is computationally intensive since it will engage the transformer-based encoders. Having GPU acceleration will significantly speed this up.
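At query time, the precomputed document embeddings are compared against the query embedding via a similarity score. A toy sketch with made-up three-dimensional vectors — real encoders such as DPR produce much higher-dimensional vectors (e.g. 768), and real stores use ANN indexes rather than a full sort:

```python
def dot(a, b):
    # Dot product as a simple similarity score between two vectors
    return sum(x * y for x, y in zip(a, b))

# Toy "embeddings" computed ahead of time by update_embeddings()
doc_embeddings = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.8, 0.3],
}
query_embedding = [0.2, 0.9, 0.2]

# Rank documents by similarity to the query embedding
ranked = sorted(doc_embeddings,
                key=lambda d: dot(doc_embeddings[d], query_embedding),
                reverse=True)
print(ranked)  # → ['doc2', 'doc1']
```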
The DocumentStores have different characteristics. You should choose one depending on the maturity of your project, the use case, and the technical environment:
Elasticsearch
Pros:
- Fast & accurate sparse retrieval with many tuning options
- Basic support for dense retrieval
- Production-ready
- Support also for Open Distro

Cons:
- Slow for dense retrieval with more than ~1 million documents
Milvus

Pros:
- Scalable DocumentStore that excels at handling vectors (hence suited to dense retrieval methods like DPR)
- Encapsulates multiple ANN libraries (e.g. FAISS and ANNOY) and provides added reliability
- Runs as a separate service (e.g. a Docker container)
- Allows dynamic data management

Cons:
- No efficient sparse retrieval
FAISS

Pros:
- Fast & accurate dense retrieval
- Highly scalable due to approximate nearest neighbour algorithms (ANN)
- Many options to tune dense retrieval via different index types (more info here)

Cons:
- No efficient sparse retrieval
In Memory

Pros:
- Simple
- Exists already in many environments

Cons:
- Only compatible with minimal TF-IDF Retriever
- Bad retrieval performance
- Not recommended for production
SQL

Pros:
- Simple & fast to test
- No database requirements
- Supports MySQL, PostgreSQL and SQLite

Cons:
- Not scalable
- Not persisting your data on disk
Weaviate

Pros:
- Simple vector search
- Stores everything in one place: documents, metadata and vectors - so less network overhead when scaling this up
- Allows combination of vector search and scalar filtering, i.e. you can filter for a certain tag and do dense retrieval on that subset

Cons:
- Less options for ANN algorithms than FAISS or Milvus
- No BM25 / TF-IDF retrieval
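The combination of scalar filtering and vector search mentioned above can be sketched in plain Python — toy data and a bare dot product, not Weaviate's actual API:

```python
def dot(a, b):
    # Dot product as a simple similarity score between two vectors
    return sum(x * y for x, y in zip(a, b))

documents = [
    {"name": "a", "tag": "news", "vector": [0.9, 0.1]},
    {"name": "b", "tag": "blog", "vector": [0.2, 0.9]},
    {"name": "c", "tag": "news", "vector": [0.3, 0.8]},
]
query_vector = [0.1, 1.0]

# 1) Scalar filter: keep only documents with a certain tag
subset = [d for d in documents if d["tag"] == "news"]

# 2) Dense retrieval on that subset: pick the most similar vector
best = max(subset, key=lambda d: dot(d["vector"], query_vector))
print(best["name"])  # → c
```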
- Restricted environment: Use the `InMemoryDocumentStore`, if you are just giving Haystack a quick try on a small sample and are working in a restricted environment that complicates running Elasticsearch or other databases
- Allrounder: Use the `ElasticsearchDocumentStore`, if you want to evaluate the performance of different retrieval options (dense vs. sparse) and are aiming for a smooth transition from PoC to production
- Vector Specialist: Use the `MilvusDocumentStore`, if you want to focus on dense retrieval and possibly deal with larger datasets