New docs version (deepset-ai#159)
* new docs version

* add version manually for testing

* add more information to read.me

* documentstores as dropdown

* dorpdown for pros and cons documentstore

* fix link

* missing )

* point again to haystack master
PiffPaffM committed Sep 23, 2021
1 parent 3905847 commit d268938
Showing 44 changed files with 5,381 additions and 455 deletions.
5 changes: 4 additions & 1 deletion README.md
@@ -58,7 +58,10 @@ To preview docs that are on a non-master branch of the Haystack repo, you run th

### Updating docs after a release

When there's a new Haystack release, we need to create a directory for the new version within the local `/docs` directory. In this directory, we can write new overview and usage docs in .mdx (or manually copy over the ones from the previous version directory). Once this is done, the project will automatically fetch the reference and tutorial docs for the new version from GitHub. Bear in mind that a `menu.json` file needs to exist in every new version directory so that our Menu components know which page links to display. Additionally, the `referenceFiles` and `tutorialFiles` constants in `lib/constants` need to be updated with any new reference or tutorial docs that get created as part of a new release. **Lastly**, we have to update the constant specified in the `components/VersionSelect` component, so that we default to the new version when navigating between pages.
When there's a new Haystack release, we need to create a directory for the new version within the local `/docs` directory. In this directory, we can write new overview and usage docs in .mdx (or manually copy over the ones from the previous version directory). Once this is done, the project will automatically fetch the reference and tutorial docs for the new version from GitHub. Bear in mind that a `menu.json` file needs to exist in every new version directory so that our Menu components know which page links to display. Moreover, we need to point the links that currently point to the latest version to the new version. We do not yet have a script for this process, so you need to use the search function of your IDE. Additionally, the `referenceFiles` and `tutorialFiles` constants in `lib/constants` need to be updated with any new reference or tutorial docs that get created as part of a new release. In the [haystack](https://github.com/deepset-ai/haystack) repo, we also have to release the API and tutorial docs by copying them to a new version folder. If you want to include files from a branch other than master, follow **Preview from non-master branches**. **Lastly**, we have to update the constant specified in the `components/VersionSelect` component, so that we default to the new version when navigating between pages.
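For illustration, the updated constants in `lib/constants` might look roughly like this. This is a hypothetical sketch: the entry names below are placeholders, not the actual reference or tutorial docs.

```javascript
// Hypothetical sketch of lib/constants — the entries are placeholder names.
// In the real file these arrays are exported for use by the docs pages.
const referenceFiles = [
  'retriever_api',
  'reader_api',
  // append any reference docs added in the new release here
];

const tutorialFiles = [
  'first_qa_system',
  // append any tutorials added in the new release here
];
```
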
After releasing the docs, we need to release the benchmarks. Create a new version folder in the `benchmarks` folder and copy all the folders from `latest` into the new folder.
If you now start the local server and go to the new version, you will see the 404 page. We pull the versions from the Haystack release tags, and most likely the newest version is not released yet. Therefore, you have to add it manually to the `tagNames` array in the `getDocsVersions` function, e.g. by adding `tagNames.push('v0.10.0');`.
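As a rough sketch of this manual step — only the `tagNames.push('v0.10.0');` line comes from the instructions above; the function body and parameter are assumptions, not the actual implementation:

```javascript
// Hypothetical sketch of getDocsVersions — everything except the manual
// push is assumed for illustration.
function getDocsVersions(releaseTags) {
  // Versions are normally derived from the Haystack release tags on GitHub.
  const tagNames = releaseTags.filter((tag) => /^v\d+\.\d+(\.\d+)?$/.test(tag));
  // The newest docs version usually has no release tag yet, so until the
  // release is tagged we add it manually:
  tagNames.push('v0.10.0');
  return tagNames;
}
```

Remember to remove the manual push once the release tag actually exists, or the version will be listed twice.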


## Styling

40 changes: 40 additions & 0 deletions benchmarks/latest/map/retriever_map.json
@@ -159,6 +159,46 @@
"model": "BM25 / Elasticsearch",
"n_docs": 1000,
"map": 74.20444712972909
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 10000,
"map": 89.8709701490436
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 100000,
"map": 86.54014997282701
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 1000,
"map": 92.76308330349686
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 10000,
"map": 89.00403653862938
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 100000,
"map": 85.7342431384476
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 500000,
"map": 80.85588135082547
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 500000,
"map": 77.5426462347698
}
]
}
14 changes: 14 additions & 0 deletions benchmarks/latest/performance/retriever_performance.json
@@ -69,6 +69,14 @@
"index_speed": 115.61076852516383,
"query_speed": 38.80526238789059,
"map": 81.63864883662649
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 100000,
"index_speed": 70.05381128388427,
"query_speed": 15.306895223372484,
"map": 86.54014997282701
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 100000,
"index_speed": 70.31004397719536,
"query_speed": 24.95733865947408,
"map": 85.7342431384476
}
]
}
40 changes: 40 additions & 0 deletions benchmarks/latest/speed/retriever_speed.json
@@ -159,6 +159,46 @@
"model": "BM25 / Elasticsearch",
"n_docs": 1000,
"query_speed": 282.95914917837337
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 1000,
"query_speed": 29.061163356184426
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 10000,
"query_speed": 24.834414667596725
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 100000,
"query_speed": 15.306895223372484
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 1000,
"query_speed": 29.10621389658101
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 10000,
"query_speed": 26.92417300437131
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 100000,
"query_speed": 24.95733865947408
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 500000,
"query_speed": 11.33271222977541
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 500000,
"query_speed": 24.13921492357397
}
]
}
204 changes: 204 additions & 0 deletions benchmarks/v0.10.0/map/retriever_map.json
@@ -0,0 +1,204 @@
{
"chart_type": "LineChart",
"title": "Retriever Accuracy",
"subtitle": "mAP at different number of docs",
"description": "Here you can see how the mean avg. precision (mAP) of the retriever decays as the number of documents increases. The set up is the same as the above querying benchmark except that a varying number of negative documents are used to fill the document store.",
"columns": [
"n_docs",
"BM25 / Elasticsearch",
"DPR / Elasticsearch",
"DPR / FAISS (flat)",
"DPR / FAISS (HNSW)",
"DPR / Milvus (flat)",
"DPR / Milvus (HNSW)",
"DPR / OpenSearch (flat)",
"DPR / OpenSearch (HNSW)",
"Sentence Transformers / Elasticsearch"
],
"axis": [
{
"x": "Number of docs",
"y": "mAP"
}
],
"data": [
{
"model": "DPR / Elasticsearch",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / Elasticsearch",
"n_docs": 10000,
"map": 89.87097014904354
},
{
"model": "BM25 / Elasticsearch",
"n_docs": 10000,
"map": 66.26543444531747
},
{
"model": "Sentence Transformers / Elasticsearch",
"n_docs": 1000,
"map": 90.06638620360428
},
{
"model": "Sentence Transformers / Elasticsearch",
"n_docs": 10000,
"map": 87.11255142468549
},
{
"model": "DPR / FAISS (flat)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / FAISS (flat)",
"n_docs": 10000,
"map": 89.87097014904354
},
{
"model": "DPR / FAISS (HNSW)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / FAISS (HNSW)",
"n_docs": 10000,
"map": 89.51337675393017
},
{
"model": "DPR / Milvus (flat)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / Milvus (flat)",
"n_docs": 10000,
"map": 89.87097014904354
},
{
"model": "DPR / Milvus (HNSW)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / Milvus (HNSW)",
"n_docs": 10000,
"map": 88.24421129104469
},
{
"model": "DPR / Elasticsearch",
"n_docs": 100000,
"map": 86.54606328368976
},
{
"model": "DPR / Elasticsearch",
"n_docs": 500000,
"map": 80.86137228234091
},
{
"model": "BM25 / Elasticsearch",
"n_docs": 100000,
"map": 56.25299537353825
},
{
"model": "BM25 / Elasticsearch",
"n_docs": 500000,
"map": 45.595090262466535
},
{
"model": "Sentence Transformers / Elasticsearch",
"n_docs": 100000,
"map": 82.74686664920836
},
{
"model": "Sentence Transformers / Elasticsearch",
"n_docs": 500000,
"map": 76.49564526892904
},
{
"model": "DPR / FAISS (flat)",
"n_docs": 100000,
"map": 86.54606328368973
},
{
"model": "DPR / FAISS (flat)",
"n_docs": 500000,
"map": 80.86137228234091
},
{
"model": "DPR / FAISS (HNSW)",
"n_docs": 100000,
"map": 84.33419639513305
},
{
"model": "DPR / FAISS (HNSW)",
"n_docs": 500000,
"map": 75.73062475537202
},
{
"model": "DPR / Milvus (flat)",
"n_docs": 100000,
"map": 86.54606328368973
},
{
"model": "DPR / Milvus (flat)",
"n_docs": 500000,
"map": 80.86137228234091
},
{
"model": "DPR / Milvus (HNSW)",
"n_docs": 100000,
"map": 81.63864883662649
},
{
"model": "DPR / Milvus (HNSW)",
"n_docs": 500000,
"map": 73.57986207906387
},
{
"model": "BM25 / Elasticsearch",
"n_docs": 1000,
"map": 74.20444712972909
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 1000,
"map": 92.95105322830891
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 10000,
"map": 89.8709701490436
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 100000,
"map": 86.54014997282701
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 1000,
"map": 92.76308330349686
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 10000,
"map": 89.00403653862938
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 100000,
"map": 85.7342431384476
},
{
"model": "DPR / OpenSearch (flat)",
"n_docs": 500000,
"map": 80.85588135082547
},
{
"model": "DPR / OpenSearch (HNSW)",
"n_docs": 500000,
"map": 77.5426462347698
}
]
}
44 changes: 44 additions & 0 deletions benchmarks/v0.10.0/performance/reader_performance.json
@@ -0,0 +1,44 @@
{
"chart_type": "BarChart",
"title": "Reader Performance",
"subtitle": "Time and Accuracy Benchmarks",
"description": "Performance benchmarks of different Readers that can be used off-the-shelf in Haystack. Some models are geared towards speed, while others are more performance-focused. Accuracy is measured as F1 score and speed as passages/sec (with passages of 384 tokens). Each Reader is benchmarked using the SQuAD v2.0 development set, which contains 11866 question answer pairs. When tokenized using the BERT tokenizer and split using a sliding window approach, these become 12350 passages that are passed into the model. We set <i>max_seq_len=384</i> and <i>doc_stride=128</i>. These benchmarking tests are run using an AWS p3.2xlarge instance with a Nvidia V100 GPU with this <a href='https://github.com/deepset-ai/haystack/blob/master/test/benchmarks/reader.py'>script</a>. Please note that we are using the FARMReader class rather than the TransformersReader class. Also, the F1 measure that is reported here is in fact calculated on token level, rather than word level as is done in the official SQuAD script.",
"bars": "horizontal",
"columns": [
"Model",
"F1",
"Speed (passages/sec)"
],
"data": [
{
"F1": 82.58860575299658,
"Speed": 125.81040525892848,
"Model": "RoBERTa"
},
{
"F1": 78.87858491007042,
"Speed": 260.6443097981493,
"Model": "MiniLM"
},
{
"F1": 74.31182400443286,
"Speed": 121.08066567525722,
"Model": "BERT base"
},
{
"F1": 83.26306774734308,
"Speed": 42.21949937744112,
"Model": "BERT large"
},
{
"F1": 84.50422699207468,
"Speed": 42.07400844838985,
"Model": "XLM-RoBERTa"
},
{
"F1": 42.31925844723574,
"Speed": 222.91207128366702,
"Model": "DistilBERT"
}
]
}
