AnnLite: a fast embedded library for approximate nearest neighbor search

AnnLite is a lightweight and embeddable library for fast and filterable approximate nearest neighbor search (ANNS). It lets you search for nearest neighbors in a dataset of millions of points with a Pythonic API.
Highlighted features:

- 🐥 Easy-to-use: a simple and intuitive Pythonic API that is easy to set up in production.
- 🐎 Fast: uses a highly optimized approximate nearest neighbor search algorithm (HNSW) to search for nearest neighbors.
- 🔎 Filterable: allows you to search for nearest neighbors within a subset of the dataset.
- 🍱 Integration: smooth integration with the neural search ecosystem, including Jina and DocArray, so that users can easily expose a search API over gRPC and/or HTTP.
To use AnnLite, you first need to install it. The easiest way is with pip:

```bash
pip install -U annlite
```
or install from source:

```bash
python setup.py install
```
Before you start, you should have some experience with DocArray. AnnLite is designed to be used with DocArray, so you need to know how to use DocArray first.

For example, you can create a DocumentArray with 1000 random vectors of 128 dimensions:

```python
from docarray import DocumentArray
import numpy as np

docs = DocumentArray.empty(1000)
docs.embeddings = np.random.random([1000, 128]).astype(np.float32)
```
Then you can create an AnnLite indexer to index the created docs and search for nearest neighbors:

```python
from annlite import AnnLite

ann = AnnLite(128, metric='cosine', data_path="/tmp/annlite_data")
ann.index(docs)
```

Note that this creates a directory /tmp/annlite_data to persist the indexed documents. If this directory already exists, the index is loaded from it; if you want to build a fresh index, delete the directory first.
Then you can search for the nearest neighbors of some query docs with ann.search():

```python
query = DocumentArray.empty(5)
query.embeddings = np.random.random([5, 128]).astype(np.float32)

result = ann.search(query)
```
Then you can inspect the retrieved docs for each query doc via its matches:

```python
for q in query:
    print(f'Query {q.id}')
    for k, m in enumerate(q.matches):
        print(f'{k}: {m.id} {m.scores["cosine"]}')
```

```text
Query ddbae2073416527bad66ff186543eff8
0: 47dcf7f3fdbe3f0b8d73b87d2a1b266f {'value': 0.17575037}
1: 7f2cbb8a6c2a3ec7be024b750964f317 {'value': 0.17735684}
2: 2e7eed87f45a87d3c65c306256566abb {'value': 0.17917466}
Query dda90782f6514ebe4be4705054f74452
0: 6616eecba99bd10d9581d0d5092d59ce {'value': 0.14570713}
1: d4e3147fc430de1a57c9883615c252c6 {'value': 0.15338594}
2: 5c7b8b969d4381f405b8f07bc68f8148 {'value': 0.15743542}
...
```
Or shorten the loop into a one-liner using the element and attribute selector:

```python
print(query['@m', ('id', 'scores__cosine')])
```
You can get a specific document by its id:

```python
doc = ann.get_doc_by_id('<doc_id>')
```
You can also get documents with limit and offset, which is useful for pagination:

```python
docs = ann.get_docs(limit=10, offset=0)
```
Furthermore, you can get the documents ordered by a specific column of the index:

```python
docs = ann.get_docs(limit=10, offset=0, order_by='x', ascending=True)
```

Note: the order_by column must be one of the columns in the index.
After you have indexed the docs, you can update them in the index by calling ann.update():

```python
updated_docs = docs.sample(10)
updated_docs.embeddings = np.random.random([10, 128]).astype(np.float32)

ann.update(updated_docs)
```
And finally, you can delete docs from the index by calling ann.delete():

```python
to_delete = docs.sample(10)
ann.delete(to_delete)
```
To support search with filters, AnnLite must be created with the columns parameter, which is a list of the fields you want to filter by. At query time, AnnLite filters the dataset by the conditions you provide for these fields.

```python
import annlite

# the column schema: (name:str, dtype:type, create_index: bool)
ann = annlite.AnnLite(128, columns=[('price', float)], data_path="/tmp/annlite_data")
```
Then you can insert docs, where each doc has a field price with a float value stored in its tags:

```python
import random

import numpy as np
from docarray import Document, DocumentArray

docs = DocumentArray(
    [
        Document(id=f'{i}', tags={'price': random.random()})
        for i in range(1000)
    ]
)
docs.embeddings = np.random.random([1000, 128]).astype(np.float32)

ann.index(docs)
```
Then you can search for nearest neighbors with filtering conditions:

```python
query = DocumentArray.empty(5)
query.embeddings = np.random.random([5, 128]).astype(np.float32)

ann.search(query, filter={"price": {"$lte": 50}}, limit=10)

print('the result with filtering:')
for i, q in enumerate(query):
    print(f'query [{i}]:')
    for m in q.matches:
        print(f'\t{m.id} {m.scores["euclidean"].value} (price={m.tags["price"]})')
```
The filter parameter is a dictionary of conditions: the key is a field name, and the value is a dictionary of operators and operands. The query language is the same as the MongoDB Query Language. We currently support a subset of its operators:

- `$eq` - Equal to (number, string)
- `$ne` - Not equal to (number, string)
- `$gt` - Greater than (number)
- `$gte` - Greater than or equal to (number)
- `$lt` - Less than (number)
- `$lte` - Less than or equal to (number)
- `$in` - Included in an array
- `$nin` - Not included in an array
A document matches the query when its fields satisfy the given conditions. The following is an example of a query:

- Nike shoes in white color:

```json
{ "brand": {"$eq": "Nike"}, "category": {"$eq": "Shoes"}, "color": {"$eq": "White"} }
```

We also support the boolean operators `$or` and `$and`:

```json
{ "$and": { "brand": {"$eq": "Nike"}, "category": {"$eq": "Shoes"}, "color": {"$eq": "White"} } }
```

- Nike shoes or price less than $100:

```json
{ "$or": { "brand": {"$eq": "Nike"}, "price": {"$lte": 100} } }
```
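To build intuition for how these operators select documents, here is a minimal pure-Python sketch of the operator semantics. This is a toy evaluator for illustration only, not AnnLite's actual filtering implementation:

```python
# Toy evaluator for the MongoDB-style filter subset above.
# For illustration only; AnnLite evaluates filters internally.
OPS = {
    '$eq': lambda v, x: v == x,
    '$ne': lambda v, x: v != x,
    '$gt': lambda v, x: v > x,
    '$gte': lambda v, x: v >= x,
    '$lt': lambda v, x: v < x,
    '$lte': lambda v, x: v <= x,
    '$in': lambda v, x: v in x,
    '$nin': lambda v, x: v not in x,
}

def matches(doc, flt):
    """Return True if the dict `doc` satisfies the filter `flt`."""
    for key, cond in flt.items():
        if key == '$and':
            if not all(matches(doc, {k: v}) for k, v in cond.items()):
                return False
        elif key == '$or':
            if not any(matches(doc, {k: v}) for k, v in cond.items()):
                return False
        else:
            for op, operand in cond.items():
                if not OPS[op](doc.get(key), operand):
                    return False
    return True

doc = {'brand': 'Nike', 'category': 'Shoes', 'price': 80}
print(matches(doc, {'$or': {'brand': {'$eq': 'Adidas'}, 'price': {'$lte': 100}}}))  # True
```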
By default, the HNSW index lives in memory. You can dump it to data_path by calling .dump():

```python
from annlite import AnnLite

ann = AnnLite(128, metric='cosine', data_path="/path/to/data_path")
ann.index(docs)
ann.dump()
```

You can then restore the HNSW index from data_path if it exists:

```python
new_ann = AnnLite(128, metric='cosine', data_path="/path/to/data_path")
```

If you didn't dump the HNSW index, it will be rebuilt from scratch, which can take a while.
AnnLite supports the following distance metrics:

| Distance | Parameter | Equation |
|---|---|---|
| Euclidean | `euclidean` | d = sqrt(sum((Ai-Bi)^2)) |
| Inner product | `inner_product` | d = 1.0 - sum(Ai*Bi) |
| Cosine similarity | `cosine` | d = 1.0 - sum(Ai*Bi) / sqrt(sum(Ai*Ai) * sum(Bi*Bi)) |
Note that inner product is not an actual metric: an element can be closer to some other element than to itself. That allows some speedup if you remove all elements that are not the closest to themselves from the index, e.g., inner_product([1.0, 1.0], [1.0, 1.0]) < inner_product([1.0, 1.0], [2.0, 2.0]).
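The table's equations can be written down directly. The sketch below implements the three distance functions (in the distance form d = 1 - similarity, so smaller means closer) and demonstrates the inner-product caveat numerically:

```python
import math

# Distance functions matching the table above (smaller = closer).
def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def inner_product(a, b):
    return 1.0 - sum(ai * bi for ai, bi in zip(a, b))

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    return 1.0 - dot / math.sqrt(sum(ai * ai for ai in a) * sum(bi * bi for bi in b))

a = [1.0, 1.0]
# Under inner-product "distance", [2.0, 2.0] is closer to `a` than `a` itself:
print(inner_product(a, [2.0, 2.0]) < inner_product(a, a))  # True
print(euclidean(a, a))  # 0.0
```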
The HNSW algorithm has several parameters that can be tuned to improve search performance:

- `ef_search` - The size of the dynamic list of nearest neighbors during search (default: `50`). The larger the value, the more accurate the search results, but the slower the search. `ef_search` must be larger than the `limit` parameter in `search(..., limit)`.
- `limit` - The maximum number of results to return (default: `10`).
- `max_connection` - The number of bi-directional links created for every new element during construction (default: `16`). The reasonable range is from `2` to `100`. Higher values work better for datasets with high dimensionality and/or high recall requirements. This parameter also affects memory consumption during construction, which is roughly `max_connection * 8-10` bytes per stored element. As an example, for `n_dim=4` random vectors the optimal `max_connection` for search is somewhere around `6`, while for high-dimensional datasets higher values (e.g. `max_connection=48-64`) are required for optimal performance at high recall. The range `max_connection=12-48` is fine for most use cases. When `max_connection` is changed, the other parameters have to be updated as well; nonetheless, `ef_search` and `ef_construction` can be roughly estimated by assuming that `max_connection * ef_construction` is a constant.
- `ef_construction` - The size of the dynamic list of nearest neighbors during construction (default: `200`). Higher values give better accuracy but increase construction time and memory consumption. At some point, increasing `ef_construction` does not improve accuracy any further. To set it to a reasonable value, measure the recall: if the recall is lower than 0.9, increase `ef_construction` and re-run the search.
To set these parameters, you can define them when creating the AnnLite instance:

```python
from annlite import AnnLite

ann = AnnLite(128, columns=[('price', float)], data_path="/tmp/annlite_data", ef_construction=200, max_connection=16)
```
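To measure recall as suggested above, you can compare the approximate results against exact brute-force nearest neighbors. Below is a minimal self-contained sketch; the `exact_knn` helper is a plain brute-force search written for this example, and in practice the first argument of `recall` would be the ids returned by `ann.search`:

```python
import random

def exact_knn(queries, data, k):
    """Brute-force k nearest neighbors by squared Euclidean distance."""
    results = []
    for q in queries:
        dists = [(sum((qi - xi) ** 2 for qi, xi in zip(q, x)), i)
                 for i, x in enumerate(data)]
        dists.sort()
        results.append([i for _, i in dists[:k]])
    return results

def recall(approx_ids, exact_ids):
    """Fraction of the exact neighbors that the approximate search found."""
    hits = sum(len(set(a) & set(e)) for a, e in zip(approx_ids, exact_ids))
    total = sum(len(e) for e in exact_ids)
    return hits / total

random.seed(0)
data = [[random.random() for _ in range(8)] for _ in range(100)]
queries = data[:5]
exact = exact_knn(queries, data, k=10)
# With a perfect index, recall is 1.0; tune ef_construction until recall >= 0.9.
print(recall(exact, exact))  # 1.0
```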
One can run executor/benchmark.py
to get a quick performance overview.
Stored data | Indexing time | Query size=1 | Query size=8 | Query size=64 |
---|---|---|---|---|
10000 | 2.970 | 0.002 | 0.013 | 0.100 |
100000 | 76.474 | 0.011 | 0.078 | 0.649 |
500000 | 467.936 | 0.046 | 0.356 | 2.823 |
1000000 | 1025.506 | 0.091 | 0.695 | 5.778 |
Results with filtering can be generated from examples/benchmark_with_filtering.py
. This script should produce a table similar to:
Stored data | % same filter | Indexing time | Query size=1 | Query size=8 | Query size=64 |
---|---|---|---|---|---|
10000 | 5 | 2.869 | 0.004 | 0.030 | 0.270 |
10000 | 15 | 2.869 | 0.004 | 0.035 | 0.294 |
10000 | 20 | 3.506 | 0.005 | 0.038 | 0.287 |
10000 | 30 | 3.506 | 0.005 | 0.044 | 0.356 |
10000 | 50 | 3.506 | 0.008 | 0.064 | 0.484 |
10000 | 80 | 2.869 | 0.013 | 0.098 | 0.910 |
100000 | 5 | 75.960 | 0.018 | 0.134 | 1.092 |
100000 | 15 | 75.960 | 0.026 | 0.211 | 1.736 |
100000 | 20 | 78.475 | 0.034 | 0.265 | 2.097 |
100000 | 30 | 78.475 | 0.044 | 0.357 | 2.887 |
100000 | 50 | 78.475 | 0.068 | 0.565 | 4.383 |
100000 | 80 | 75.960 | 0.111 | 0.878 | 6.815 |
500000 | 5 | 497.744 | 0.069 | 0.561 | 4.439 |
500000 | 15 | 497.744 | 0.134 | 1.064 | 8.469 |
500000 | 20 | 440.108 | 0.152 | 1.199 | 9.472 |
500000 | 30 | 440.108 | 0.212 | 1.650 | 13.267 |
500000 | 50 | 440.108 | 0.328 | 2.637 | 21.961 |
500000 | 80 | 497.744 | 0.580 | 4.602 | 36.986 |
1000000 | 5 | 1052.388 | 0.131 | 1.031 | 8.212 |
1000000 | 15 | 1052.388 | 0.263 | 2.191 | 16.643 |
1000000 | 20 | 980.598 | 0.351 | 2.659 | 21.193 |
1000000 | 30 | 980.598 | 0.461 | 3.713 | 29.794 |
1000000 | 50 | 980.598 | 0.732 | 5.975 | 47.356 |
1000000 | 80 | 1052.388 | 1.151 | 9.255 | 73.552 |
Note that:

- Query times are in seconds.
- `% same filter` indicates the percentage of data in the database that satisfies the filter. For example, if `% same filter = 10` and `Stored data = 1_000_000`, then `100_000` documents satisfy the filter.
If you already have experience with Jina and DocArray, you can start using AnnLite right away. Otherwise, you can check out the advanced tutorial to learn how to use AnnLite in practice.
1. Why should I use AnnLite?

   AnnLite is easy to use and intuitive to set up in production. It is also fast and memory-efficient, making it a great choice for approximate nearest neighbor search.
2. How do I use AnnLite with Jina?

   We have implemented an executor for AnnLite that can be used with Jina:

   ```python
   from jina import Flow

   with Flow().add(uses='jinahub://AnnLiteIndexer', uses_with={'n_dim': 128}) as f:
       f.post('/index', inputs=docs)
   ```
3. Does AnnLite support search with filters?

   Yes.
You can find the documentation on GitHub and ReadTheDocs.
We are also looking for contributors who want to help us improve: code, documentation, issues, feedback! Here is how you can get started:
- Have a look through GitHub issues labeled "Good first issue".
- Read our Contributor Covenant Code of Conduct.
- Open an issue or submit your pull request!
AnnLite is licensed under the Apache License 2.0.