k-nearest neighbor (kNN) search
A k-nearest neighbor (kNN) search finds the k nearest vectors to a query vector, as measured by a similarity metric.
Common use cases for kNN include:
- Relevance ranking based on natural language processing (NLP) algorithms
- Product recommendations and recommendation engines
- Similarity search for images or videos
Prerequisites
- To run a kNN search, you must be able to convert your data into meaningful vector values. You create these vectors outside of Elasticsearch and add them to documents as dense_vector field values. Queries are represented as vectors with the same dimension. Design your vectors so that the closer a document’s vector is to a query vector, based on a similarity metric, the better its match.
- To complete the steps in this guide, you must have the following index privileges (a sketch of a matching role follows this list):
  - create_index or manage to create an index with a dense_vector field
  - create, index, or write to add data to the index you created
  - read to search the index
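For illustration only, a role that grants these privileges on the example indices used later in this guide might look like the following sketch (the role name and index names are placeholders, not part of the official steps):

PUT _security/role/knn_guide_role
{
  "indices": [
    {
      "names": [ "my-approx-knn-index", "my-exact-knn-index" ],
      "privileges": [ "create_index", "create", "read" ]
    }
  ]
}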
kNN methods
Elasticsearch supports two methods for kNN search:
- Approximate kNN using the kNN search API. [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
- Exact, brute-force kNN using a script_score query with a vector function
In most cases, you’ll want to use approximate kNN. Approximate kNN offers lower latency at the cost of slower indexing and imperfect accuracy.
Exact, brute-force kNN guarantees accurate results but doesn’t scale well with
large datasets. With this approach, a script_score
query must scan each
matching document to compute the vector function, which can result in slow
search speeds. However, you can improve latency by using a query
to limit the number of matching documents passed to the function. If you
filter your data to a small subset of documents, you can get good search
performance using this approach.
Approximate kNN
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
To run an approximate kNN search, use the kNN search API to search a dense_vector field with indexing enabled.
- Explicitly map one or more dense_vector fields. Approximate kNN search requires the following mapping options:
  - An index value of true.
  - A similarity value. This value determines the similarity metric used to score documents based on similarity between the query and document vector. For a list of available metrics, see the similarity parameter documentation.

PUT my-approx-knn-index
{
  "mappings": {
    "properties": {
      "my-image-vector": {
        "type": "dense_vector",
        "dims": 5,
        "index": true,
        "similarity": "l2_norm"
      },
      "my-tag": {
        "type": "keyword"
      }
    }
  }
}
- Index your data.

POST my-approx-knn-index/_bulk?refresh=true
{ "index": { "_id": "1" } }
{ "my-image-vector": [230.0, 300.33, -34.8988, 15.555, -200.0], "my-tag": "cow.jpg" }
{ "index": { "_id": "2" } }
{ "my-image-vector": [-0.5, 100.0, -13.0, 14.8, -156.0], "my-tag": "moose.jpg" }
{ "index": { "_id": "3" } }
{ "my-image-vector": [0.5, 111.3, -13.0, 14.8, -156.0], "my-tag": "rabbit.jpg" }
...
- Run the search using the kNN search API.

GET my-approx-knn-index/_knn_search
{
  "knn": {
    "field": "my-image-vector",
    "query_vector": [-0.5, 90.0, -10, 14.8, -156.0],
    "k": 10,
    "num_candidates": 100
  },
  "fields": [
    "my-image-vector",
    "my-tag"
  ]
}
Support for approximate kNN search was added in version 8.0. Before this, dense_vector fields did not support enabling index in the mapping. If you created an index prior to 8.0 containing dense_vector fields, then to support approximate kNN search the data must be reindexed using a new field mapping that sets index: true.
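For example, one way to do this is to create a new index with the updated mapping and reindex the existing data into it. This is only a sketch: the destination index name my-approx-knn-index-v2 is a placeholder, and the mapping reuses the illustrative field from the examples above.

PUT my-approx-knn-index-v2
{
  "mappings": {
    "properties": {
      "my-image-vector": {
        "type": "dense_vector",
        "dims": 5,
        "index": true,
        "similarity": "l2_norm"
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "my-approx-knn-index" },
  "dest": { "index": "my-approx-knn-index-v2" }
}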
Tune approximate kNN for speed or accuracy
To gather results, the kNN search API finds a num_candidates number of approximate nearest neighbor candidates on each shard. The search computes the similarity of these candidate vectors to the query vector, selecting the k most similar results from each shard. The search then merges the results from each shard to return the global top k nearest neighbors.
You can increase num_candidates for more accurate results at the cost of slower search speeds. A search with a high value for num_candidates considers more candidates from each shard. This takes more time, but the search has a higher probability of finding the true k top nearest neighbors.
Similarly, you can decrease num_candidates for faster searches with potentially less accurate results.
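For illustration, the earlier example search could be tuned toward accuracy by raising num_candidates; the specific value below is an arbitrary starting point, not a recommendation:

GET my-approx-knn-index/_knn_search
{
  "knn": {
    "field": "my-image-vector",
    "query_vector": [-0.5, 90.0, -10, 14.8, -156.0],
    "k": 10,
    "num_candidates": 500
  },
  "fields": [ "my-tag" ]
}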
Indexing considerations
Elasticsearch shards are composed of segments, which are internal storage elements in the index. For approximate kNN search, Elasticsearch stores the dense vector values of each segment as an HNSW graph. Indexing vectors for approximate kNN search can take substantial time because of how expensive it is to build these graphs. You may need to increase the client request timeout for index and bulk requests.
Force merging the index to a single segment can improve kNN search latency. With only one segment, the search needs to check a single, all-inclusive HNSW graph. When there are multiple segments, kNN search must check several smaller HNSW graphs as it searches each segment one after another. You should only force merge an index if it is no longer being written to.
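Assuming writes to the example index have stopped, a force merge down to one segment might look like this:

POST my-approx-knn-index/_forcemerge?max_num_segments=1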
Limitations for approximate kNN search
- You can’t run an approximate kNN search on a filtered alias.
- You can’t run an approximate kNN search on a dense_vector field within a nested mapping.
- You can’t currently use the Query DSL to filter documents for an approximate kNN search. If you need to filter the documents, consider using exact kNN instead.
- Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors.
Exact kNN
To run an exact kNN search, use a script_score query with a vector function.
- Explicitly map one or more dense_vector fields. If you don’t intend to use the field for approximate kNN, omit the index mapping option or set it to false. This can significantly improve indexing speed.

PUT my-exact-knn-index
{
  "mappings": {
    "properties": {
      "my-product-vector": {
        "type": "dense_vector",
        "dims": 5,
        "index": false
      },
      "my-price": {
        "type": "long"
      }
    }
  }
}
- Index your data.

POST my-exact-knn-index/_bulk?refresh=true
{ "index": { "_id": "1" } }
{ "my-product-vector": [230.0, 300.33, -34.8988, 15.555, -200.0], "my-price": 1599 }
{ "index": { "_id": "2" } }
{ "my-product-vector": [-0.5, 100.0, -13.0, 14.8, -156.0], "my-price": 799 }
{ "index": { "_id": "3" } }
{ "my-product-vector": [0.5, 111.3, -13.0, 14.8, -156.0], "my-price": 1099 }
...
- Use the search API to run a script_score query containing a vector function.
  To limit the number of matched documents passed to the vector function, we recommend you specify a filter query in the script_score.query parameter. If needed, you can use a match_all query in this parameter to match all documents. However, matching all documents can significantly increase search latency.

GET my-exact-knn-index/_search
{
  "query": {
    "script_score": {
      "query": {
        "bool": {
          "filter": {
            "range": {
              "my-price": {
                "gte": 1000
              }
            }
          }
        }
      },
      "script": {
        "source": "cosineSimilarity(params.queryVector, 'my-product-vector') + 1.0",
        "params": {
          "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
        }
      }
    }
  }
}
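For comparison, a variant of this search that scores every document instead of a filtered subset would swap the filter for a match_all query. This is only a sketch of that trade-off; on anything beyond a small index it can significantly increase latency:

GET my-exact-knn-index/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "cosineSimilarity(params.queryVector, 'my-product-vector') + 1.0",
        "params": {
          "queryVector": [-0.5, 90.0, -10, 14.8, -156.0]
        }
      }
    }
  }
}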