Retriever

A retriever is a specification that describes the top documents returned from a search. A retriever replaces other elements of the search API that also return top documents, such as query and knn. A retriever may have child retrievers; a retriever with two or more children is considered a compound retriever. This allows complex behavior to be expressed in a tree-like structure, called the retriever tree, which clarifies the order of operations that occur during a search.

Refer to Retrievers for a high-level overview of the retrievers abstraction.

The following retrievers are available:
- standard: A retriever that replaces the functionality of a traditional query.
- knn: A retriever that replaces the functionality of a kNN search.
- rrf: A retriever that produces top documents from reciprocal rank fusion (RRF).
- text_similarity_reranker: A retriever that enhances search results by re-ranking documents based on semantic similarity to a specified inference text, using a machine learning model.
Standard Retriever

A standard retriever returns top documents from a traditional query.

Parameters:

- query: (Optional, query object) Defines a query to retrieve a set of top documents.
- filter: (Optional, query object or list of query objects) Applies a boolean query filter to this retriever; all documents must match this query, but it does not contribute to the score.
- search_after: (Optional, search after object) Defines a search after object parameter used for pagination.
- terminate_after: (Optional, integer) Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Use with caution: Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
- sort: (Optional, sort object) A sort object that specifies the order of matching documents.
- min_score: (Optional, float) Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.
- collapse: (Optional, collapse object) Collapses the top documents by a specified key into a single top document per key.
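
The following sketch (not from the original reference) illustrates how a standard retriever might combine sort and search_after for pagination; the rating field and the sort values shown are hypothetical:

resp = client.search(
    index="restaurants",
    size=10,
    retriever={
        "standard": {
            "query": { "match": { "region": "Austria" } },
            "sort": [
                { "rating": "desc" },    # hypothetical numeric field used as the primary sort key
                { "_id": "asc" }         # tiebreaker so search_after values are unambiguous
            ],
            "search_after": [ 4.5, "restaurant-42" ]    # sort values taken from the last hit of the previous page
        }
    },
)
print(resp)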
Restrictions

When a retriever tree contains a compound retriever (a retriever with two or more child retrievers), the search_after parameter is not supported.
Example

resp = client.search(
    index="restaurants",
    retriever={
        "standard": {
            "query": {
                "bool": {
                    "should": [
                        { "match": { "region": "Austria" } }
                    ],
                    "filter": [
                        { "term": { "year": "2019" } }
                    ]
                }
            }
        }
    },
)
print(resp)
const response = await client.search({ index: "restaurants", retriever: { standard: { query: { bool: { should: [ { match: { region: "Austria", }, }, ], filter: [ { term: { year: "2019", }, }, ], }, }, }, }, }); console.log(response);
GET /restaurants/_search { "retriever": { "standard": { "query": { "bool": { "should": [ { "match": { "region": "Austria" } } ], "filter": [ { "term": { "year": "2019" } } ] } } } } }
kNN Retriever

A kNN retriever returns top documents from a k-nearest neighbor search (kNN).

Parameters

- field: (Required, string) The name of the vector field to search against. Must be a dense_vector field with indexing enabled.
- query_vector: (Required if query_vector_builder is not defined, array of float) Query vector. Must have the same number of dimensions as the vector field you are searching against. Must be either an array of floats or a hex-encoded byte vector.
- query_vector_builder: (Required if query_vector is not defined, query vector builder object) Defines a model to build a query vector.
- k: (Required, integer) Number of nearest neighbors to return as top hits. This value must be fewer than or equal to num_candidates.
- num_candidates: (Required, integer) The number of nearest neighbor candidates to consider per shard. Needs to be greater than k, or size if k is omitted, and cannot exceed 10,000. Elasticsearch collects num_candidates results from each shard, then merges them to find the top k results. Increasing num_candidates tends to improve the accuracy of the final k results. Defaults to Math.min(1.5 * k, 10_000).
- filter: (Optional, query object or list of query objects) Query to filter the documents that can match. The kNN search will return the top k documents that also match this filter. The value can be a single query or a list of queries. If filter is not provided, all documents are allowed to match.
- similarity: (Optional, float) The minimum similarity required for a document to be considered a match. The value is expressed in terms of the raw vector similarity, not the document score. Matched documents are then scored according to similarity and the provided boost is applied. The similarity parameter is the direct vector similarity calculation:
  - l2_norm: also known as Euclidean; includes documents where the vector lies within the dims-dimensional hypersphere of radius similarity centered at query_vector.
  - cosine, dot_product, and max_inner_product: only return vectors where the cosine similarity or dot product is at least the provided similarity.
  Read more here: knn similarity search
Restrictions

The parameters query_vector and query_vector_builder cannot be used together.
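
The following sketch (not from the original reference) illustrates how a kNN retriever might use query_vector_builder to build the query vector at search time from a deployed text embedding model; the model ID and query text are hypothetical, and the model's output dimensions must match the vector field:

resp = client.search(
    index="restaurants",
    retriever={
        "knn": {
            "field": "vector",
            "query_vector_builder": {
                "text_embedding": {
                    "model_id": "my-text-embedding-model",    # hypothetical deployed text embedding model
                    "model_text": "Austrian fine dining"      # text that is embedded into the query vector
                }
            },
            "k": 10,
            "num_candidates": 50
        }
    },
)
print(resp)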
Example

resp = client.search(
    index="restaurants",
    retriever={
        "knn": {
            "field": "vector",
            "query_vector": [ 10, 22, 77 ],
            "k": 10,
            "num_candidates": 10
        }
    },
)
print(resp)
const response = await client.search({ index: "restaurants", retriever: { knn: { field: "vector", query_vector: [10, 22, 77], k: 10, num_candidates: 10, }, }, }); console.log(response);
GET /restaurants/_search { "retriever": { "knn": { "field": "vector", "query_vector": [10, 22, 77], "k": 10, "num_candidates": 10 } } }
In this example:
- The knn object configures a k-nearest neighbor (kNN) search, which is based on vector similarity.
- field specifies the field name that contains the vectors.
- query_vector is the query vector against which document vectors are compared in the kNN search.
- k is the number of nearest neighbors to return as top hits; this value must be fewer than or equal to num_candidates.
- num_candidates sets the size of the initial candidate set from which the final k nearest neighbors are chosen.
RRF Retriever

An RRF retriever returns top documents based on the RRF formula, equally weighting two or more child retrievers. Reciprocal rank fusion (RRF) is a method for combining multiple result sets with different relevance indicators into a single result set.
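
As an informal illustration of the idea behind the RRF formula (this is not an Elasticsearch API call), each document's fused score is the sum, over every child retriever that returned it, of 1 / (rank_constant + rank):

# Illustrative sketch of reciprocal rank fusion.
# Each ranked result set contributes 1 / (rank_constant + rank) to a document's score.
def rrf(result_sets, rank_constant=60):
    scores = {}
    for results in result_sets:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (rank_constant + rank)
    # Sort documents by fused score, highest first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Example: fusing a lexical result set and a vector result set.
print(rrf([["doc-3", "doc-1", "doc-2"], ["doc-1", "doc-4", "doc-3"]]))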
Parameters

- retrievers: (Required, array of retriever objects) A list of child retrievers specifying which sets of returned top documents will have the RRF formula applied to them. Each child retriever carries an equal weight as part of the RRF formula. Two or more child retrievers are required.
- rank_constant: (Optional, integer) This value determines how much influence documents in individual result sets per query have over the final ranked result set. A higher value indicates that lower ranked documents have more influence. This value must be greater than or equal to 1. Defaults to 60.
- rank_window_size: (Optional, integer) This value determines the size of the individual result sets per query. A higher value will improve result relevance at the cost of performance. The final ranked result set is pruned down to the search request's size. rank_window_size must be greater than or equal to size and greater than or equal to 1. Defaults to the size parameter.
- filter: (Optional, query object or list of query objects) Applies the specified boolean query filter to all of the specified sub-retrievers, according to each retriever's specifications.
Example: Hybrid search

A simple hybrid search example (lexical search + dense vector search) combining a standard retriever with a knn retriever using RRF:

resp = client.search(
    index="restaurants",
    retriever={
        "rrf": {
            "retrievers": [
                {
                    "standard": {
                        "query": {
                            "multi_match": {
                                "query": "Austria",
                                "fields": [ "city", "region" ]
                            }
                        }
                    }
                },
                {
                    "knn": {
                        "field": "vector",
                        "query_vector": [ 10, 22, 77 ],
                        "k": 10,
                        "num_candidates": 10
                    }
                }
            ],
            "rank_constant": 1,
            "rank_window_size": 50
        }
    },
)
print(resp)
const response = await client.search({ index: "restaurants", retriever: { rrf: { retrievers: [ { standard: { query: { multi_match: { query: "Austria", fields: ["city", "region"], }, }, }, }, { knn: { field: "vector", query_vector: [10, 22, 77], k: 10, num_candidates: 10, }, }, ], rank_constant: 1, rank_window_size: 50, }, }, }); console.log(response);
GET /restaurants/_search { "retriever": { "rrf": { "retrievers": [ { "standard": { "query": { "multi_match": { "query": "Austria", "fields": [ "city", "region" ] } } } }, { "knn": { "field": "vector", "query_vector": [10, 22, 77], "k": 10, "num_candidates": 10 } } ], "rank_constant": 1, "rank_window_size": 50 } } }
In this example:
- The rrf object defines a retriever tree with an RRF retriever.
- retrievers is the sub-retriever array.
- The first sub-retriever is a standard retriever.
- The second sub-retriever is a knn retriever.
- rank_constant is the rank constant for the RRF retriever.
- rank_window_size is the rank window size for the RRF retriever.
Example: Hybrid search with sparse vectors

A more complex hybrid search example (lexical search + ELSER sparse vector search + dense vector search) using RRF:

resp = client.search(
    index="movies",
    retriever={
        "rrf": {
            "retrievers": [
                {
                    "standard": {
                        "query": {
                            "sparse_vector": {
                                "field": "plot_embedding",
                                "inference_id": "my-elser-model",
                                "query": "films that explore psychological depths"
                            }
                        }
                    }
                },
                {
                    "standard": {
                        "query": {
                            "multi_match": {
                                "query": "crime",
                                "fields": [ "plot", "title" ]
                            }
                        }
                    }
                },
                {
                    "knn": {
                        "field": "vector",
                        "query_vector": [ 10, 22, 77 ],
                        "k": 10,
                        "num_candidates": 10
                    }
                }
            ]
        }
    },
)
print(resp)
const response = await client.search({ index: "movies", retriever: { rrf: { retrievers: [ { standard: { query: { sparse_vector: { field: "plot_embedding", inference_id: "my-elser-model", query: "films that explore psychological depths", }, }, }, }, { standard: { query: { multi_match: { query: "crime", fields: ["plot", "title"], }, }, }, }, { knn: { field: "vector", query_vector: [10, 22, 77], k: 10, num_candidates: 10, }, }, ], }, }, }); console.log(response);
GET movies/_search { "retriever": { "rrf": { "retrievers": [ { "standard": { "query": { "sparse_vector": { "field": "plot_embedding", "inference_id": "my-elser-model", "query": "films that explore psychological depths" } } } }, { "standard": { "query": { "multi_match": { "query": "crime", "fields": [ "plot", "title" ] } } } }, { "knn": { "field": "vector", "query_vector": [10, 22, 77], "k": 10, "num_candidates": 10 } } ] } } }
Text Similarity Re-ranker Retriever

The text_similarity_reranker retriever uses an NLP model to improve search results by reordering the top-k documents based on their semantic similarity to the query.

Refer to Semantic re-ranking for a high-level overview of semantic re-ranking.

Prerequisites

To use text_similarity_reranker you must first set up a rerank task using the Create inference API.

The rerank task should be set up with a machine learning model that can compute text similarity. Refer to the Elastic NLP model reference for a list of third-party text similarity models supported by Elasticsearch.

Currently you can:

- Integrate directly with the Cohere Rerank inference endpoint using the rerank task type.
- Integrate directly with the Google Vertex AI inference endpoint using the rerank task type.
- Upload a model to Elasticsearch with Eland using the text_similarity NLP task type, then set up an Elasticsearch service inference endpoint with the rerank task type. Refer to the example on this page for a step-by-step guide.
Parameters

- retriever: (Required, retriever) The child retriever that generates the initial set of top documents to be re-ranked.
- field: (Required, string) The document field to be used for text similarity comparisons. This field should contain the text that will be evaluated against the inference_text.
- inference_id: (Required, string) Unique identifier of the inference endpoint created using the inference API.
- inference_text: (Required, string) The text snippet used as the basis for similarity comparison.
- rank_window_size: (Optional, int) The number of top documents to consider in the re-ranking process. Defaults to 10.
- min_score: (Optional, float) Sets a minimum threshold score for including documents in the re-ranked results. Documents with similarity scores below this threshold will be excluded. Note that score calculations vary depending on the model used.
- filter: (Optional, query object or list of query objects) Applies the specified boolean query filter to the child retriever. If the child retriever already specifies any filters, then this top-level filter is applied in conjunction with the filter defined in the child retriever.
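
The following sketch (not from the original reference) illustrates how the filter parameter might be combined with a text_similarity_reranker retriever; the index, fields, and inference ID are hypothetical:

resp = client.search(
    index="movies",
    retriever={
        "text_similarity_reranker": {
            "retriever": {
                "standard": {
                    "query": { "match": { "plot": "space exploration" } }
                }
            },
            "filter": { "term": { "genre": "sci-fi" } },    # hypothetical keyword field; applied to the child retriever
            "field": "plot",
            "inference_id": "my-rerank-model",              # hypothetical rerank inference endpoint
            "inference_text": "astronauts stranded far from Earth"
        }
    },
)
print(resp)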
Example: Cohere Rerank

This example enables out-of-the-box semantic search by re-ranking top documents using the Cohere Rerank API. This approach eliminates the need to generate and store embeddings for all indexed documents.

This requires a Cohere Rerank inference endpoint using the rerank task type.
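
As a hedged sketch (not from the original reference), such an endpoint could be created with the inference API, mirroring the Elasticsearch-service example later on this page; the endpoint name, API key placeholder, and Cohere model name are assumptions:

resp = client.inference.put(
    task_type="rerank",
    inference_id="my-cohere-rerank-model",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "<your-cohere-api-key>",    # placeholder; supply your own Cohere API key
            "model_id": "rerank-english-v3.0"      # assumed Cohere rerank model name
        }
    },
)
print(resp)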
resp = client.search( index="index", retriever={ "text_similarity_reranker": { "retriever": { "standard": { "query": { "match_phrase": { "text": "landmark in Paris" } } } }, "field": "text", "inference_id": "my-cohere-rerank-model", "inference_text": "Most famous landmark in Paris", "rank_window_size": 100, "min_score": 0.5 } }, ) print(resp)
const response = await client.search({ index: "index", retriever: { text_similarity_reranker: { retriever: { standard: { query: { match_phrase: { text: "landmark in Paris", }, }, }, }, field: "text", inference_id: "my-cohere-rerank-model", inference_text: "Most famous landmark in Paris", rank_window_size: 100, min_score: 0.5, }, }, }); console.log(response);
GET /index/_search { "retriever": { "text_similarity_reranker": { "retriever": { "standard": { "query": { "match_phrase": { "text": "landmark in Paris" } } } }, "field": "text", "inference_id": "my-cohere-rerank-model", "inference_text": "Most famous landmark in Paris", "rank_window_size": 100, "min_score": 0.5 } } }
Example: Semantic re-ranking with a Hugging Face model

The following example uses the cross-encoder/ms-marco-MiniLM-L-6-v2 model from Hugging Face to rerank search results based on semantic similarity. The model must be uploaded to Elasticsearch using Eland.

Refer to the Elastic NLP model reference for a list of third-party text similarity models supported by Elasticsearch.

Follow these steps to load the model and create a semantic re-ranker.

1. Install Eland using pip:

   python -m pip install eland[pytorch]
2. Upload the model to Elasticsearch using Eland. This example assumes you have an Elastic Cloud deployment and an API key. Refer to the Eland documentation for more authentication options.

   eland_import_hub_model \
     --cloud-id $CLOUD_ID \
     --es-api-key $ES_API_KEY \
     --hub-model-id cross-encoder/ms-marco-MiniLM-L-6-v2 \
     --task-type text_similarity \
     --clear-previous \
     --start
3. Create an inference endpoint for the rerank task:

   resp = client.inference.put(
       task_type="rerank",
       inference_id="my-msmarco-minilm-model",
       inference_config={
           "service": "elasticsearch",
           "service_settings": {
               "num_allocations": 1,
               "num_threads": 1,
               "model_id": "cross-encoder__ms-marco-minilm-l-6-v2"
           }
       },
   )
   print(resp)
const response = await client.inference.put({ task_type: "rerank", inference_id: "my-msmarco-minilm-model", inference_config: { service: "elasticsearch", service_settings: { num_allocations: 1, num_threads: 1, model_id: "cross-encoder__ms-marco-minilm-l-6-v2", }, }, }); console.log(response);
PUT _inference/rerank/my-msmarco-minilm-model { "service": "elasticsearch", "service_settings": { "num_allocations": 1, "num_threads": 1, "model_id": "cross-encoder__ms-marco-minilm-l-6-v2" } }
4. Define a text_similarity_reranker retriever:

   resp = client.search(
       index="movies",
       retriever={
           "text_similarity_reranker": {
               "retriever": {
                   "standard": {
                       "query": { "match": { "genre": "drama" } }
                   }
               },
               "field": "plot",
               "inference_id": "my-msmarco-minilm-model",
               "inference_text": "films that explore psychological depths"
           }
       },
   )
   print(resp)
const response = await client.search({ index: "movies", retriever: { text_similarity_reranker: { retriever: { standard: { query: { match: { genre: "drama", }, }, }, }, field: "plot", inference_id: "my-msmarco-minilm-model", inference_text: "films that explore psychological depths", }, }, }); console.log(response);
POST movies/_search { "retriever": { "text_similarity_reranker": { "retriever": { "standard": { "query": { "match": { "genre": "drama" } } } }, "field": "plot", "inference_id": "my-msmarco-minilm-model", "inference_text": "films that explore psychological depths" } } }
This retriever uses a standard match query to search the movies index for films tagged with the genre "drama". It then re-ranks the results based on semantic similarity to the text in the inference_text parameter, using the model we uploaded to Elasticsearch.
Using from and size with a retriever tree

The from and size parameters are provided globally as part of the general search API. They are applied to all retrievers in a retriever tree unless a specific retriever overrides the size parameter using a different parameter such as rank_window_size. However, the final search hits are always limited to size.
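
The following sketch (not from the original reference) illustrates where from and size sit relative to the retriever tree; note that the Python client exposes from as from_:

resp = client.search(
    index="restaurants",
    from_=0,    # global pagination parameters, applied outside the retriever tree
    size=10,
    retriever={
        "rrf": {
            "retrievers": [
                { "standard": { "query": { "match": { "region": "Austria" } } } },
                { "knn": { "field": "vector", "query_vector": [ 10, 22, 77 ], "k": 10, "num_candidates": 10 } }
            ],
            "rank_window_size": 50    # overrides size for the individual result sets
        }
    },
)
print(resp)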
Using aggregations with a retriever tree

Aggregations are globally specified as part of a search request. The query used for an aggregation is the combination of all leaf retrievers as should clauses in a boolean query.
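
The following sketch (not from the original reference) illustrates an aggregation specified alongside a retriever tree; the region keyword field is hypothetical:

resp = client.search(
    index="restaurants",
    retriever={
        "rrf": {
            "retrievers": [
                { "standard": { "query": { "match": { "region": "Austria" } } } },
                { "knn": { "field": "vector", "query_vector": [ 10, 22, 77 ], "k": 10, "num_candidates": 10 } }
            ]
        }
    },
    aggs={
        "regions": { "terms": { "field": "region" } }    # computed over the combined leaf retriever queries
    },
)
print(resp)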
Restrictions on search parameters when specifying a retriever

When a retriever is specified as part of a search, the following elements are not allowed at the top level and are instead only allowed as elements of specific retrievers: