Dense vector field type
The dense_vector field type stores dense vectors of numeric values. Dense vector fields are primarily used for k-nearest neighbor (kNN) search.

The dense_vector type does not support aggregations or sorting.

You add a dense_vector field as an array of numeric values based on element_type, which defaults to float:
Python:

resp = client.indices.create(
    index="my-index",
    mappings={
        "properties": {
            "my_vector": {"type": "dense_vector", "dims": 3},
            "my_text": {"type": "keyword"},
        }
    },
)
print(resp)

resp1 = client.index(
    index="my-index",
    id="1",
    document={"my_text": "text1", "my_vector": [0.5, 10, 6]},
)
print(resp1)

resp2 = client.index(
    index="my-index",
    id="2",
    document={"my_text": "text2", "my_vector": [-0.5, 10, 10]},
)
print(resp2)

Ruby:

response = client.indices.create(
  index: 'my-index',
  body: {
    mappings: {
      properties: {
        my_vector: { type: 'dense_vector', dims: 3 },
        my_text: { type: 'keyword' }
      }
    }
  }
)
puts response

response = client.index(
  index: 'my-index',
  id: 1,
  body: {
    my_text: 'text1',
    my_vector: [0.5, 10, 6]
  }
)
puts response

response = client.index(
  index: 'my-index',
  id: 2,
  body: {
    my_text: 'text2',
    my_vector: [-0.5, 10, 10]
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "my-index",
  mappings: {
    properties: {
      my_vector: { type: "dense_vector", dims: 3 },
      my_text: { type: "keyword" },
    },
  },
});
console.log(response);

const response1 = await client.index({
  index: "my-index",
  id: 1,
  document: { my_text: "text1", my_vector: [0.5, 10, 6] },
});
console.log(response1);

const response2 = await client.index({
  index: "my-index",
  id: 2,
  document: { my_text: "text2", my_vector: [-0.5, 10, 10] },
});
console.log(response2);

Console:

PUT my-index
{
  "mappings": {
    "properties": {
      "my_vector": { "type": "dense_vector", "dims": 3 },
      "my_text": { "type": "keyword" }
    }
  }
}

PUT my-index/_doc/1
{
  "my_text": "text1",
  "my_vector": [0.5, 10, 6]
}

PUT my-index/_doc/2
{
  "my_text": "text2",
  "my_vector": [-0.5, 10, 10]
}
Unlike most other data types, dense vectors are always single-valued.
It is not possible to store multiple values in one dense_vector
field.
Index vectors for kNN search
A k-nearest neighbor (kNN) search finds the k nearest vectors to a query vector, as measured by a similarity metric.
Dense vector fields can be used to rank documents in
script_score
queries. This lets you perform
a brute-force kNN search by scanning all documents and ranking them by
similarity.
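For example, here is a minimal sketch of such a brute-force ranking against the my-index mapping created above, using the cosineSimilarity function available in script_score scripts (the query vector values are illustrative, not from the source):

# Hypothetical brute-force kNN query: every document is scanned and scored
# by cosine similarity to the query vector; 1.0 is added to keep scores positive.
resp = client.search(
    index="my-index",
    query={
        "script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "cosineSimilarity(params.query_vector, 'my_vector') + 1.0",
                "params": {"query_vector": [0.5, 10, 6]},
            },
        }
    },
)
print(resp)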
In many cases, a brute-force kNN search is not efficient enough. For this reason, the dense_vector type supports indexing vectors into a specialized data structure to support fast kNN retrieval through the knn option in the search API.

Unmapped array fields of float elements with size between 128 and 4096 are dynamically mapped as dense_vector with a default similarity of cosine.
You can override the default similarity by explicitly mapping the field as dense_vector
with the desired similarity.
Indexing is enabled by default for dense vector fields and, by default, vectors are indexed as int8_hnsw.
When indexing is enabled, you can define the vector similarity to use in kNN search:
Python:

resp = client.indices.create(
    index="my-index-2",
    mappings={
        "properties": {
            "my_vector": {"type": "dense_vector", "dims": 3, "similarity": "dot_product"}
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'my-index-2',
  body: {
    mappings: {
      properties: {
        my_vector: { type: 'dense_vector', dims: 3, similarity: 'dot_product' }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "my-index-2",
  mappings: {
    properties: {
      my_vector: { type: "dense_vector", dims: 3, similarity: "dot_product" },
    },
  },
});
console.log(response);

Console:

PUT my-index-2
{
  "mappings": {
    "properties": {
      "my_vector": { "type": "dense_vector", "dims": 3, "similarity": "dot_product" }
    }
  }
}
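Once indexing is enabled, the field can be searched with the knn option in the search API mentioned above. Here is a minimal sketch; it targets the my-index field created earlier (which uses the default cosine similarity, since dot_product requires unit-length vectors), and the query vector, k, and num_candidates values are illustrative:

# Hypothetical approximate kNN search using the knn option of the search API.
resp = client.search(
    index="my-index",
    knn={
        "field": "my_vector",
        "query_vector": [0.5, 10, 6],
        "k": 2,
        "num_candidates": 10,
    },
)
print(resp)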
Indexing vectors for approximate kNN search is an expensive process. It
can take substantial time to ingest documents that contain vector fields with
index
enabled. See k-nearest neighbor (kNN) search to
learn more about the memory requirements.
You can disable indexing by setting the index
parameter to false
:
Python:

resp = client.indices.create(
    index="my-index-2",
    mappings={
        "properties": {
            "my_vector": {"type": "dense_vector", "dims": 3, "index": False}
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'my-index-2',
  body: {
    mappings: {
      properties: {
        my_vector: { type: 'dense_vector', dims: 3, index: false }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "my-index-2",
  mappings: {
    properties: {
      my_vector: { type: "dense_vector", dims: 3, index: false },
    },
  },
});
console.log(response);

Console:

PUT my-index-2
{
  "mappings": {
    "properties": {
      "my_vector": { "type": "dense_vector", "dims": 3, "index": false }
    }
  }
}
Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved speed.
Automatically quantize vectors for kNN search
The dense_vector type supports quantization to reduce the memory footprint required when searching float vectors. The following two quantization strategies are supported:
int8 - Quantizes each dimension of the vector to 1-byte integers. This can reduce the memory footprint by 75% at the cost of some accuracy.

int4 - Quantizes each dimension of the vector to half-byte integers. This can reduce the memory footprint by 87% at the cost of some accuracy.
To use a quantized index, you can set your index type to int8_hnsw
or int4_hnsw
. When indexing float
vectors, the current default
index type is int8_hnsw
.
Quantization will continue to keep the raw float vector values on disk for reranking, reindexing, and quantization improvements over the lifetime of the data.
This means disk usage will increase by ~25% for int8
and ~12.5% for int4
due to the overhead of storing the quantized and raw vectors.
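As a rough sketch of that trade-off (plain Python; the 1 million 768-dimensional vectors are illustrative numbers, not taken from the source):

# Back-of-the-envelope arithmetic for the storage overhead described above.
num_vectors, dims = 1_000_000, 768

raw_bytes = num_vectors * dims * 4      # float: 4 bytes per dimension
int8_bytes = num_vectors * dims * 1     # int8: 1 byte per dimension
int4_bytes = num_vectors * dims * 0.5   # int4: half a byte per dimension

# Quantized indices keep the raw floats on disk alongside the quantized copy,
# so disk grows while the search-time memory footprint shrinks.
print(f"float only: {raw_bytes / 1e9:.2f} GB")
print(f"int8 + raw: {(raw_bytes + int8_bytes) / 1e9:.2f} GB  (~25% more disk, ~75% less memory)")
print(f"int4 + raw: {(raw_bytes + int4_bytes) / 1e9:.2f} GB  (~12.5% more disk, ~87% less memory)")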
int4
quantization requires an even number of vector dimensions.
Here is an example of how to create a byte-quantized index:
Python:

resp = client.indices.create(
    index="my-byte-quantized-index",
    mappings={
        "properties": {
            "my_vector": {
                "type": "dense_vector",
                "dims": 3,
                "index": True,
                "index_options": {"type": "int8_hnsw"},
            }
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'my-byte-quantized-index',
  body: {
    mappings: {
      properties: {
        my_vector: {
          type: 'dense_vector',
          dims: 3,
          index: true,
          index_options: { type: 'int8_hnsw' }
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "my-byte-quantized-index",
  mappings: {
    properties: {
      my_vector: {
        type: "dense_vector",
        dims: 3,
        index: true,
        index_options: { type: "int8_hnsw" },
      },
    },
  },
});
console.log(response);

Console:

PUT my-byte-quantized-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 3,
        "index": true,
        "index_options": { "type": "int8_hnsw" }
      }
    }
  }
}
Here is an example of how to create a half-byte-quantized index:
Python:

resp = client.indices.create(
    index="my-byte-quantized-index",
    mappings={
        "properties": {
            "my_vector": {
                "type": "dense_vector",
                "dims": 4,
                "index": True,
                "index_options": {"type": "int4_hnsw"},
            }
        }
    },
)
print(resp)

JavaScript:

const response = await client.indices.create({
  index: "my-byte-quantized-index",
  mappings: {
    properties: {
      my_vector: {
        type: "dense_vector",
        dims: 4,
        index: true,
        index_options: { type: "int4_hnsw" },
      },
    },
  },
});
console.log(response);

Console:

PUT my-byte-quantized-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 4,
        "index": true,
        "index_options": { "type": "int4_hnsw" }
      }
    }
  }
}
Parameters for dense vector fields
The following mapping parameters are accepted:
element_type
(Optional, string) The data type used to encode vectors. The supported data types are float (default), byte, and bit.

Valid values for element_type:

float - Indexes a 4-byte floating-point value per dimension. This is the default value.

byte - Indexes a 1-byte integer value per dimension.

bit - Indexes a single bit per dimension. Useful for very high-dimensional vectors or models that specifically support bit vectors. NOTE: when using bit, the number of dimensions must be a multiple of 8 and must represent the number of bits.

dims
(Optional, integer) Number of vector dimensions. Can't exceed 4096. If dims is not specified, it will be set to the length of the first vector added to the field.

index
(Optional, Boolean) If true, you can search this field using the kNN search API. Defaults to true.
similarity
(Optional*, string) The vector similarity metric to use in kNN search. Documents are ranked by their vector field's similarity to the query vector. The _score of each document will be derived from the similarity, in a way that ensures scores are positive and that a larger score corresponds to a higher ranking. Defaults to l2_norm when element_type is bit, otherwise defaults to cosine.

bit vectors only support l2_norm as their similarity metric.

* This parameter can only be specified when index is true.

Valid values for similarity:

l2_norm - Computes similarity based on the L2 distance (also known as Euclidean distance) between the vectors. The document _score is computed as 1 / (1 + l2_norm(query, vector)^2). For bit vectors, the hamming distance between the vectors is used instead of l2_norm, and the _score transformation is (numBits - hamming(a, b)) / numBits.

dot_product - Computes the dot product of two unit vectors. This option provides an optimized way to perform cosine similarity. The constraints and computed score are defined by element_type. When element_type is float, all vectors must be unit length, including both document and query vectors. The document _score is computed as (1 + dot_product(query, vector)) / 2. When element_type is byte, all vectors must have the same length, including both document and query vectors, or results will be inaccurate. The document _score is computed as 0.5 + (dot_product(query, vector) / (32768 * dims)), where dims is the number of dimensions per vector.

cosine - Computes the cosine similarity. During indexing, Elasticsearch automatically normalizes vectors with cosine similarity to unit length. This allows dot_product to be used internally for computing similarity, which is more efficient. The original, un-normalized vectors can still be accessed through scripts. The document _score is computed as (1 + cosine(query, vector)) / 2. The cosine similarity does not allow vectors with zero magnitude, since cosine is not defined in this case.

max_inner_product - Computes the maximum inner product of two vectors. This is similar to dot_product, but doesn't require vectors to be normalized. This means that each vector's magnitude can significantly affect the score. The document _score is adjusted to prevent negative values. For max_inner_product values < 0, the _score is 1 / (1 + -1 * max_inner_product(query, vector)). For non-negative max_inner_product results, the _score is calculated as max_inner_product(query, vector) + 1.

These score transformations are illustrated in a short sketch after this parameter list.

Although they are conceptually related, the similarity parameter is different from text field similarity and accepts a distinct set of options.
index_options
(Optional*, object) An optional section that configures the kNN indexing algorithm. The HNSW algorithm has two internal parameters that influence how the data structure is built. These can be adjusted to improve the accuracy of results, at the expense of slower indexing speed.

* This parameter can only be specified when index is true.

Properties of index_options:

type
(Required, string) The type of kNN algorithm to use. Can be any of:

hnsw - This utilizes the HNSW algorithm for scalable approximate kNN search. This supports all element_type values.

int8_hnsw - The default index type for float vectors. This utilizes the HNSW algorithm in addition to automatic scalar quantization for scalable approximate kNN search with element_type of float. This can reduce the memory footprint by 4x at the cost of some accuracy. See Automatically quantize vectors for kNN search.

int4_hnsw - This utilizes the HNSW algorithm in addition to automatic half-byte scalar quantization for scalable approximate kNN search with element_type of float. This can reduce the memory footprint by 8x at the cost of some accuracy. See Automatically quantize vectors for kNN search.

flat - This utilizes a brute-force search algorithm for exact kNN search. This supports all element_type values.

int8_flat - This utilizes a brute-force search algorithm in addition to automatic scalar quantization. Only supports element_type of float.

int4_flat - This utilizes a brute-force search algorithm in addition to automatic half-byte scalar quantization. Only supports element_type of float.
m
(Optional, integer) The number of neighbors each node will be connected to in the HNSW graph. Defaults to 16. Only applicable to hnsw, int8_hnsw, and int4_hnsw index types.

ef_construction
(Optional, integer) The number of candidates to track while assembling the list of nearest neighbors for each new node. Defaults to 100. Only applicable to hnsw, int8_hnsw, and int4_hnsw index types.

confidence_interval
(Optional, float) Only applicable to int8_hnsw, int4_hnsw, int8_flat, and int4_flat index types. The confidence interval to use when quantizing the vectors. Can be any value between and including 0.90 and 1.0, or exactly 0. When the value is 0, this indicates that dynamic quantiles should be calculated for optimized quantization. When between 0.90 and 1.0, this value restricts the values used when calculating the quantization thresholds. For example, a value of 0.95 will only use the middle 95% of the values when calculating the quantization thresholds (e.g. the highest and lowest 2.5% of values will be ignored). Defaults to 1/(dims + 1) for int8 quantized vectors, and to 0 (dynamic quantile calculation) for int4 quantized vectors.
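The following sketch (plain Python, not Elasticsearch code; the example vectors are illustrative) shows how the score transformations listed under similarity above map raw similarity values to positive _score values:

import math

query, doc = [0.5, 10, 6], [-0.5, 10, 10]

# Raw similarity values between the two example vectors.
l2 = math.sqrt(sum((q - d) ** 2 for q, d in zip(query, doc)))
dot = sum(q * d for q, d in zip(query, doc))
cos = dot / (math.sqrt(sum(q * q for q in query)) * math.sqrt(sum(d * d for d in doc)))

# Score transformations described in the parameter list above.
print("l2_norm _score:", 1 / (1 + l2 ** 2))        # 1 / (1 + l2_norm(query, vector)^2)
print("cosine _score:", (1 + cos) / 2)             # (1 + cosine(query, vector)) / 2
print("max_inner_product _score:", dot + 1 if dot >= 0 else 1 / (1 + -1 * dot))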
Synthetic _source
Synthetic _source
is Generally Available only for TSDB indices
(indices that have index.mode
set to time_series
). For other indices
synthetic _source
is in technical preview. Features in technical preview may
be changed or removed in a future release. Elastic will work to fix
any issues, but features in technical preview are not subject to the support SLA
of official GA features.
dense_vector
fields support synthetic _source
.
Indexing & Searching bit vectors
When using element_type: bit, all vectors are treated as bit vectors. Bit vectors utilize only a single
bit per dimension and are internally encoded as bytes. This can be useful for very high-dimensional vectors or models.
When using bit, the number of dimensions must be a multiple of 8 and must represent the number of bits. Additionally,
with bit vectors, the typical similarity metrics are all effectively scored the same way, using the hamming
distance.
Let's compare two byte[] arrays, each representing 40 individual bits.

[-127, 0, 1, 42, 127] in bits: 1000000100000000000000010010101001111111
[127, -127, 0, 1, 42] in bits: 0111111110000001000000000000000100101010

When comparing these two bit vectors, we first take the hamming distance.

xor result:
1000000100000000000000010010101001111111 ^ 0111111110000001000000000000000100101010 = 1111111010000001000000010010101101010101

Then, we gather the count of 1 bits in the xor result: 18. To scale for scoring, we subtract from the total number
of bits and divide by the total number of bits: (40 - 18) / 40 = 0.55. This would be the _score between these two
vectors.
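The same calculation can be reproduced with a small sketch (plain Python, not Elasticsearch code), using the vectors from the example above:

def bit_score(a, b):
    # Interpret each signed-byte array as a bit vector and apply
    # the transformation (numBits - hamming(a, b)) / numBits.
    num_bits = 8 * len(a)
    hamming = sum(bin((x & 0xFF) ^ (y & 0xFF)).count("1") for x, y in zip(a, b))
    return (num_bits - hamming) / num_bits

print(bit_score([-127, 0, 1, 42, 127], [127, -127, 0, 1, 42]))  # 0.55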
Here is an example of indexing and searching bit vectors:
Python:

resp = client.indices.create(
    index="my-bit-vectors",
    mappings={
        "properties": {
            "my_vector": {"type": "dense_vector", "dims": 40, "element_type": "bit"}
        }
    },
)
print(resp)

JavaScript:

const response = await client.indices.create({
  index: "my-bit-vectors",
  mappings: {
    properties: {
      my_vector: { type: "dense_vector", dims: 40, element_type: "bit" },
    },
  },
});
console.log(response);

Console:

PUT my-bit-vectors
{
  "mappings": {
    "properties": {
      "my_vector": { "type": "dense_vector", "dims": 40, "element_type": "bit" }
    }
  }
}
Python:

resp = client.bulk(
    index="my-bit-vectors",
    refresh=True,
    operations=[
        {"index": {"_id": "1"}},
        {"my_vector": [127, -127, 0, 1, 42]},
        {"index": {"_id": "2"}},
        {"my_vector": "8100012a7f"},
    ],
)
print(resp)

JavaScript:

const response = await client.bulk({
  index: "my-bit-vectors",
  refresh: "true",
  operations: [
    { index: { _id: "1" } },
    { my_vector: [127, -127, 0, 1, 42] },
    { index: { _id: "2" } },
    { my_vector: "8100012a7f" },
  ],
});
console.log(response);

Console:

POST /my-bit-vectors/_bulk?refresh
{"index": {"_id" : "1"}}
{"my_vector": [127, -127, 0, 1, 42]}
{"index": {"_id" : "2"}}
{"my_vector": "8100012a7f"}
The first document provides its vector as 5 byte values representing the 40-bit dimensioned vector, while the second provides its vector as a hexadecimal string representing a 40-bit dimensioned vector.
Then, when searching, you can use the knn
query to search for similar bit vectors:
Python:

resp = client.search(
    index="my-bit-vectors",
    filter_path="hits.hits",
    query={
        "knn": {
            "query_vector": [127, -127, 0, 1, 42],
            "field": "my_vector",
        }
    },
)
print(resp)

JavaScript:

const response = await client.search({
  index: "my-bit-vectors",
  filter_path: "hits.hits",
  query: {
    knn: {
      query_vector: [127, -127, 0, 1, 42],
      field: "my_vector",
    },
  },
});
console.log(response);

Console:

POST /my-bit-vectors/_search?filter_path=hits.hits
{
  "query": {
    "knn": {
      "query_vector": [127, -127, 0, 1, 42],
      "field": "my_vector"
    }
  }
}
{ "hits": { "hits": [ { "_index": "my-bit-vectors", "_id": "1", "_score": 1.0, "_source": { "my_vector": [ 127, -127, 0, 1, 42 ] } }, { "_index": "my-bit-vectors", "_id": "2", "_score": 0.55, "_source": { "my_vector": "8100012a7f" } } ] } }