Diversified Sampler Aggregation
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Like the sampler aggregation, this is a filtering aggregation used to limit any sub-aggregations' processing to a sample of the top-scoring documents. The diversified_sampler aggregation adds the ability to limit the number of matches that share a common value such as an "author".
Any good market researcher will tell you that when working with samples of data it is important that the sample represents a healthy variety of opinions rather than being skewed by any single voice. The same is true with aggregations, and sampling with these diversify settings can offer a way to remove bias from your content (an over-populated geography, a large spike in a timeline or an over-active forum spammer).
Example use cases:
- Tightening the focus of analytics to high-relevance matches rather than the potentially very long tail of low-quality matches
- Removing bias from analytics by ensuring fair representation of content from different sources
- Reducing the running cost of aggregations that can produce useful results using only samples, e.g. significant_terms
A choice of field or script setting is used to provide the values for de-duplication, and the max_docs_per_value setting controls the maximum number of documents collected on any one shard which share a common value. The default setting for max_docs_per_value is 1.

The aggregation will throw an error if the choice of field or script produces multiple values for a single document (de-duplication using multi-valued fields is not supported due to efficiency concerns).
Example:

We might want to see which tags are strongly associated with #elasticsearch on StackOverflow forum posts, while ignoring the effects of some prolific users with a tendency to misspell #Kibana as #Cabana.
POST /stackoverflow/_search?size=0
{
    "query": {
        "query_string": {
            "query": "tags:elasticsearch"
        }
    },
    "aggs": {
        "my_unbiased_sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "field": "author"
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "tags",
                        "exclude": ["elasticsearch"]
                    }
                }
            }
        }
    }
}
Response:
{
    ...
    "aggregations": {
        "my_unbiased_sample": {
            "doc_count": 1000,
            "keywords": {
                "doc_count": 1000,
                "buckets": [
                    {
                        "key": "kibana",
                        "doc_count": 150,
                        "score": 2.213,
                        "bg_count": 200
                    }
                ]
            }
        }
    }
}
1000 documents were sampled in total because we asked for a maximum of 200 from an index with 5 shards. The cost of performing the nested significant_terms aggregation was therefore limited rather than unbounded.

The results of the significant_terms aggregation are not skewed by any single author's quirks because we asked for a maximum of one post from any one author in our sample.
Scripted example:

In this scenario we might want to diversify on a combination of field values. We can use a script to produce a hash of the multiple values in a tags field to ensure we don't have a sample that consists of the same repeated combinations of tags.
POST /stackoverflow/_search?size=0
{
    "query": {
        "query_string": {
            "query": "tags:kibana"
        }
    },
    "aggs": {
        "my_unbiased_sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "max_docs_per_value": 3,
                "script": {
                    "lang": "painless",
                    "inline": "doc['tags'].values.hashCode()"
                }
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "tags",
                        "exclude": ["kibana"]
                    }
                }
            }
        }
    }
}
Response:
{
    ...
    "aggregations": {
        "my_unbiased_sample": {
            "doc_count": 1000,
            "keywords": {
                "doc_count": 1000,
                "buckets": [
                    {
                        "key": "logstash",
                        "doc_count": 3,
                        "score": 2.213,
                        "bg_count": 50
                    },
                    {
                        "key": "elasticsearch",
                        "doc_count": 3,
                        "score": 1.34,
                        "bg_count": 200
                    }
                ]
            }
        }
    }
}
shard_size

The shard_size parameter limits how many top-scoring documents are collected in the sample processed on each shard. The default value is 100.
max_docs_per_value

The max_docs_per_value is an optional parameter that limits how many documents are permitted per choice of de-duplicating value. The default setting is 1.
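As a sketch of field-based de-duplication (reusing the stackoverflow index from the examples above), the following request would permit up to two posts per author in the sample:

POST /stackoverflow/_search?size=0
{
    "query": {
        "query_string": {
            "query": "tags:elasticsearch"
        }
    },
    "aggs": {
        "my_unbiased_sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "field": "author",
                "max_docs_per_value": 2
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "tags",
                        "exclude": ["elasticsearch"]
                    }
                }
            }
        }
    }
}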
execution_hint

The optional execution_hint setting can influence the management of the values used for de-duplication. Each option will hold up to shard_size values in memory while performing de-duplication, but the type of value held can be controlled as follows:

- hold field values directly (map)
- hold ordinals of the field as determined by the Lucene index (global_ordinals)
- hold hashes of the field values - with potential for hash collisions (bytes_hash)

The default setting is to use global_ordinals if this information is available from the Lucene index and to revert to map if not.
The bytes_hash setting may prove faster in some cases but introduces the possibility of false positives in de-duplication logic due to hash collisions.
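For example, the first request above could opt in to hash-based de-duplication explicitly as follows (a sketch; whether this is actually faster will depend on your data):

POST /stackoverflow/_search?size=0
{
    "query": {
        "query_string": {
            "query": "tags:elasticsearch"
        }
    },
    "aggs": {
        "my_unbiased_sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "field": "author",
                "execution_hint": "bytes_hash"
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "tags",
                        "exclude": ["elasticsearch"]
                    }
                }
            }
        }
    }
}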
Please note that Elasticsearch will ignore the choice of execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.
Limitations
Cannot be nested under breadth_first aggregations

Being a quality-based filter, the diversified_sampler aggregation needs access to the relevance score produced for each document. It therefore cannot be nested under a terms aggregation which has the collect_mode switched from the default depth_first mode to breadth_first, as this discards scores. In this situation an error will be thrown.
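For illustration, a request shaped like the following sketch would be rejected, because the breadth_first terms aggregation discards the scores the sampler needs (the aggregation names here are arbitrary):

POST /stackoverflow/_search?size=0
{
    "aggs": {
        "top_tags": {
            "terms": {
                "field": "tags",
                "collect_mode": "breadth_first"
            },
            "aggs": {
                "my_sample": {
                    "diversified_sampler": {
                        "field": "author"
                    }
                }
            }
        }
    }
}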
Limited de-dup logic

The de-duplication logic applies only at a shard level, so it will not apply across shards.
No specialized syntax for geo/date fields

Currently the syntax for defining the diversifying values is limited to a choice of field or script - there is no added syntactical sugar for expressing geo or date units such as "7d" (7 days). This support may be added in a later release; for now, users will have to create these sorts of values using a script.
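For instance, to diversify on 7-day buckets of a date field we could compute a week number from the field's epoch milliseconds. This is only a sketch: it assumes a hypothetical date field named creationDate, and relies on doc['creationDate'].value returning epoch milliseconds for date fields (as doc values do in this release). 604800000 is the number of milliseconds in 7 days, so documents created in the same 7-day window share a de-duplication value:

POST /stackoverflow/_search?size=0
{
    "query": {
        "query_string": {
            "query": "tags:elasticsearch"
        }
    },
    "aggs": {
        "my_weekly_sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "script": {
                    "lang": "painless",
                    "inline": "doc['creationDate'].value / 604800000L"
                }
            },
            "aggs": {
                "keywords": {
                    "significant_terms": {
                        "field": "tags"
                    }
                }
            }
        }
    }
}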