Rare Terms Aggregation
A multi-bucket value source based aggregation which finds "rare" terms: terms that are at the long tail of the distribution and are not frequent. Conceptually, this is like a terms aggregation that is sorted by _count ascending. As noted in the terms aggregation docs, actually ordering a terms agg by count ascending has unbounded error. Instead, you should use the rare_terms aggregation.
Syntax

A rare_terms aggregation looks like this in isolation:

{
    "rare_terms": {
        "field": "the_field",
        "max_doc_count": 1
    }
}
Table 6. rare_terms Parameters

| Parameter Name | Description | Required | Default Value |
| --- | --- | --- | --- |
| field | The field we wish to find rare terms in | Required | |
| max_doc_count | The maximum number of documents a term should appear in. | Optional | 1 |
| precision | The precision of the internal CuckooFilters. Smaller precision leads to better approximation, but higher memory usage. Cannot be smaller than 0.00001. | Optional | 0.001 |
| include | Terms that should be included in the aggregation | Optional | |
| exclude | Terms that should be excluded from the aggregation | Optional | |
| missing | The value that should be used if a document does not have the field being aggregated | Optional | |
Example:

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre"
            }
        }
    }
}

Response:

{
    ...
    "aggregations": {
        "genres": {
            "buckets": [
                {
                    "key": "swing",
                    "doc_count": 1
                }
            ]
        }
    }
}

In this example, the only bucket we see is the "swing" bucket, because it is the only term that appears in one document. If we increase max_doc_count to 2, we'll see some more buckets:

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre",
                "max_doc_count": 2
            }
        }
    }
}

This now shows the "jazz" term, which has a doc_count of 2:

{
    ...
    "aggregations": {
        "genres": {
            "buckets": [
                {
                    "key": "swing",
                    "doc_count": 1
                },
                {
                    "key": "jazz",
                    "doc_count": 2
                }
            ]
        }
    }
}
Maximum document count

The max_doc_count parameter controls the upper bound of document counts that a term can have. There is not a size limitation on the rare_terms agg like the terms agg has; all terms which match the max_doc_count criteria will be returned. The aggregation functions in this manner to avoid the order-by-ascending issues that afflict the terms aggregation.

This does, however, mean that a large number of results can be returned if max_doc_count is chosen poorly. To limit the danger of this setting, the maximum max_doc_count is 100.
Max Bucket Limit

The Rare Terms aggregation is more liable to trip the search.max_buckets soft limit than other aggregations due to how it works. The search.max_buckets soft limit is evaluated on a per-shard basis while the aggregation is collecting results. It is possible for a term to be "rare" on a shard but become "not rare" once all the shard results are merged together. This means that individual shards tend to collect more buckets than are truly rare, because they only have their own local view. This list is ultimately pruned to the correct, smaller list of rare terms on the coordinating node… but a shard may have already tripped the search.max_buckets soft limit and aborted the request.

When aggregating on fields that have potentially many "rare" terms, you may need to increase the search.max_buckets soft limit. Alternatively, you might need to find a way to filter the results to return fewer rare values (a smaller time span, filtering by category, etc.), or re-evaluate your definition of "rare" (e.g. if something appears 100,000 times, is it truly "rare"?).
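If the limit itself is the problem, search.max_buckets is a dynamic cluster setting and can be raised without a restart. A minimal sketch (the value 100000 is purely illustrative; choose a bound appropriate for your data and heap):

PUT /_cluster/settings
{
    "transient": {
        "search.max_buckets": 100000
    }
}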
Document counts are approximate

The naive way to determine the "rare" terms in a dataset is to place all the values in a map, incrementing counts as each document is visited, then return the bottom n rows. This does not scale beyond even modestly sized data sets. A sharded approach where only the "top n" values are retained from each shard (as the terms aggregation does) fails because the long-tail nature of the problem means it is impossible to find the bottom n values without simply collecting all the values from all shards.
Instead, the Rare Terms aggregation uses a different approximate algorithm:
- Values are placed in a map the first time they are seen.
- Each additional occurrence of the term increments a counter in the map.
- If the counter exceeds the max_doc_count threshold, the term is removed from the map and placed in a CuckooFilter.
- The CuckooFilter is consulted on each term. If the value is inside the filter, it is known to be above the threshold already and is skipped.
After execution, the map of values is the map of "rare" terms under the max_doc_count threshold. This map and the CuckooFilter are then merged with those of all other shards. If there are terms that are greater than the threshold (or appear in a different shard's CuckooFilter), the term is removed from the merged list. The final map of values is returned to the user as the "rare" terms.
CuckooFilters have the possibility of returning false positives (they can say a value exists in their collection when it actually does not). Since the CuckooFilter is being used to see if a term is over threshold, a false positive from the CuckooFilter will mistakenly say a value is common when it is not (and thus exclude it from the final list of buckets). Practically, this means the aggregation exhibits false-negative behavior, since the filter is being used "in reverse" of how people generally think of approximate set-membership sketches.
CuckooFilters are described in more detail in the paper:
Fan, Bin, et al. "Cuckoo filter: Practically better than bloom." Proceedings of the 10th ACM International on Conference on emerging Networking Experiments and Technologies. ACM, 2014.
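To make the shard-local bookkeeping concrete, here is a minimal sketch in Python. This is not Elasticsearch's implementation: it substitutes a plain set for the CuckooFilter (so it is exact rather than approximate, and uses more memory than the real filter would), and all names are illustrative.

def collect_rare_terms(values, max_doc_count):
    # Shard-local sketch of the rare_terms bookkeeping.
    # `values` is the stream of terms seen while collecting documents.
    counts = {}             # candidate rare terms -> doc count
    over_threshold = set()  # stand-in for the CuckooFilter

    for term in values:
        # If the "filter" already marked this term as over the threshold, skip it.
        if term in over_threshold:
            continue
        counts[term] = counts.get(term, 0) + 1
        # Once a term exceeds max_doc_count it can never be rare:
        # evict it from the map and remember it in the filter.
        if counts[term] > max_doc_count:
            del counts[term]
            over_threshold.add(term)

    # What remains in the map are this shard's candidate rare terms;
    # the coordinating node merges these maps and filters across shards.
    return counts, over_threshold

# "swing" appears once, "jazz" twice, "rock" three times:
terms = ["jazz", "rock", "swing", "rock", "jazz", "rock"]
rare, common = collect_rare_terms(terms, max_doc_count=1)
print(rare)    # {'swing': 1}
print(common)  # {'jazz', 'rock'}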
Precision

Although the internal CuckooFilter is approximate in nature, the false-negative rate can be controlled with a precision parameter. This allows the user to trade more runtime memory for more accurate results.

The default precision is 0.001, and the smallest (i.e. most accurate, with the largest memory overhead) is 0.00001.
Below are some charts which demonstrate how the accuracy of the aggregation is affected by precision and the number of distinct terms.

The X-axis shows the number of distinct values the aggregation has seen, and the Y-axis shows the percent error. Each line series represents one "rarity" condition (ranging from one rare item to 100,000 rare items). For example, the orange "10" line means ten of the values were "rare" (doc_count == 1), out of 1-20m distinct values (where the rest of the values had doc_count > 1).
[Accuracy charts for precision 0.01, 0.001 (the default), and 0.0001 are not reproduced here.]
The default precision of 0.001 maintains an accuracy of < 2.5% for the tested conditions, and accuracy slowly degrades in a controlled, linear fashion as the number of distinct values increases.

The default precision of 0.001 has a memory profile of 1.748e-6 MB * n (about 1.75 bytes per distinct value), where n is the number of distinct values the aggregation has seen. It can also be roughly eyeballed, e.g. 20 million unique values is about 30mb of memory. The memory usage is linear in the number of distinct values regardless of which precision is chosen; the precision only affects the slope of the memory profile. [Chart of memory usage versus distinct values at each precision not reproduced here.]
For comparison, an equivalent terms aggregation at 20 million buckets would be roughly 20m * 69b == ~1.38gb (with 69 bytes being a very optimistic estimate of an empty bucket cost, far lower than what the circuit breaker accounts for). So although the rare_terms agg is relatively heavy, it is still orders of magnitude smaller than the equivalent terms aggregation.
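To make the arithmetic explicit, here is a small back-of-the-envelope sketch in Python using the per-value cost estimates quoted above (these are the page's estimates, not measurements):

n = 20_000_000  # distinct values seen by the aggregation

# rare_terms at the default precision of 0.001:
rare_terms_mb = 1.748e-6 * n     # ~35 MB

# terms agg at ~69 bytes per (empty) bucket, an optimistic estimate:
terms_gb = n * 69 / 1e9          # ~1.38 GB

print(f"rare_terms: ~{rare_terms_mb:.0f} MB, terms: ~{terms_gb:.2f} GB")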
Filtering Values

It is possible to filter the values for which buckets will be created. This can be done using the include and exclude parameters, which are based on regular expression strings or arrays of exact values. Additionally, include clauses can filter using partition expressions.
Filtering Values with regular expressions

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre",
                "include": "swi*",
                "exclude": "electro*"
            }
        }
    }
}

In the above example, buckets will be created for all the tags that start with swi, except those starting with electro (so the tag swing will be aggregated but not electro_swing). The include regular expression determines which values are "allowed" to be aggregated, while the exclude determines the values that should not be aggregated. When both are defined, the exclude has precedence: the include is evaluated first, and only then the exclude. The syntax is the same as regexp queries.
Filtering Values with exact values

For matching based on exact values, the include and exclude parameters can simply take an array of strings that represent the terms as they are found in the index:

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre",
                "include": ["swing", "rock"],
                "exclude": ["jazz"]
            }
        }
    }
}
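As noted above, include clauses can also take a partition expression. Here is a sketch of that form, borrowing the partition syntax documented for the terms aggregation (the partition numbers are illustrative):

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre",
                "include": {
                    "partition": 0,
                    "num_partitions": 10
                }
            }
        }
    }
}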
Missing value

The missing parameter defines how documents that are missing a value should be treated. By default they will be ignored, but it is also possible to treat them as if they had a value.
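For example, to treat documents that lack the field as if they carried a placeholder value (the "N/A" string is purely illustrative):

GET /_search
{
    "aggs": {
        "genres": {
            "rare_terms": {
                "field": "genre",
                "missing": "N/A"
            }
        }
    }
}

Documents without a genre field then fall into the "N/A" bucket, subject to the same max_doc_count rules as any other term.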
Nested, RareTerms, and scoring sub-aggregations

The RareTerms aggregation has to operate in breadth_first mode, since it needs to prune terms as doc count thresholds are breached. This requirement means the RareTerms aggregation is incompatible with certain combinations of aggregations that require depth_first mode. In particular, scoring sub-aggregations that are inside a nested aggregation force the entire aggregation tree to run in depth_first mode. This will throw an exception, since RareTerms is unable to process depth_first.

As a concrete example, if the rare_terms aggregation is the child of a nested aggregation, and one of the child aggregations of rare_terms needs document scores (like a top_hits aggregation), this will throw an exception.
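For illustration, a request shaped like the following would trigger that exception (the comments index layout and field names are hypothetical):

GET /_search
{
    "aggs": {
        "comments": {
            "nested": {
                "path": "comments"
            },
            "aggs": {
                "rare_authors": {
                    "rare_terms": {
                        "field": "comments.author"
                    },
                    "aggs": {
                        "sample": {
                            "top_hits": {
                                "size": 1
                            }
                        }
                    }
                }
            }
        }
    }
}

Here the top_hits sub-aggregation needs document scores, which forces depth_first execution; combined with rare_terms inside nested, this is exactly the shape that cannot run.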