Search your data
A search query, or query, is a request for information about data in Elasticsearch data streams or indices.
You can think of a query as a question, written in a way Elasticsearch understands. Depending on your data, you can use a query to get answers to questions like:
- What processes on my server take longer than 500 milliseconds to respond?
- What users on my network ran regsvr32.exe within the last week?
- What pages on my website contain a specific word or phrase?
A search consists of one or more queries that are combined and sent to Elasticsearch. Documents that match a search’s queries are returned in the hits, or search results, of the response.
A search may also contain additional information used to better process its queries. For example, a search may be limited to a specific index or only return a specific number of results.
Run a search
You can use the search API to search and aggregate data stored in Elasticsearch data streams or indices. The API's query request body parameter accepts queries written in Query DSL.
The following request searches my-index-000001 using a match query. This query matches documents with a user.id value of kimchy.
GET /my-index-000001/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
The API response returns the top 10 documents matching the query in the hits.hits property.
{ "took": 5, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 1.3862942, "hits": [ { "_index": "my-index-000001", "_id": "kxWFcnMByiguvud1Z8vC", "_score": 1.3862942, "_source": { "@timestamp": "2099-11-15T14:12:12", "http": { "request": { "method": "get" }, "response": { "bytes": 1070000, "status_code": 200 }, "version": "1.1" }, "message": "GET /search HTTP/1.1 200 1070000", "source": { "ip": "127.0.0.1" }, "user": { "id": "kimchy" } } } ] } }
Define fields that exist only in a query
Instead of indexing your data and then searching it, you can define runtime fields that only exist as part of your search query. You specify a runtime_mappings section in your search request to define the runtime field, which can optionally include a Painless script.
For example, the following query defines a runtime field called day_of_week. The included script calculates the day of the week based on the value of the @timestamp field, and uses emit to return the calculated value. The query also includes a terms aggregation that operates on day_of_week.
GET /my-index-000001/_search { "runtime_mappings": { "day_of_week": { "type": "keyword", "script": { "source": """emit(doc['@timestamp'].value.dayOfWeekEnum .getDisplayName(TextStyle.FULL, Locale.ROOT))""" } } }, "aggs": { "day_of_week": { "terms": { "field": "day_of_week" } } } }
The response includes an aggregation based on the day_of_week runtime field. Under buckets is a key with a value of Sunday. The query dynamically calculated this value based on the script defined in the day_of_week runtime field without ever indexing the field.
{
  ...
  "aggregations" : {
    "day_of_week" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "Sunday",
          "doc_count" : 5
        }
      ]
    }
  }
}
Common search options
You can use the following options to customize your searches.
Query DSL
Query DSL supports a variety of query types you can mix and match
to get the results you want. Query types include:
- Boolean and other compound queries, which let you combine queries and match results based on multiple criteria
- Term-level queries for filtering and finding exact matches
- Full text queries, which are commonly used in search engines
- Geo and spatial queries
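For example, a compound bool query can combine a full-text match with a term-level filter. The following request is a minimal sketch that assumes the message and http.response.status_code fields from the sample document shown earlier:
GET /my-index-000001/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "GET /search" } }
      ],
      "filter": [
        { "term": { "http.response.status_code": 200 } }
      ]
    }
  }
}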
Aggregations
You can use search aggregations to get statistics and
other analytics for your search results. Aggregations help you answer questions
like:
- What’s the average response time for my servers?
- What are the top IP addresses hit by users on my network?
- What is the total transaction revenue by customer?
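As an illustrative sketch, the following request uses an avg metrics aggregation on the http.response.bytes field from the earlier sample document. Setting size to 0 returns only the aggregation result, not the individual hits:
GET /my-index-000001/_search
{
  "size": 0,
  "aggs": {
    "avg_response_bytes": {
      "avg": {
        "field": "http.response.bytes"
      }
    }
  }
}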
Search multiple data streams and indices
You can use comma-separated values and grep-like index patterns to search
several data streams and indices in the same request. You can even boost search
results from specific indices. See Search multiple data streams and indices.
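For example, the following sketch searches two indices by name plus any data stream or index matching a wildcard pattern in a single request (the names are placeholders):
GET /my-index-000001,my-index-000002,logs-*/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}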
Paginate search results
By default, searches return only the top 10 matching hits. To retrieve
more or fewer documents, see Paginate search results.
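For example, the following sketch uses the from and size parameters to skip the first 10 hits and return the next 20:
GET /my-index-000001/_search
{
  "from": 10,
  "size": 20,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}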
Retrieve selected fields
The search response's hits.hits property includes the full document _source for each hit. To retrieve only a subset of the _source or other fields, see Retrieve selected fields.
Sort search results
By default, search hits are sorted by _score, a relevance score that measures how well each document matches the query. To customize the calculation of these scores, use the script_score query. To sort search hits by other field values, see Sort search results.
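For example, the following sketch sorts hits by the @timestamp field in descending order, using _score as a tiebreaker:
GET /my-index-000001/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  },
  "sort": [
    { "@timestamp": "desc" },
    "_score"
  ]
}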
Run an async search
Elasticsearch searches are designed to run on large volumes of data quickly, often
returning results in milliseconds. For this reason, searches are
synchronous by default. The search request waits for complete results before
returning a response.
However, complete results can take longer for searches across large data sets or multiple clusters.
To avoid long waits, you can run an asynchronous, or async, search instead. An async search lets you retrieve partial results for a long-running search now and get complete results later.
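As a minimal sketch, the following request submits a search to the async search API and waits up to two seconds for complete results; if the search is still running after that, the response includes an ID you can use to retrieve results later with GET /_async_search/<id> (the wait duration is illustrative):
POST /my-index-000001/_async_search?wait_for_completion_timeout=2s
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}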
Search timeout
By default, search requests don't time out. The request waits for complete results from each shard before returning a response.
While async search is designed for long-running
searches, you can also use the timeout
parameter to specify a duration you’d
like to wait on each shard to complete. Each shard collects hits within the
specified time period. If collection isn’t finished when the period ends, Elasticsearch
uses only the hits accumulated up to that point. The overall latency of a search
request depends on the number of shards needed for the search and the number of
concurrent shard requests.
GET /my-index-000001/_search { "timeout": "2s", "query": { "match": { "user.id": "kimchy" } } }
To set a cluster-wide default timeout for all search requests, configure search.default_search_timeout using the cluster settings API. This global timeout duration is used if no timeout argument is passed in the request. If the global search timeout expires before the search request finishes, the request is cancelled using task cancellation. The search.default_search_timeout setting defaults to -1 (no timeout).
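For example, the following sketch persistently sets a cluster-wide default timeout of 30 seconds (the duration is illustrative):
PUT /_cluster/settings
{
  "persistent": {
    "search.default_search_timeout": "30s"
  }
}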
Search cancellation
You can cancel a search request using the task management API. Elasticsearch also automatically cancels a search request when your client's HTTP connection closes. We recommend you set up your client to close HTTP connections when a search request is aborted or times out.
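As a sketch, you can list running search tasks and then cancel one by its task ID (the ID shown below is a placeholder):
GET /_tasks?actions=*search&detailed

POST /_tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel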
Track total hits
Generally the total hit count can't be computed accurately without visiting all matches, which is costly for queries that match lots of documents. The track_total_hits parameter allows you to control how the total number of hits should be tracked.
Given that it is often enough to have a lower bound of the number of hits, such as "there are at least 10000 hits", the default is set to 10,000. This means that requests will count the total hits accurately up to 10,000 hits. It is a good trade-off to speed up searches if you don't need the accurate number of hits after a certain threshold.
When set to true, the search response will always track the number of hits that match the query accurately (e.g. total.relation will always be equal to "eq" when track_total_hits is set to true). Otherwise the "total.relation" returned in the "total" object in the search response determines how the "total.value" should be interpreted. A value of "gte" means that the "total.value" is a lower bound of the total hits that match the query, and a value of "eq" indicates that "total.value" is the accurate count.
GET my-index-000001/_search { "track_total_hits": true, "query": { "match" : { "user.id" : "elkbee" } } }
... returns:
{ "_shards": ... "timed_out": false, "took": 100, "hits": { "max_score": 1.0, "total" : { "value": 2048, "relation": "eq" }, "hits": ... } }
It is also possible to set track_total_hits to an integer. For instance, the following query will accurately track the total hit count that matches the query up to 100 documents:
GET my-index-000001/_search { "track_total_hits": 100, "query": { "match": { "user.id": "elkbee" } } }
The hits.total.relation in the response will indicate if the value returned in hits.total.value is accurate ("eq") or a lower bound of the total ("gte").
For instance the following response:
{ "_shards": ... "timed_out": false, "took": 30, "hits": { "max_score": 1.0, "total": { "value": 42, "relation": "eq" }, "hits": ... } }
... indicates that the number of hits returned in the total
is accurate.
If the total number of hits that match the query is greater than the
value set in track_total_hits
, the total hits in the response
will indicate that the returned value is a lower bound:
{ "_shards": ... "hits": { "max_score": 1.0, "total": { "value": 100, "relation": "gte" }, "hits": ... } }
If you don't need to track the total number of hits at all you can improve query times by setting this option to false:
GET my-index-000001/_search { "track_total_hits": false, "query": { "match": { "user.id": "elkbee" } } }
Finally you can force an accurate count by setting "track_total_hits" to true in the request.
Quickly check for matching docs
If you only want to know if there are any documents matching a specific query, you can set the size to 0 to indicate that we are not interested in the search results. You can also set terminate_after to 1 to indicate that the query execution can be terminated whenever the first matching document was found (per shard).
GET /_search?q=user.id:elkbee&size=0&terminate_after=1
terminate_after is always applied after the post_filter and stops the query as well as the aggregation executions when enough hits have been collected on the shard. Though the doc count on aggregations may not reflect the hits.total in the response since aggregations are applied before the post filtering.
The response will not contain any hits as the size was set to 0. The hits.total will be either equal to 0, indicating that there were no matching documents, or greater than 0, meaning that there were at least as many documents matching the query when it was early terminated. Also if the query was terminated early, the terminated_early flag will be set to true in the response.
{ "took": 3, "timed_out": false, "terminated_early": true, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": null, "hits": [] } }
The took
time in the response contains the milliseconds that this request
took for processing, beginning quickly after the node received the query, up
until all search related work is done and before the above JSON is returned
to the client. This means it includes the time spent waiting in thread pools,
executing a distributed search across the whole cluster and gathering all the
results.