Search your data

A search query, or query, is a request for information about data in Elasticsearch indices.

You can think of a query as a question, written in a way Elasticsearch understands. Depending on your data, you can use a query to get answers to questions like:

  • What processes on my server take longer than 500 milliseconds to respond?
  • What users on my network ran regsvr32.exe within the last week?
  • What pages on my website contain a specific word or phrase?

A search consists of one or more queries that are combined and sent to Elasticsearch. Documents that match a search’s queries are returned in the hits, or search results, of the response.

A search may also contain additional information used to better process its queries. For example, a search may be limited to a specific index or only return a specific number of results.

Run a search

You can use the search API to search data stored in one or more Elasticsearch indices.

The API can run two types of searches, depending on how you provide queries:

URI searches
Queries are provided through a query parameter. URI searches tend to be simpler and best suited for testing.
Request body searches
Queries are provided through the JSON body of the API request. These queries are written in Query DSL. We recommend using request body searches in most production use cases.

If you specify a query in both the URI and request body, the search API request runs only the URI query.
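
For example, in a request like the following (a minimal sketch; the index name and user.id values are only illustrations), only the query in the q parameter runs, and the match query in the request body is ignored:

GET /my-index-000001/_search?q=user.id:kimchy
{
  "query": {
    "match": {
      "user.id": "elkbee"
    }
  }
}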

Run a URI search

You can use the search API’s q query string parameter to run a search in the request’s URI. The q parameter only accepts queries written in Lucene’s query string syntax.

The following URI search matches documents with a user.id value of kimchy.

GET /my-index-000001/_search?q=user.id:kimchy

The API returns the following response.

By default, the hits.hits property returns the top 10 documents matching the query. To retrieve more documents, see Paginate search results.

The response sorts documents in hits.hits by _score, a relevance score that measures how well each document matches the query.

The hits.hits property also includes the _source for each matching document. To retrieve only a subset of the _source or other fields, see Retrieve selected fields.

{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 1.3862942,
    "hits": [
      {
        "_index": "my-index-000001",
        "_type": "_doc",
        "_id": "kxWFcnMByiguvud1Z8vC",
        "_score": 1.3862942,
        "_source": {
          "@timestamp": "2099-11-15T14:12:12",
          "http": {
            "request": {
              "method": "get"
            },
            "response": {
              "bytes": 1070000,
              "status_code": 200
            },
            "version": "1.1"
          },
          "message": "GET /search HTTP/1.1 200 1070000",
          "source": {
            "ip": "127.0.0.1"
          },
          "user": {
            "id": "kimchy"
          }
        }
      }
    ]
  }
}
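
As the notes above mention, you can page past the first results and return only part of each document. The following request is a minimal sketch: from, size, and _source are standard search parameters, but the chosen values and field list are only illustrations. It skips the first 10 hits, returns the next 10, and limits _source to two fields:

GET /my-index-000001/_search?q=user.id:kimchy&from=10&size=10&_source=@timestamp,user.id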

Run a request body search

You can use the search API’s query request body parameter to provide a query as a JSON object, written in Query DSL.

The following request body search uses the match query to match documents with a user.id value of kimchy.

GET /my-index-000001/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

Search multiple indices

To search multiple indices, add them as comma-separated values in the search API request path.

The following request searches the my-index-000001 and my-index-000002 indices.

GET /my-index-000001,my-index-000002/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

You can also search multiple indices using an index pattern.

The following request targets the wildcard pattern my-index-*. The request searches any indices in the cluster that start with my-index-.

GET /my-index-*/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

To search all indices in a cluster, omit the index name from the request path. Alternatively, you can use _all or * in place of the index name.

The following requests are equivalent and search all indices in the cluster.

GET /_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

GET /_all/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

GET /*/_search
{
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}

Index boost

When searching multiple indices, you can use the indices_boost parameter to boost results from one or more specified indices. This is useful when hits coming from one index matter more than hits coming from another index.

GET /_search
{
  "indices_boost": [
    { "my-index-000001": 1.4 },
    { "my-index-000002": 1.3 }
  ]
}

You can also use aliases and index patterns in the array. The order of the entries controls which boost applies when an index matches more than one entry.

GET /_search
{
  "indices_boost": [
    { "my-alias":  1.4 },
    { "my-index*": 1.3 }
  ]
}

This is important when you use aliases or wildcard expressions: if an index matches multiple entries, the first matching entry is used. For example, if an index is included in both my-alias and my-index*, a boost value of 1.4 is applied.

Preference

You can use the preference parameter to control the shard copies on which a search runs. By default, Elasticsearch selects from the available shard copies in an unspecified order, taking the allocation awareness and adaptive replica selection configuration into account. However, it may sometimes be desirable to try and route certain searches to certain sets of shard copies.

A possible use case would be to make use of per-copy caches like the request cache. Doing this, however, runs contrary to the idea of search parallelization and can create hotspots on certain nodes because the load might not be evenly distributed anymore.

The preference is a query string parameter which can be set to:

_only_local

The operation will be executed only on shards allocated to the local node.

_local

The operation will be executed on shards allocated to the local node if possible, and will fall back to other shards if not.

_prefer_nodes:abc,xyz

The operation will be executed on nodes with one of the provided node ids (abc or xyz in this case) if possible. If suitable shard copies exist on more than one of the selected nodes then the order of preference between these copies is unspecified.

_shards:2,3

Restricts the operation to the specified shards (2 and 3 in this case). This preference can be combined with other preferences, but it has to appear first: _shards:2,3|_local

_only_nodes:abc*,x*yz,...

Restricts the operation to nodes specified according to the node specification. If suitable shard copies exist on more than one of the selected nodes then the order of preference between these copies is unspecified.

Custom (string) value

Any value that does not start with _. If two searches both give the same custom string value for their preference and the underlying cluster state does not change then the same ordering of shards will be used for the searches. This does not guarantee that the exact same shards will be used each time: the cluster state, and therefore the selected shards, may change for a number of reasons including shard relocations and shard failures, and nodes may sometimes reject searches causing fallbacks to alternative nodes. However, in practice the ordering of shards tends to remain stable for long periods of time. A good candidate for a custom preference value is something like the web session id or the user name.

For instance, use the user’s session ID xyzabc123 as follows:

GET /_search?preference=xyzabc123
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  }
}

This can be an effective strategy for increasing use of the request cache: a user's repeated, similar searches always hit the same cache, while requests from different users are still spread across all shard copies.

The _only_local preference guarantees only to use shard copies on the local node, which is sometimes useful for troubleshooting. All other options do not fully guarantee that any particular shard copies are used in a search, and on a changing index this may mean that repeated searches may yield different results if they are executed on different shard copies which are in different refresh states.

Search type

A distributed search can be executed along different paths. The distributed search operation needs to be scattered to all the relevant shards, and all the results then gathered back. There are several ways to perform this scatter/gather execution, particularly with search engines.

One of the questions when executing a distributed search is how many results to retrieve from each shard. For example, if we have 10 shards and want the top 10 results, the first shard might hold the 10 most relevant results overall, with the results from the other shards ranking below them. For this reason, to ensure correct results, the request needs to retrieve the top 10 results from every shard, sort them, and only then return the top 10.

Another question, which relates to the search engine, is the fact that each shard stands on its own. When a query is executed on a specific shard, it does not take into account term frequencies and other search engine information from the other shards. If we want to support accurate ranking, we would need to first gather the term frequencies from all shards to calculate global term frequencies, then execute the query on each shard using these global frequencies.

Also, because the results need to be sorted, retrieving a large document set, or even scrolling through it, while maintaining the correct sort order can be a very expensive operation. For large result set scrolling, it is best to sort by _doc if the order in which documents are returned is not important.
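
For example, a scroll search sorted by _doc might look like the following sketch (the scroll duration, page size, and match_all query are only illustrations):

GET /my-index-000001/_search?scroll=1m
{
  "size": 1000,
  "sort": [ "_doc" ],
  "query": {
    "match_all": {}
  }
}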

Elasticsearch is very flexible and allows you to control the type of search to execute on a per-request basis. The type can be configured by setting the search_type parameter in the query string. The types are:

Query Then Fetch

Parameter value: query_then_fetch.

The request is processed in two phases. In the first phase, the query is forwarded to all involved shards. Each shard executes the search and generates a sorted list of results, local to that shard. Each shard returns just enough information to the coordinating node to allow it to merge and re-sort the shard level results into a globally sorted set of results, of maximum length size.

During the second phase, the coordinating node requests the document content (and highlighted snippets, if any) from only the relevant shards.

GET my-index-000001/_search?search_type=query_then_fetch

This is the default setting if you do not specify a search_type in your request.

Dfs, Query Then Fetch

Parameter value: dfs_query_then_fetch.

Same as "Query Then Fetch", except for an initial scatter phase which goes and computes the distributed term frequencies for more accurate scoring.

GET my-index-000001/_search?search_type=dfs_query_then_fetch

Track total hits

Generally the total hit count can’t be computed accurately without visiting all matches, which is costly for queries that match many documents. The track_total_hits parameter allows you to control how the total number of hits is tracked. Because it is often enough to have a lower bound on the number of hits, such as "there are at least 10,000 hits", the default is set to 10,000. This means that requests count the total hit count accurately up to 10,000 hits. This is a good trade-off to speed up searches if you don’t need the accurate number of hits beyond a certain threshold.

When set to true, the search response always tracks the number of hits that match the query accurately (for example, total.relation will always be equal to "eq" when track_total_hits is set to true). Otherwise, the total.relation returned in the total object in the search response determines how total.value should be interpreted: a value of "gte" means that total.value is a lower bound on the total hits that match the query, and a value of "eq" indicates that total.value is the accurate count.

GET my-index-000001/_search
{
  "track_total_hits": true,
  "query": {
    "match": {
      "user.id": "elkbee"
    }
  }
}

... returns:

{
  "_shards": ...
  "timed_out": false,
  "took": 100,
  "hits": {
    "max_score": 1.0,
    "total" : {
      "value": 2048,    
      "relation": "eq"  
    },
    "hits": ...
  }
}

Here, hits.total.value is the total number of hits that match the query, and a hits.total.relation of "eq" means the count is accurate.

It is also possible to set track_total_hits to an integer. For instance the following query will accurately track the total hit count that match the query up to 100 documents:

GET my-index-000001/_search
{
  "track_total_hits": 100,
  "query": {
    "match": {
      "user.id": "elkbee"
    }
  }
}

The hits.total.relation in the response will indicate if the value returned in hits.total.value is accurate ("eq") or a lower bound of the total ("gte").

For instance, the following response:

{
  "_shards": ...
  "timed_out": false,
  "took": 30,
  "hits": {
    "max_score": 1.0,
    "total": {
      "value": 42,         
      "relation": "eq"     
    },
    "hits": ...
  }
}

Here, 42 documents match the query and the count is accurate ("eq"): the number of hits returned in the total is exact.

If the total number of hits that match the query is greater than the value set in track_total_hits, the total hits in the response will indicate that the returned value is a lower bound:

{
  "_shards": ...
  "hits": {
    "max_score": 1.0,
    "total": {
      "value": 100,         
      "relation": "gte"     
    },
    "hits": ...
  }
}

Here, at least 100 documents match the query, and the value is a lower bound ("gte").

If you don’t need to track the total number of hits at all you can improve query times by setting this option to false:

GET my-index-000001/_search
{
  "track_total_hits": false,
  "query": {
    "match": {
      "user.id": "elkbee"
    }
  }
}

... returns:

{
  "_shards": ...
  "timed_out": false,
  "took": 10,
  "hits": {             
    "max_score": 1.0,
    "hits": ...
  }
}

The total number of hits is unknown, because hits.total is omitted from the response.

Finally, you can force an accurate count by setting track_total_hits to true in the request.

Quickly check for matching docs

If you only want to know whether any documents match a specific query, you can set size to 0 to indicate that you are not interested in the search results. You can also set terminate_after to 1 to indicate that query execution can be terminated as soon as the first matching document is found (per shard).

GET /_search?q=user.id:elkbee&size=0&terminate_after=1

terminate_after is always applied after the post_filter and stops the query as well as the aggregation executions when enough hits have been collected on the shard. Note, though, that the doc count on aggregations may not reflect hits.total in the response, since aggregations are applied before the post filtering.
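
As a sketch of that interaction (the aggregation, post_filter, and field names are only illustrations based on the sample document earlier on this page), the following request terminates early on each shard while still running an aggregation and a post filter:

GET /my-index-000001/_search
{
  "size": 0,
  "terminate_after": 1,
  "query": {
    "match": { "user.id": "elkbee" }
  },
  "aggs": {
    "status_codes": {
      "terms": { "field": "http.response.status_code" }
    }
  },
  "post_filter": {
    "term": { "http.request.method": "get" }
  }
}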

The response will not contain any hits, because size was set to 0. hits.total will be either equal to 0, indicating that there were no matching documents, or greater than 0, meaning that at least that many documents matched the query when it was terminated early. Also, if the query was terminated early, the terminated_early flag will be set to true in the response.

{
  "took": 3,
  "timed_out": false,
  "terminated_early": true,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped" : 0,
    "failed": 0
  },
  "hits": {
    "total" : {
        "value": 1,
        "relation": "eq"
    },
    "max_score": null,
    "hits": []
  }
}

The took time in the response contains the number of milliseconds this request took to process, starting shortly after the node received the query and ending just before the above JSON is returned to the client. This means it includes the time spent waiting in thread pools, executing the distributed search across the whole cluster, and gathering all the results.