Profiling Queries

The details provided by the Profile API directly expose Lucene class names and concepts, which means that complete interpretation of the results requires fairly advanced knowledge of Lucene. This page attempts to give a crash course in how Lucene executes queries so that you can use the Profile API to successfully diagnose and debug queries, but it is only an overview. For complete understanding, please refer to Lucene’s documentation and, in places, the code.

With that said, a complete understanding is often not required to fix a slow query. It is usually sufficient to see that a particular component of a query is slow, without necessarily understanding why, for example, the advance phase of that query is the culprit.

query Section

The query section contains detailed timing of the query tree executed by Lucene on a particular shard. The overall structure of this query tree will resemble your original Elasticsearch query, but may be slightly (or sometimes very) different. It will also use similar but not always identical naming. Using our previous match query example, let’s analyze the query section:

"query": [
    {
       "type": "BooleanQuery",
       "description": "message:message message:number",
       "time": "1.873811000ms",
       "breakdown": {...},               
       "children": [
          {
             "type": "TermQuery",
             "description": "message:message",
             "time": "0.3919430000ms",
             "breakdown": {...}
          },
          {
             "type": "TermQuery",
             "description": "message:number",
             "time": "0.2106820000ms",
             "breakdown": {...}
          }
       ]
    }
]

The breakdown timings are omitted for simplicity

Based on the profile structure, we can see that our match query was rewritten by Lucene into a BooleanQuery with two clauses (both holding a TermQuery). The "type" field displays the Lucene class name, and often aligns with the equivalent name in Elasticsearch. The "description" field displays the Lucene explanation text for the query, and is made available to help differentiate between parts of your query (e.g. both children in this example are TermQuery objects and would appear identical without their descriptions).

The "time" field shows that this query took ~15ms for the entire BooleanQuery to execute. The recorded time is inclusive of all children.

The "breakdown" field will give detailed stats about how the time was spent, we’ll look at that in a moment. Finally, the "children" array lists any sub-queries that may be present. Because we searched for two values ("search test"), our BooleanQuery holds two children TermQueries. They have identical information (type, time, breakdown, etc). Children are allowed to have their own children.

Timing Breakdown

The "breakdown" component lists detailed timing statistics about low-level Lucene execution:

"breakdown": {
   "score": 51306,
   "score_count": 4,
   "build_scorer": 2935582,
   "build_scorer_count": 1,
   "match": 0,
   "match_count": 0,
   "create_weight": 919297,
   "create_weight_count": 1,
   "next_doc": 53876,
   "next_doc_count": 5,
   "advance": 0,
   "advance_count": 0
}

Timings are listed in wall-clock nanoseconds and are not normalized at all. All caveats about the overall "time" apply here. The intention of the breakdown is to give you a feel for A) what machinery in Lucene is actually eating time, and B) the magnitude of differences in times between the various components. Like the overall time, the breakdown is inclusive of all children times.

The meaning of the stats is as follows:

All parameters:

create_weight

A Query in Lucene must be capable of reuse across multiple IndexSearchers (think of an IndexSearcher as the engine that executes a search against a specific Lucene index). This puts Lucene in a tricky spot, since many queries need to accumulate temporary state/statistics associated with the index they are being used against, yet the Query contract mandates that a Query be immutable.

To get around this, Lucene asks each query to generate a Weight object, which acts as a temporary context object to hold state associated with this particular (IndexSearcher, Query) tuple. The create_weight metric shows how long this process takes.

build_scorer

This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that iterates over matching documents and generates a score per document (e.g. how well does "foo" match the document?). Note, this records the time required to generate the Scorer object, not to actually score the documents. Some queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc.

This may also show timing associated with caching, if caching is enabled and/or applicable for the query.

next_doc

The Lucene method next_doc returns the doc ID of the next document matching the query. This statistic shows the time it takes to determine which document is the next match, a process that varies considerably depending on the nature of the query. next_doc is a specialized form of advance() which is more convenient for many queries in Lucene. It is equivalent to advance(docId() + 1).

advance

advance is the "lower level" version of next_doc: it serves the same purpose of finding the next matching doc, but requires the calling query to perform extra tasks such as identifying and moving past skips, etc. However, not all queries can use next_doc, so advance is also timed for those queries.

Conjunctions (e.g. must clauses in a boolean) are typical consumers of advance

matches

Some queries, such as phrase queries, match documents using a "Two Phase" process. First, the document is "approximately" matched, and if it matches approximately, it is checked a second time with a more rigorous (and expensive) process. The second phase verification is what the matches statistic measures.

For example, a phrase query first checks a document approximately by ensuring all terms in the phrase are present in the doc. If all the terms are present, it then executes the second phase verification to ensure the terms are in-order to form the phrase, which is relatively more expensive than just checking for presence of the terms.

Because this two-phase process is only used by a handful of queries, the match statistic will often be zero (see the profiled phrase query example after this list).

score

This records the time taken to score a particular document via its Scorer.

*_count

Records the number of invocations of the particular method. For example, "next_doc_count": 2 means the nextDoc() method was called on two different documents. This can be used to help judge how selective queries are, by comparing counts between different query components.
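To see the two-phase match statistic in action, you can profile a phrase query, since phrase queries are among the handful of queries that use the approximate-then-verify process described above. This is a hedged sketch; the index, field and phrase are assumptions, and the exact breakdown values depend on your data.

GET /test/_search
{
  "profile": true,
  "query": {
    "match_phrase": {
      "message": "search test"
    }
  }
}

In the resulting breakdown for the phrase query, the match and match_count entries record the second-phase verification of documents that passed the approximate check, while next_doc, score, etc. behave as described above.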

collectors Section

The Collectors portion of the response shows high-level execution details. Lucene works by defining a "Collector" which is responsible for coordinating the traversal, scoring and collection of matching documents. Collectors are also how a single query can record aggregation results, execute unscoped "global" queries, execute post-query filters, etc.

Looking at the previous example:

"collector": [
    {
       "name": "SimpleTopScoreDocCollector",
       "reason": "search_top_hits",
       "time": "0.06989100000ms"
    }
]

We see a single collector named SimpleTopScoreDocCollector. This is the default "scoring and sorting" Collector used by Elasticsearch. The "reason" field attempts to give a plain English description of the class name. The "time" is similar to the time in the query tree: a wall-clock time inclusive of all children. Similarly, "children" lists all sub-collectors.

It should be noted that Collector times are independent of the Query times. They are calculated, combined and normalized independently! Due to the nature of Lucene’s execution, it is impossible to "merge" the times from the Collectors into the Query section, so they are displayed in separate portions.

For reference, the various collector reasons are:

search_top_hits

A collector that scores and sorts documents. This is the most common collector and will be seen in most simple searches

search_count

A collector that only counts the number of documents that match the query, but does not fetch the source. This is seen when size: 0 is specified (a profiled example is shown after this list)

search_terminate_after_count

A collector that terminates search execution after n matching documents have been found. This is seen when the terminate_after query parameter has been specified

search_min_score

A collector that only returns matching documents that have a score greater than n. This is seen when the top-level parameter min_score has been specified.

search_multi

A collector that wraps several other collectors. This is seen when combinations of search, aggregations, global aggs and post_filters are combined in a single search.

search_timeout

A collector that halts execution after a specified period of time. This is seen when a timeout top-level parameter has been specified.

aggregation

A collector that Elasticsearch uses to run aggregations against the query scope. A single aggregation collector is used to collect documents for all aggregations, so you will see a list of aggregations in the name rather than a single one.

global_aggregation

A collector that executes an aggregation against the global query scope, rather than the specified query. Because the global scope is necessarily different from the executed query, it must execute its own match_all query (which you will see added to the Query section) to collect your entire dataset
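As an illustration of the search_count reason above, profiling a search with size: 0 produces a collector that only counts matching documents. This is a sketch under assumed index and field names; the collector names and timings in your response will vary.

GET /test/_search
{
  "profile": true,
  "size": 0,
  "query": {
    "term": {
      "message": {
        "value": "search"
      }
    }
  }
}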

rewrite Section

All queries in Lucene undergo a "rewriting" process. A query (and its sub-queries) may be rewritten one or more times, and the process continues until the query stops changing. This process allows Lucene to perform optimizations, such as removing redundant clauses or replacing one query with a more efficient execution path. For example, a Boolean → Boolean → TermQuery can be rewritten to a TermQuery, because all the Booleans are unnecessary in this case.

The rewriting process is complex and difficult to display, since queries can change drastically. Rather than showing the intermediate results, the total rewrite time is simply displayed as a value (in nanoseconds). This value is cumulative and contains the total time for all queries being rewritten.
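In the response, this total appears as a rewrite_time field that sits alongside the query and collector sections of each search, as in this abbreviated fragment (the value shown here is taken from the full example in the next section):

"searches": [
   {
      "query": [...],
      "rewrite_time": 7208,
      "collector": [...]
   }
]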

A more complex example

To demonstrate a slightly more complex query and the associated results, we can profile the following query:

GET /test/_search
{
  "profile": true,
  "query": {
    "term": {
      "message": {
        "value": "search"
      }
    }
  },
  "aggs": {
    "non_global_term": {
      "terms": {
        "field": "agg"
      },
      "aggs": {
        "second_term": {
          "terms": {
            "field": "sub_agg"
          }
        }
      }
    },
    "another_agg": {
      "cardinality": {
        "field": "aggB"
      }
    },
    "global_agg": {
      "global": {},
      "aggs": {
        "my_agg2": {
          "terms": {
            "field": "globalAgg"
          }
        }
      }
    }
  },
  "post_filter": {
    "term": {
      "my_field": "foo"
    }
  }
}

This example has:

  • A query
  • A scoped aggregation
  • A global aggregation
  • A post_filter

And the response:

{
   "profile": {
         "shards": [
            {
               "id": "[P6-vulHtQRWuD4YnubWb7A][test][0]",
               "searches": [
                  {
                     "query": [
                        {
                           "type": "TermQuery",
                           "description": "my_field:foo",
                           "time": "0.4094560000ms",
                           "breakdown": {
                              "score": 0,
                              "score_count": 1,
                              "next_doc": 0,
                              "next_doc_count": 2,
                              "match": 0,
                              "match_count": 0,
                              "create_weight": 31584,
                              "create_weight_count": 1,
                              "build_scorer": 377872,
                              "build_scorer_count": 1,
                              "advance": 0,
                              "advance_count": 0
                           }
                        },
                        {
                           "type": "TermQuery",
                           "description": "message:search",
                           "time": "0.3037020000ms",
                           "breakdown": {
                              "score": 0,
                              "score_count": 1,
                              "next_doc": 5936,
                              "next_doc_count": 2,
                              "match": 0,
                              "match_count": 0,
                              "create_weight": 185215,
                              "create_weight_count": 1,
                              "build_scorer": 112551,
                              "build_scorer_count": 1,
                              "advance": 0,
                              "advance_count": 0
                           }
                        }
                     ],
                     "rewrite_time": 7208,
                     "collector": [
                        {
                           "name": "MultiCollector",
                           "reason": "search_multi",
                           "time": "1.378943000ms",
                           "children": [
                              {
                                 "name": "FilteredCollector",
                                 "reason": "search_post_filter",
                                 "time": "0.4036590000ms",
                                 "children": [
                                    {
                                       "name": "SimpleTopScoreDocCollector",
                                       "reason": "search_top_hits",
                                       "time": "0.006391000000ms"
                                    }
                                 ]
                              },
                              {
                                 "name": "BucketCollector: [[non_global_term, another_agg]]",
                                 "reason": "aggregation",
                                 "time": "0.9546020000ms"
                              }
                           ]
                        }
                     ]
                  },
                  {
                     "query": [
                        {
                           "type": "MatchAllDocsQuery",
                           "description": "*:*",
                           "time": "0.04829300000ms",
                           "breakdown": {
                              "score": 0,
                              "score_count": 1,
                              "next_doc": 3672,
                              "next_doc_count": 2,
                              "match": 0,
                              "match_count": 0,
                              "create_weight": 6311,
                              "create_weight_count": 1,
                              "build_scorer": 38310,
                              "build_scorer_count": 1,
                              "advance": 0,
                              "advance_count": 0
                           }
                        }
                     ],
                     "rewrite_time": 1067,
                     "collector": [
                        {
                           "name": "GlobalAggregator: [global_agg]",
                           "reason": "aggregation_global",
                           "time": "0.1226310000ms"
                        }
                     ]
                  }
               ]
            }
         ]
      }
}

As you can see, the output is significantly more verbose than before. All the major portions of the query are represented:

  1. The first TermQuery (my_field:foo) represents the post_filter query
  2. The second TermQuery (message:search) represents the main term query
  3. There is a MatchAllDocsQuery (*:*) which is being executed as a second, distinct search. This was not part of the query specified by the user, but is auto-generated by the global aggregation to provide a global query scope

The Collector tree is fairly straightforward, showing how a single MultiCollector wraps a FilteredCollector to execute the post_filter (which in turn wraps the normal scoring SimpleTopScoreDocCollector) and a BucketCollector to run all scoped aggregations. In the MatchAll search, there is a single GlobalAggregator to run the global aggregation.

Understanding MultiTermQuery output

A special note needs to be made about the MultiTermQuery class of queries. This includes wildcards, regex and fuzzy queries. These queries emit very verbose responses, and are not overly structured.

Essentially, these queries rewrite themselves on a per-segment basis. If you imagine the wildcard query b*, it technically can match any token that begins with the letter "b". It would be impossible to enumerate all possible combinations, so Lucene rewrites the query in the context of the segment being evaluated. E.g. one segment may contain the tokens [bar, baz], so the query rewrites to a BooleanQuery combination of "bar" and "baz". Another segment may only have the token [bakery], so the query rewrites to a single TermQuery for "bakery".
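To observe this behavior, you can profile a wildcard query along the following lines. This is an illustrative sketch: the index and field names are assumptions, and the rewritten children depend entirely on the terms present in each segment.

GET /test/_search
{
  "profile": true,
  "query": {
    "wildcard": {
      "message": {
        "value": "b*"
      }
    }
  }
}

The query section of the profile will then contain the per-segment rewrites rather than a single clean lineage.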

Due to this dynamic, per-segment rewriting, the clean tree structure becomes distorted and no longer follows a clean "lineage" showing how one query rewrites into the next. At present, all we can do is apologize, and suggest you collapse the details for that query’s children if it is too confusing. Luckily, all the timing statistics are correct, just not the physical layout in the response, so it is sufficient to analyze the top-level MultiTermQuery and ignore its children if you find the details too tricky to interpret.

Hopefully this will be fixed in future iterations, but it is a tricky problem to solve and still in-progress :)