Instant Aggregations: Rewriting Queries for Fun and Profit
This is the final post in a three-part series about Instant Aggregations. Read how it all started in The Tale of Caching and Why It Matters from Simon Willnauer and Shay Banon, and the meaty middle detailed in The Great Query Refactoring: Thou Shalt Only Parse Once. Enjoy the trilogy!
In 1.4.0 Elasticsearch gained a shard-level ‘Request Cache’ which caches the result of the query phase on each shard, keyed on the search request itself. Until 5.0 this feature was disabled by default since it wasn’t as useful as it could be for most of the computationally heavy use cases. The cache was intended to make searches faster, especially for aggregations, but it had two main problems. First, ordering of keys in JSON is not deterministic, so although two requests may be logically the same, when rendered to JSON strings they may not be equal. Second, these computationally heavy use cases tend to use time windows relative to the current time, so subsequent requests will tend to have slightly different time ranges. Enabling the cache by default would likely have wasted memory for most users, since they would rarely ever get a cache hit. So why would we add such a feature?
Here at Elastic, we try to follow the rule of “progress over perfection”. Even if we couldn’t utilize its full potential at the time, the cache brought us a step closer to our goals. Now, after two years of engineering effort and a refactoring of the entire search request representation, we can finally take full advantage of it.
From 5.0 the request cache is enabled by default for all requests with `size: 0`. The request cache is most useful for analytics use cases, and those generally use search requests with `size: 0` since they only need aggregation results, not individual hits.
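For illustration, a typical analytics-style request that now benefits from the cache by default might look like this (the index and field names are made up):
```
GET logs-*/_search
{
  "size": 0,
  "aggs": {
    "events_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "hour"
      }
    }
  }
}
```
Since `size` is 0, no hits are returned; only the aggregation results are computed, and the shard-level results can be cached.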
So how did the changes made since 1.4.0 enable us to better use this feature?
The search request cache gives massive improvements in search performance when running the same search multiple times on a static index. However, search requests are rarely exactly the same, especially in the time series use case.
Let’s look at some of the characteristics of a typical search in time-series use cases:
- Queries have a time range component, often relative to the current time (e.g. from `now-7d` to `now`)
- Search requests span multiple indices
- Indexing happens on the latest index; all other indices are static
- The same search requests are run multiple times with different time ranges
The above are common characteristics of search requests for a lot of time-series use cases, including common patterns of usage for Kibana. Many Kibana dashboards contain quite a few visualisations, some of them complex, and a single dashboard request can take a number of seconds for Elasticsearch to execute. Because the search requests are rarely exactly the same, the search request cache can’t be used effectively to improve search performance for these use cases.
Or can it?
How can we make the search request cache work when the time range is always moving? Let’s solve this by considering an example. Imagine we have the details of every residential house sale in the UK since 1995. We can index this into 21 yearly indices (e.g. house-prices-1995, house-prices-1996, …, house-prices-2016). For the purposes of this explanation, let’s give each index a single shard (though the same idea easily extends to multi-shard indices without modification). Now we can run queries against the indices using requests like this:
```
GET house-prices-*/_search
{
  "query": {
    "range": {
      "date": {
        "gte": "2013-09-01",
        "lt": "2016-03-01"
      }
    }
  }
}
```
The figure below shows three such queries overlaid on the yearly indices. These queries are typical of dashboard-style use cases in that they differ only by small amounts (imagine a refreshing dashboard).
A couple of things stand out from this diagram:
- All three queries match no documents in house-prices-2012 (highlighted red on the diagram)
- All three queries will match all documents in house-prices-2014 and house-prices-2015 (highlighted green on the diagram)
So the only indices where the matching documents differ between the three queries are house-prices-2013 and house-prices-2016.
If we could rewrite the queries on each shard based on the range of values present in that shard’s index, we could make them actually able to utilize the request cache. Below is the diagram with the same three queries rewritten to make them more cacheable:
You can see in the diagram above that the first query is rewritten and run as a `match_none` query on the `house-prices-2012` index’s shard and as a `match_all` query on the shards of the `house-prices-2014` and `house-prices-2015` indices. This means that when the second and third queries are run, they can use the cached results of the search on the `house-prices-2012`, `house-prices-2014` and `house-prices-2015` shards instead of actually running a search on those shards. So for the second and third queries we only need to actually search the shards of the `house-prices-2013` and `house-prices-2016` indices.
Also, even for the first query, running a `match_none` query instead of a range query will be faster, since it does not need to look up the date range in the index.
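To make this concrete, the shard-level search on `house-prices-2012` is rewritten to something equivalent to the request below (illustrative only; the rewrite happens on the parsed request objects, never on the JSON). On the fully covered `house-prices-2014` and `house-prices-2015` shards the range is likewise replaced, effectively by `date:[* TO *]`, for reasons discussed later in this post.
```
GET house-prices-2012/_search
{
  "query": {
    "match_none": {}
  }
}
```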
So how does this work in practice?
This feature not only relies on the search request cache mentioned at the beginning of this post, but is also heavily based on the query refactoring we described in a recent blog post. The reason is that in order to add the rewrite logic described above, we need objects representing the query and the search request that we can actually rewrite.
When the search request is received by the shard, we rewrite the entire request, including the `query` and `post_filter` QueryBuilder objects. Each type of query has its own implementation of QueryBuilder, which rewrites according to its own rules.
Compound queries, that is, queries that themselves contain other queries (such as `bool` and `constant_score`), will call rewrite on the queries they wrap, as sketched below. Most leaf queries, queries that do not contain other queries, currently have no rewrite logic and simply return themselves (to indicate they did not change).
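As an illustration of what this delegation means in practice (the `town` field here is invented for the example, and remember the rewrite operates on the parsed QueryBuilder tree, not the JSON itself): in a request like the one below, the `bool` query asks each clause to rewrite itself, so only the inner `range` clause is a candidate for rewriting, while the `match` clause returns itself unchanged.
```
GET house-prices-*/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "town": "LONDON" } },
        {
          "range": {
            "date": {
              "gte": "2013-09-01",
              "lt": "2016-03-01"
            }
          }
        }
      ]
    }
  }
}
```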
The range query, however, will check the minimum and maximum values of the field on the shard (we will call this the field range) and compare them to the range in the query. There are three cases that we care about when evaluating the rewrite:
- The query range and the field range do not overlap at all. In this case we can rewrite the range as a `match_none` query, so we return a `MatchNoneQueryBuilder`.
- The query range contains the entire field range. In this case every document with a value for the field will match, so we can rewrite the range query as an unbounded range query (not a `match_all` query; more on this below) and return it.
- The query range partially overlaps the field range. There’s not much we can do to help in this case, so we don’t rewrite and return the original range query. (A way to inspect the per-index field ranges that drive these decisions is shown after this list.)
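For the curious, at the time of 5.0 those per-index field ranges could be inspected with the field stats API (since removed in later Elasticsearch versions), which reports the minimum and maximum values of a field per index:
```
GET house-prices-*/_field_stats?fields=date&level=indices
```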
Now that the search request has been rewritten, we can check the cache with the rewritten version and either retrieve the cached result or execute the search and add the result to the cache, keyed on the rewritten search request. This has a nice side effect: because the rewritten queries are “normalized”, search requests that are semantically equivalent but differ in the order of keys in the JSON will now also produce cache hits.
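For example, the two requests below are semantically identical but render to different JSON strings. Before the refactoring they would have produced different cache keys; now both are parsed into the same objects and therefore share a single cache entry:
```
GET house-prices-*/_search
{
  "size": 0,
  "query": { "range": { "date": { "gte": "2013-09-01", "lt": "2016-03-01" } } }
}

GET house-prices-*/_search
{
  "query": { "range": { "date": { "lt": "2016-03-01", "gte": "2013-09-01" } } },
  "size": 0
}
```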
Why not use `match_all`?
In the case where the query range contains the entire field range, you might think we could rewrite the `range` query as a `match_all` query. We can’t do this because we can’t assume that all documents have a value for the field. If we rewrote to a `match_all` query, we would incorrectly match documents that have no value for the field. For this reason we instead rewrite the range to an unbounded range query (effectively `[* TO *]`), which still makes the query much more useful to the search request cache.
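A quick way to see the difference (the index, type and field names are made up): index one document with a date and one without, then run the equivalent of the unbounded range, written here with query string syntax. It matches only the first document, whereas `match_all` would match both, changing hit counts and aggregation results.
```
PUT sales/sale/1
{ "date": "2014-06-01", "price": 250000 }

PUT sales/sale/2
{ "price": 300000 }

GET sales/_search
{
  "query": {
    "query_string": { "query": "date:[* TO *]" }
  }
}
```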
Is this all worth the trouble?
To answer this question we’ll go back to our yearly house price indices. This dataset contains 21 yearly indices holding 21,304,688 residential UK house sales from 1995 to 2016, each index having a single shard. I ran a fairly unscientific test on my MacBook Pro: I created a Kibana dashboard showing 14 different visualisations relevant to the data, mixing simple and complex ones. The top of the dashboard looks like the following:
With the request cache disabled, the requests to Elasticsearch for a dashboard refresh take 12-14s. With the request cache enabled, this drops to ~100ms. The same request is now over 100x faster!
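If you want to try a comparison like this yourself, the cache can be toggled per request with the `request_cache` query-string parameter, or per index with the `index.requests.cache.enable` setting (the `price` field is just an example from our dataset):
```
GET house-prices-*/_search?request_cache=false
{
  "size": 0,
  "aggs": {
    "average_price": {
      "avg": { "field": "price" }
    }
  }
}
```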
But this is all on completely static data, so what happens when we are actively indexing?
To test this I deleted the data from 2010 to 2016 and reloaded it in date order whilst refreshing the dashboard:
Now the requests to Elasticsearch for the dashboard refresh take 60-200ms, still at least 50x faster than our original un-cached query! The refresh time varies depending on how much data is present in the changing index, since the search on the latest index does not hit the cache: as new data is indexed, the cache entries on the shards of the latest index are constantly invalidated.
The Instant Aggregations feature enables much better caching of search requests by rewriting the query based on the data actually present on each shard. As of 5.0.0 the query is only rewritten for date range queries, but with this infrastructure in place the door is open to rewriting other types of query to improve request cache utilization. We even have the potential to rewrite aggregations to make them more cacheable! We have shown in this post that the performance improvements from this technique can be huge, so watch this space for more improvements in the future.