This Week in Elasticsearch and Apache Lucene - 2018-05-18
Elasticsearch
Highlights
One shard by default
New indices will receive one shard (instead of five) by default in 7.0.0. We made this change to help address a common problem that arises from the current defaults: oversharding. Many users end up with too many shards, and we think that dropping the default to one will help address this situation. Users requiring more shards have two straightforward options: first, with the split index API in place, they can grow the number of shards in an index without an expensive reindex. Second, with time-based indices, users who find that one shard is not enough to handle their write traffic can change their underlying templates and roll to the new configuration as they go. On the whole, we think that a large fraction of our users will be better served starting with one shard. As part of this breaking change, starting with 6.4.0 we will issue deprecation warnings on any create index request (implicit or explicit) that does not specify the number of shards.
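For users who want more than one shard, the fix is simply to set the shard count explicitly at creation time. A minimal sketch using the Java high-level REST client (index name and counts are illustrative, and method overloads vary by client version):

    import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
    import org.elasticsearch.common.settings.Settings;

    // Setting index.number_of_shards explicitly keeps the index unaffected by
    // the default change and avoids the 6.4 deprecation warning.
    CreateIndexRequest request = new CreateIndexRequest("logs-2018-05-18")
            .settings(Settings.builder()
                    .put("index.number_of_shards", 3)
                    .put("index.number_of_replicas", 1));
    client.indices().create(request);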
Search Template Support in the High-level REST Client
We added support for the Search Template API to the high-level REST client, getting us closer to our stretch goal of implementing the most important APIs in the high-level REST client by 7.0.
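As a rough sketch of what this looks like from the client (index, template, and parameters are made up; the exact overloads depend on the client version):

    import org.elasticsearch.action.search.SearchRequest;
    import org.elasticsearch.script.ScriptType;
    import org.elasticsearch.script.mustache.SearchTemplateRequest;
    import org.elasticsearch.script.mustache.SearchTemplateResponse;
    import java.util.HashMap;
    import java.util.Map;

    // Render an inline mustache template and execute the resulting search.
    SearchTemplateRequest request = new SearchTemplateRequest();
    request.setRequest(new SearchRequest("posts"));
    request.setScriptType(ScriptType.INLINE);
    request.setScript("{\"query\": {\"match\": {\"{{field}}\": \"{{value}}\"}}}");
    Map<String, Object> params = new HashMap<>();
    params.put("field", "title");
    params.put("value", "elasticsearch");
    request.setScriptParams(params);
    SearchTemplateResponse response = client.searchTemplate(request);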
Custom key and trust managers removed in support of FIPS
We merged the removal of our custom key and trust managers. To run a JVM in FIPS-approved mode, only SunJSSE TrustManagers and KeyManagers can be used. Our custom managers were replaced by SSLContext reloading, which enables our code to work with the Sun FIPS provider backed by NSS.
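The reloading approach is essentially standard JSSE: rebuild the SSLContext from the key material on disk whenever it changes, instead of wrapping custom manager implementations around it. A minimal sketch, not the actual X-Pack code (paths and passwords are made up):

    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.KeyStore;

    // Build an SSLContext from a keystore on disk; call again when the file
    // changes. Only the default (SunJSSE) managers are involved, which is
    // what FIPS-approved mode requires.
    static SSLContext buildContext(Path keystorePath, char[] password) throws Exception {
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = Files.newInputStream(keystorePath)) {
            keyStore.load(in, password);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context;
    }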
Faster histograms with non-fixed time zones
A couple of months ago, we discovered that using non-fixed time zones (e.g. Europe/Paris) caused significant slowdowns compared to fixed time zones (e.g. +01:00), likely because of daylight saving time transitions. This is a significant issue for Kibana, since date histograms are the most popular aggregation among Kibana users. We have been working on mitigating this issue by rewriting the time zone to a fixed time zone when the timestamps in a shard do not cross any transition, which is very common for daily indices, the most common setup for time-based data.
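Conceptually, the rewrite checks whether any offset transition falls between the shard's minimum and maximum timestamps; if not, the named zone can be replaced by the fixed offset it resolves to. A simplified java.time sketch of the idea (the actual implementation lives in the aggregation code and differs in detail):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.zone.ZoneOffsetTransition;

    // If no UTC-offset transition occurs within [min, max], the named zone
    // behaves like a fixed offset over that range and can be swapped out.
    static ZoneId maybeRewrite(ZoneId zone, Instant min, Instant max) {
        ZoneOffsetTransition next = zone.getRules().nextTransition(min);
        if (next == null || next.getInstant().isAfter(max)) {
            return zone.getRules().getOffset(min);  // e.g. Europe/Paris -> +01:00
        }
        return zone;  // a transition is in range: keep the slower, correct path
    }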
Faster top-k queries with static scoring signals
A common use of top-k queries is to fold a static component of rank into each document using a field; a straightforward example is embedding a PageRank score inside each document. While this can currently be implemented using a FunctionScoreQuery, that approach is inefficient because it must run the score computation against every matching document. We added the ability for these static score components to be stored as term frequencies, which allows Lucene to significantly optimize this use case.
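In Lucene this takes the shape of a dedicated feature field that encodes the signal in the term frequency of a synthetic term. A sketch of how indexing and querying might look (field and feature names are illustrative, and textQuery stands in for the main query):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.FeatureField;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;

    // Index time: store the static signal as a term frequency.
    Document doc = new Document();
    doc.add(new FeatureField("features", "pagerank", 3.2f));

    // Search time: add the signal as an optional scoring clause. Since it is
    // index data rather than a per-document computation, top-k optimizations
    // such as skipping low-scoring blocks can apply.
    Query query = new BooleanQuery.Builder()
            .add(textQuery, BooleanClause.Occur.MUST)
            .add(FeatureField.newSaturationQuery("features", "pagerank"),
                 BooleanClause.Occur.SHOULD)
            .build();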
Progress on cross-cluster replication feature
We are continuing to integrate Lucene as a source of operation history, which is an important prerequisite for cross-cluster replication. In the future, it may be possible to use Lucene for peer recovery.
We added the support necessary for our current cross-cluster replication code to work with x-pack security enabled. This work adds a new permission specific to the CCR feature, as well as defining what permissions a user must already have on indices in order to enable incoming or outgoing replication.
NIO-based transport
We continue to push on our new experimental transport layer, this week adding basic support for HTTP on top of the NIO implementation. This is the first step towards full-featured HTTP support, but it is currently missing pipelining and CORS.
Higher default filter cache count
The filter cache has two parameters that configure its size: a maximum cache size in bytes and a maximum number of cached filters. The latter parameter was required because of heavy queries, where the query object itself consumes significant memory.
Recently, we introduced changes in Lucene that allow queries to opt out of caching, and implemented this opt-out in TermInSetQuery, the query behind the terms query, so these heavy queries will no longer be cached. As this should mitigate the issue described above, we are now considering raising the maximum number of cached filters to improve caching efficiency.
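For reference, the two limits map directly onto the two arguments of Lucene's LRUQueryCache; the numbers below are illustrative, not the proposed new defaults:

    import org.apache.lucene.search.LRUQueryCache;

    // maxSize: maximum number of cached filters.
    // maxRamBytesUsed: maximum memory used by the cache, in bytes.
    LRUQueryCache cache = new LRUQueryCache(100_000, 250L * 1024 * 1024);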
Forbid expensive query parts in ranking evaluation
Currently the Rank Evaluation API accepts the full search API DSL in the request. This means it can run expensive and unnecessary aggregations, highlighters, suggesters, etc. that have no bearing on the output score. To address this, we merged a change that forbids the parts of a search request that are unnecessary in rank evaluation requests. This evolution of our experimental Rank Evaluation API will help ensure the stability of the feature moving forward.
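From the Java high-level REST client, a rank evaluation request pairs rated documents with the query to evaluate and a metric, and nothing else is needed; a minimal sketch (index, ids, and ratings are made up):

    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.index.rankeval.PrecisionAtK;
    import org.elasticsearch.index.rankeval.RankEvalRequest;
    import org.elasticsearch.index.rankeval.RankEvalSpec;
    import org.elasticsearch.index.rankeval.RatedDocument;
    import org.elasticsearch.index.rankeval.RatedRequest;
    import org.elasticsearch.search.builder.SearchSourceBuilder;
    import java.util.Collections;

    // The search source should only contain the parts that affect ranking;
    // aggregations, highlighters, and suggesters are now rejected.
    SearchSourceBuilder source = new SearchSourceBuilder()
            .query(QueryBuilders.matchQuery("title", "elasticsearch"));
    RatedRequest rated = new RatedRequest("query_1",
            Collections.singletonList(new RatedDocument("posts", "1", 1)), source);
    RankEvalSpec spec = new RankEvalSpec(Collections.singletonList(rated), new PrecisionAtK());
    client.rankEval(new RankEvalRequest(spec, new String[] { "posts" }));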
Changes in 6.3:
- Improve explanation in rescore #30629
- Delay _uid field data deprecation warning #30651
- Template upgrades should happen in a system context #30621
- S3 repo plugin populate SettingsFilter #30652
- Move allocation awareness attributes to list setting #30626
- SQL: Verify GROUP BY ordering on grouped columns #30585
- Watcher: Prevent triggering watch when using activate API #30613
Changes in 6.4:
- Preserve REST client auth despite 401 response #30558
- Use proper write-once semantics for GCS repository #30438
- Deprecate nGram and edgeNGram names for ngram filters #30209
- Watcher: Fix watch history template for dynamic slack attachments #30172
- Replace custom reloadable Key/TrustManager #30509
- Fix _cluster/state to always return cluster_uuid #30656
- Use readFully() to read bytes from CipherInputStream (#28515) #30640
- Add PUT Repository High Level REST API #30501
- Deprecate Empty Templates #30194
- Rest High Level client: Add List Tasks #29546
- Mitigate date histogram slowdowns with non-fixed timezones. #30534
- Add a MovingFunction pipeline aggregation, deprecate MovingAvg agg #29594
- Fix class cast exception in BucketMetricsPipeline path traversal #30632
- SAML: Process only signed data (#30420) #30641
- Repository GCS plugin new client library #30168
- Allow date math for naming newly-created snapshots (#7939) #30479
- Side-step pending deletes check #30571
- Add deprecation warning for default shards #30587
- Fold RestGetAllSettingsAction in RestGetSettingsAction #30561
- Auto-expand replicas only after failing nodes #30553
- Forbid expensive query parts in ranking evaluation #30151
- SQL: SYS TABLES ordered according to *DBC specs #30530
- Deprecate not copy settings and explicitly disallow #30404
Changes in 7.0:
- Add nio http server transport #29587
- Add support for search templates to the high-level REST client. #30473
- Fix issue with finishing handshake in ssl driver #30580
- BREAKING: Default to one shard #30539
Lucene
Lucene 7.3.1
Lucene 7.3.1 was released on May 15th.
More than 2B documents per index?
We started a discussion about removing the current limit of about 2 billion documents per index. This limit has occasionally been hit in the past, especially with small documents. While removing this limit at the segment level might not be reasonable, the option under consideration is to remove the limit only on top-level readers, so that segments can keep relying on the fact that they contain fewer than 2B documents and address doc ids with 32-bit integers. For instance, it would then be possible to have 4B documents in an index via 4 segments of 1B documents each.
Better checks for pending deletes
Windows disallows deleting files that are open, which is sometimes problematic: for instance, if a virus scanner is scanning a file of the index, then Lucene is not able to delete it. To work around this issue, the directory tracks files that it failed to delete in order to retry later.
We had a check at IndexWriter creation time that the index directory had no pending deletes, the goal being to make sure that we do not try to write to files that are in a pending-delete state. But it was best-effort only, as it worked only for FSDirectory implementations, so we first made it stricter by making it work across Directory implementations. However, this proved impractical, since it would require users to busy-wait until all files are actually deleted. After further discussion, Simon changed the logic so that files in a pending-delete state are also considered by IndexFileDeleter when it computes the next segment generation, by adding 1 to the maximum file generation that already exists in the directory. This still ensures that we do not overwrite existing files while being much more user-friendly.
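In other words, pending deletes now simply push the next generation past any file that might still be lying around on disk. A toy illustration of the computation, not the actual IndexFileDeleter code:

    import java.util.stream.LongStream;

    // Treat generations of pending-delete files as if the files still existed,
    // so the next generation never collides with a file Windows has not yet
    // let us remove.
    static long nextGeneration(LongStream existingGens, LongStream pendingDeleteGens) {
        return LongStream.concat(existingGens, pendingDeleteGens).max().orElse(0L) + 1;
    }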
Doc-value updates to non-existing fields
Simon removed the limitation that doc-value updates may only apply to existing fields. It is now possible to set a doc-value field on existing documents even if the field doesn't exist in the index yet.
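From the IndexWriter API this is the existing update call, which now also works when the documents were originally indexed without the field (identifiers below are illustrative, and writer is an open IndexWriter):

    import org.apache.lucene.index.Term;

    // Sets a numeric doc-value on every document matching the term; the
    // "views" field is created on the fly if it was never indexed before.
    writer.updateNumericDocValue(new Term("id", "doc-1"), "views", 42L);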
Other
- A long-standing rare failure in TestStressNRT happened to be an actual bug that would publish a delete too soon. Phew!
- Can we support time-bounded search when the IndexSearcher searches segments in parallel?
- More iterations on the ConditionalTokenFilter.
- FeatureQuery now automatically computes a good pivot value at rewrite time instead of expecting users to provide a searcher explicitly.
- SimScorer is not bound to a field anymore.
- We should use impacts to speed up synonym and phrase queries.
- Should matches also expose the matched terms?