Split Index
The split index API allows you to split an existing index into a new index, where each original primary shard is split into two or more primary shards in the new index.
The `_split` API requires the source index to have been created with a specific `number_of_routing_shards` in order to be split in the future. This requirement has been removed in Elasticsearch 7.0.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the `index.number_of_routing_shards` setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5-shard index with `number_of_routing_shards` set to `30` (`5 x 2 x 3`) could be split by a factor of `2` or `3`. In other words, it could be split as follows:
- `5` → `10` → `30` (split by 2, then by 3)
- `5` → `15` → `30` (split by 3, then by 2)
- `5` → `30` (split by 6)
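For example, an index that supports the split paths above could be created with a request along these lines (the index name `my_index` is illustrative, not from the original text):

```
PUT my_index
{
  "settings": {
    "index.number_of_shards": 5,
    "index.number_of_routing_shards": 30
  }
}
```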
How does splitting work?

Splitting works as follows:
- First, it creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Then it hard-links segments from the source index into the target index. (If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.)
- Once the low-level files are created, all documents are hashed again to delete the documents that belong to a different shard.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
Why doesn't Elasticsearch support incremental resharding?

Going from `N` shards to `N+1` shards, also known as incremental resharding, is indeed a feature that is supported by many key-value stores. Adding a new shard and pushing new data only to that new shard is not an option: this would likely be an indexing bottleneck, and figuring out which shard a document belongs to given its `_id`, which is necessary for get, delete, and update requests, would become quite complex. This means that we need to rebalance existing data using a different hashing scheme.
The most common way that key-value stores do this efficiently is by using consistent hashing. Consistent hashing only requires `1/N`-th of the keys to be relocated when growing the number of shards from `N` to `N+1`. However, Elasticsearch's unit of storage, the shard, is a Lucene index. Because of their search-oriented data structure, taking a significant portion of a Lucene index, be it only 5% of its documents, deleting those documents, and indexing them on another shard typically comes at a much higher cost than in a key-value store. This cost is kept reasonable when growing the number of shards by a multiplicative factor, as described in the section above: this allows Elasticsearch to perform the split locally, which in turn makes it possible to perform the split at the index level rather than by reindexing the documents that need to move, and to use hard links for efficient file copying.
In the case of append-only data, it is possible to get more flexibility by creating a new index and pushing new data to it, while adding an alias that covers both the old and the new index for read operations. Assuming that the old and new indices have `M` and `N` shards respectively, this has no overhead compared to searching an index that has `M+N` shards.
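A sketch of that pattern (the index and alias names below are illustrative assumptions): new documents are indexed into the new index, while searches go through an alias that spans both indices.

```
PUT my_index-000002

POST _aliases
{
  "actions": [
    { "add": { "index": "my_index-000001", "alias": "my_search_alias" } },
    { "add": { "index": "my_index-000002", "alias": "my_search_alias" } }
  ]
}
```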
Preparing an index for splitting

Create an index with a routing shards factor:

```
PUT my_source_index
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_routing_shards": 2
  }
}
```

Setting `index.number_of_routing_shards` to `2` allows the index to be split into two shards; in other words, it allows for a single split operation.
In order to split an index, the index must be marked as read-only and have a cluster health status of `green`.
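This can be achieved with a request along the following lines, using the standard `index.blocks.write` index setting:

```
PUT /my_source_index/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}
```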
Setting `index.blocks.write` to `true` prevents write operations to the index while still allowing metadata changes like deleting the index.
Splitting an index

To split `my_source_index` into a new index called `my_target_index`, issue the following request:
```
POST my_source_index/_split/my_target_index?copy_settings=true
{
  "settings": {
    "index.number_of_shards": 2
  }
}
```
The above request returns as soon as the target index has been added to the cluster state; it does not wait for the split operation to start.
Indices can only be split if they satisfy the following requirements:

- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
The `_split` API is similar to the create index API and accepts `settings` and `aliases` parameters for the target index:
```
POST my_source_index/_split/my_target_index?copy_settings=true
{
  "settings": {
    "index.number_of_shards": 5
  },
  "aliases": {
    "my_search_indices": {}
  }
}
```
The `index.number_of_shards` setting specifies the number of shards in the target index. This must be a multiple of the number of shards in the source index.
Mappings may not be specified in the `_split` request.
By default, with the exception of the `index.analysis`, `index.similarity`, and `index.sort` settings, index settings on the source index are not copied during a split operation. With the exception of non-copyable settings, settings from the source index can be copied to the target index by adding the URL parameter `copy_settings=true` to the request. Note that `copy_settings` cannot be set to `false`. The `copy_settings` parameter will be removed in 8.0.0.

Deprecated in 6.4.0: not copying settings is deprecated; copying settings will be the default behavior in 7.x.
Monitoring the split process

The split process can be monitored with the `_cat recovery` API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the `wait_for_status` parameter to `yellow`.
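For example, reusing the target index from the examples above, either of the following requests can be used (the `timeout` value is an illustrative choice):

```
GET _cat/recovery/my_target_index?v

GET _cluster/health/my_target_index?wait_for_status=yellow&timeout=30s
```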
The `_split` API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state `unassigned`. If, for any reason, the target index can't be allocated, its primary shard will remain `unassigned` until it can be allocated.
Once the primary shard is allocated, it moves to the state `initializing`, and the split process begins. When the split operation completes, the shard becomes `active`. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait For Active Shards

Because the split operation creates a new index to split the shards to, the wait for active shards setting on index creation applies to the split index action as well.
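For instance, a sketch of a split request that also waits for shard copies to become active before returning (the `wait_for_active_shards=2` value, meaning the primary plus one replica per shard, is an illustrative assumption):

```
POST my_source_index/_split/my_target_index?copy_settings=true&wait_for_active_shards=2
{
  "settings": {
    "index.number_of_shards": 2,
    "index.number_of_replicas": 1
  }
}
```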