Size your shards
Each index in Elasticsearch is divided into one or more shards, each of which may be replicated across multiple nodes to protect against hardware failures. If you use data streams, each data stream is backed by a sequence of indices. There is a limit to the amount of data you can store on a single node, so you can increase the capacity of your cluster by adding nodes and increasing the number of indices and shards to match. However, each index and shard has some overhead, and if you divide your data across too many shards the overhead can become overwhelming. A cluster with too many indices or shards is said to suffer from oversharding. An oversharded cluster is less efficient at responding to searches and in extreme cases may even become unstable.
Create a sharding strategy
The best way to prevent oversharding and other shard-related issues is to create a sharding strategy. A sharding strategy helps you determine and maintain the optimal number of shards for your cluster while limiting the size of those shards.
Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that works in one environment may not scale in another. A good sharding strategy must account for your infrastructure, use case, and performance expectations.
The best way to create a sharding strategy is to benchmark your production data on production hardware using the same queries and indexing loads you’d see in production. For our recommended methodology, watch the quantitative cluster sizing video. As you test different shard configurations, use Kibana’s Elasticsearch monitoring tools to track your cluster’s stability and performance.
The following sections provide some reminders and guidelines you should consider when designing your sharding strategy. If your cluster is already oversharded, see Reduce a cluster’s shard count.
Sizing considerations
Keep the following things in mind when building your sharding strategy.
Searches run on a single thread per shard
Most searches hit multiple shards. Each shard runs the search on a single CPU thread. While a shard can run multiple concurrent searches, searches across a large number of shards can deplete a node's search thread pool. This can result in low throughput and slow search speeds.
Each index and shard has overhead
Every index and every shard requires some memory and CPU resources. In most cases, a small set of large shards uses fewer resources than many small shards.
Segments play a big role in a shard's resource usage. Most shards contain several segments, which store the shard's index data. Elasticsearch keeps segment metadata in JVM heap memory so it can be quickly retrieved for searches. As a shard grows, its segments are merged into fewer, larger segments. This decreases the number of segments, which means less metadata is kept in heap memory.
Every mapped field also carries some overhead in terms of memory usage and disk space. By default, Elasticsearch automatically creates a mapping for every field in every document it indexes, but you can switch off this behaviour to take control of your mappings.
Elasticsearch automatically balances shards within a data tier
A cluster's nodes are grouped into data tiers. Within each tier, Elasticsearch attempts to spread an index's shards across as many nodes as possible. When you add a new node or a node fails, Elasticsearch automatically rebalances the index's shards across the tier's remaining nodes.
Best practices
Where applicable, use the following best practices as starting points for your sharding strategy.
Delete indices, not documents
Deleted documents aren't immediately removed from Elasticsearch's file system. Instead, Elasticsearch marks the document as deleted on each related shard. The marked document will continue to use resources until it's removed during a periodic segment merge.
When possible, delete entire indices instead. Elasticsearch can immediately remove deleted indices directly from the file system and free up resources.
Use data streams and ILM for time series data
Data streams let you store time series data across multiple, time-based backing indices. You can use index lifecycle management (ILM) to automatically manage these backing indices.
One advantage of this setup is automatic rollover, which creates a new write index when the current one meets a defined max_primary_shard_size, max_age, max_docs, or max_size threshold. When an index is no longer needed, you can use ILM to automatically delete it and free up resources.
ILM also makes it easy to change your sharding strategy over time:
- Want to decrease the shard count for new indices? Change the index.number_of_shards setting in the data stream's matching index template (see the template sketch below).
- Want larger shards or fewer backing indices? Increase your ILM policy's rollover threshold.
- Need indices that span shorter intervals? Offset the increased shard count by deleting older indices sooner. You can do this by lowering the min_age threshold for your policy's delete phase.
Every new backing index is an opportunity to further tune your strategy.
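For example, to apply the first adjustment above, you could update the data stream's matching index template so that new backing indices are created with a single primary shard. This is only a sketch: the template name, index pattern, and shard count are hypothetical placeholders, and your own template will contain additional settings and mappings.
PUT _index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}
The change only affects backing indices created after the template is updated; existing backing indices keep their current shard count until you shrink or reindex them.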
Aim for shard sizes between 10GB and 50GB
Larger shards take longer to recover after a failure. When a node fails, Elasticsearch rebalances the node's shards across the data tier's remaining nodes. This recovery process typically involves copying the shard contents across the network, so a 100GB shard will take twice as long to recover as a 50GB shard. In contrast, small shards carry proportionally more overhead and are less efficient to search. Searching fifty 1GB shards will take substantially more resources than searching a single 50GB shard containing the same data.
There are no hard limits on shard size, but experience shows that shards between 10GB and 50GB typically work well for logs and time series data. You may be able to use larger shards depending on your network and use case. Smaller shards may be appropriate for Enterprise Search and similar use cases.
If you use ILM, set the rollover action's max_primary_shard_size threshold to 50gb to avoid shards larger than 50GB.
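As a minimal sketch of such a policy, assuming a policy named my-logs-policy (the policy name and the 30-day delete phase are illustrative only):
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
Rollover then creates a new backing index whenever the write index's largest primary shard reaches 50GB.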
To see the current size of your shards, use the cat shards API.
GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
The store value shows the size of each shard, and the prirep value indicates whether the shard is a primary (p) or a replica (r).
index                                prirep shard store
.ds-my-data-stream-2099.05.06-000001 p      0     50gb
...
Aim for 20 shards or fewer per GB of heap memory
The number of shards a data node can hold is proportional to the node's heap memory. For example, a node with 30GB of heap memory should have at most 600 shards. The further below this limit you can keep your nodes, the better. If your nodes exceed 20 shards per GB of heap, consider adding another node.
Some system indices for Enterprise Search are nearly empty and rarely used. Due to their low overhead, you shouldn’t count shards for these indices toward a node’s shard limit.
To check the configured size of each node’s heap, use the cat nodes API.
GET _cat/nodes?v=true&h=heap.max
You can use the cat shards API to check the number of shards per node.
GET _cat/shards?v=true
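If you prefer a per-node summary rather than a full shard listing, the cat allocation API also reports the shard count on each node; the column selection here is just one possible view.
GET _cat/allocation?v=true&h=node,shards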
Avoid node hotspots
If too many shards are allocated to a specific node, the node can become a hotspot. For example, if a single node contains too many shards for an index with a high indexing volume, the node is likely to have issues.
To prevent hotspots, use the index.routing.allocation.total_shards_per_node index setting to explicitly limit the number of shards on a single node. You can configure index.routing.allocation.total_shards_per_node using the update index settings API.
PUT my-index-000001/_settings
{
  "index" : {
    "routing.allocation.total_shards_per_node" : 5
  }
}
Avoid unnecessary mapped fields
By default, Elasticsearch automatically creates a mapping for every field in every document it indexes. Every mapped field corresponds to some data structures on disk which are needed for efficient search, retrieval, and aggregations on this field. Details about each mapped field are also held in memory. In many cases this overhead is unnecessary because a field is not used in any searches or aggregations. Use explicit mapping instead of dynamic mapping to avoid creating fields that are never used. If a collection of fields are typically used together, consider using copy_to to consolidate them at index time. If a field is only rarely used, it may be better to make it a runtime field instead.
You can get information about which fields are being used with the field usage stats API, and you can analyze the disk usage of mapped fields using the analyze index disk usage API. Note, however, that unnecessary mapped fields carry some memory overhead in addition to their disk usage.
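As an illustration of these recommendations, the following sketch creates an index with dynamic mapping switched off, two fields consolidated into one with copy_to, and a rarely used field defined as a runtime field. The index and field names are hypothetical, and the runtime script is simply the usual day-of-week example.
PUT my-index-000002
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "@timestamp": { "type": "date" },
      "first_name": { "type": "keyword", "copy_to": "full_name" },
      "last_name":  { "type": "keyword", "copy_to": "full_name" },
      "full_name":  { "type": "text" }
    },
    "runtime": {
      "day_of_week": {
        "type": "keyword",
        "script": {
          "source": "emit(doc['@timestamp'].value.dayOfWeekEnum.getDisplayName(TextStyle.FULL, Locale.ROOT))"
        }
      }
    }
  }
}
With dynamic set to false, unlisted fields are still stored in _source but are not indexed or mapped; use "strict" instead if you want documents containing unmapped fields to be rejected.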
Reduce a cluster’s shard count
If your cluster is already oversharded, you can use one or more of the following methods to reduce its shard count.
Create indices that cover longer time periods
If you use ILM and your retention policy allows it, avoid using a max_age threshold for the rollover action. Instead, use max_primary_shard_size to avoid creating empty indices or many small shards.
If your retention policy requires a max_age threshold, increase it to create indices that cover longer time intervals. For example, instead of creating daily indices, you can create indices on a weekly or monthly basis.
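Building on the earlier policy sketch, switching the rollover from a daily to a roughly monthly cadence might look like this; the 30d value is only an example, and the size threshold still triggers first if it is reached sooner.
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "30d",
            "max_primary_shard_size": "50gb"
          }
        }
      }
    }
  }
}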
Delete empty or unneeded indices
If you're using ILM and roll over indices based on a max_age threshold, you can inadvertently create indices with no documents. These empty indices provide no benefit but still consume resources.
You can find these empty indices using the cat count API.
GET _cat/count/my-index-000001?v=true
Once you have a list of empty indices, you can delete them using the delete index API. You can also delete any other unneeded indices.
DELETE my-index-000001
Force merge during off-peak hours
If you no longer write to an index, you can use the force merge API to merge smaller segments into larger ones. This can reduce shard overhead and improve search speeds. However, force merges are resource-intensive. If possible, run the force merge during off-peak hours.
POST my-index-000001/_forcemerge
Shrink an existing index to fewer shards
If you no longer write to an index, you can use the shrink index API to reduce its shard count.
ILM also has a shrink action for indices in the warm phase.
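As a rough sketch of a manual shrink, assuming an index named my-index-000001 with six primary shards that is no longer being written to: first relocate a copy of every shard to a single node and block writes, then shrink into a hypothetical target index. The node name, replica count, and index names are placeholders.
PUT my-index-000001/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "shrink-node-name",
    "index.blocks.write": true
  }
}

POST my-index-000001/_shrink/my-shrunken-index-000001
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null,
    "index.number_of_shards": 1
  }
}
The target's number_of_shards must be a factor of the source's shard count, so a six-shard index can shrink to three, two, or one shards.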
Combine smaller indices
You can also use the reindex API to combine indices with similar mappings into a single large index. For time series data, you could reindex indices for short time periods into a new index covering a longer period. For example, you could reindex daily indices from October with a shared index pattern, such as my-index-2099.10.11, into a monthly my-index-2099.10 index. After the reindex, delete the smaller indices.
POST _reindex
{
  "source": {
    "index": "my-index-2099.10.*"
  },
  "dest": {
    "index": "my-index-2099.10"
  }
}
Troubleshoot shard-related errors
Here's how to resolve common shard-related errors.
this action would add [x] total shards, but this cluster currently has [y]/[z] maximum shards open;
The cluster.max_shards_per_node cluster setting limits the maximum number of open shards for a cluster. This error indicates an action would exceed this limit.
If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the cluster update settings API and retry the action.
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count. To get a cluster’s current shard count after making changes, use the cluster stats API.
GET _cluster/stats?filter_path=indices.shards.total
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit.
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}