
Nodes Shutdown

The nodes shutdown API allows shutting down one or more nodes in the cluster (or all of them). Here is an example of shutting down the _local node, that is, the node the request is directed to:

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'

Specific nodes can also be shut down using their respective node ids (or other node filters):

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/nodeId1,nodeId2/_shutdown'
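If needed, node ids can be looked up with the nodes info API, whose response lists the cluster's nodes keyed by node id. For example:

$ curl -XGET 'http://localhost:9200/_nodes'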

The elected master of the cluster can also be shut down using:

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_master/_shutdown'

Finally, all nodes can be shut down using one of the options below:

$ curl -XPOST 'http://localhost:9200/_shutdown'

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_shutdown'

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_all/_shutdown'

Delay

By default, the shutdown is executed after a 1 second delay (1s). The delay can be customized by setting the delay parameter to a time value. For example:

$ curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown?delay=10s'

Disable Shutdown

The shutdown API can be disabled by setting action.disable_shutdown to true in the node configuration.
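For example, assuming the default elasticsearch.yml configuration file, adding the following line on each node disables the API on that node:

action.disable_shutdown: true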

Rolling Restart of Nodes (Full Cluster Restart)

A rolling restart allows the Elasticsearch cluster to be restarted one node at a time, with no observable downtime for end users. To perform a rolling restart:

  • Disable shard reallocation (optional). This is done to allow for a faster startup after cluster shutdown. If this step is not performed, the nodes will immediately start trying to replicate shards to each other on startup and will spend a lot of time on wasted I/O. With shard reallocation disabled, the nodes will join the cluster with their indices intact, without attempting to rebalance. After startup is complete, reallocation will be turned back on.
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'
  • Shut down a single node within the cluster (if you have dedicated master nodes, start with these before the data nodes).
  • Start the node back up and confirm that it has rejoined the cluster, as in the check below.
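One way to confirm this is the _cat nodes API, which lists the nodes currently in the cluster:
    curl -XGET localhost:9200/_cat/nodes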
  • Re-enable shard reallocation.
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}'
  • Observe that all shards are properly allocated on all nodes; balancing may take some time. The cluster health API, shown below, can be used to monitor progress.
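When relocating_shards and unassigned_shards both reach 0 and the status is green, allocation is complete (the pretty parameter only formats the response):
    curl -XGET 'localhost:9200/_cluster/health?pretty'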
  • Repeat this process for all remaining nodes.