Upgrade to Elastic 9.0.0-beta1
Upgrading to the latest version gives you access to Elastic's latest features, enhancements, performance improvements, and bug fixes, many of which can save your organization money, help you respond faster to potential threats, and improve the tools you use to investigate and analyze your data. As new versions are released, older versions reach their end of life at a regular cadence, so it's important to keep your deployment fully maintained and supported. For more information, refer to Elastic's Product End of Life Dates.
Plan your upgrade
Before you perform the actual upgrade, there are a number of things to plan for, so create a test plan. Consider the following recommendations:
- Plan for an appropriate amount of time to complete the upgrade. Depending on your configuration and the size of your cluster, the process can take a few weeks or more to complete.
- Consider opening a support case with Elastic to alert our Elastic Support team of your system change. If you need additional assistance, Elastic Consulting Services provides the technical expertise and step-by-step approach for upgrading your Elastic deployment.
- Schedule a system maintenance window within your organization.
Check system requirements
Ensure the version you’re upgrading to for Elasticsearch, Kibana, and any ingest components supports your current operating system. Refer to the Product and Operating System support matrix.
JVM and FIPS compliance
By default, Elasticsearch includes a bundled Java Virtual Machine (JVM) supported by Elastic. While we strongly recommend using the bundled JVM in all installations of Elasticsearch, if you choose to use your own JVM, ensure it’s compatible by reviewing the Product and JVM support matrix. Elasticsearch 9.0+ requires Java 21 or later.
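To confirm which JVM each node is actually running, you can query the nodes info API. A minimal sketch (the `filter_path` parameter only trims the response; the `using_bundled_jdk` field indicates whether the node runs the bundled JVM):

```console
GET _nodes/jvm?filter_path=nodes.*.jvm.version,nodes.*.jvm.using_bundled_jdk
```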
If you're running Elasticsearch in FIPS 140-2 mode, note that Elasticsearch 9.0+ has been tested with Bouncy Castle's FIPS implementation, which is the recommended Java security provider for Elasticsearch.
Conduct a component inventory
It is very important to map all the components used with your Elastic Stack. When you upgrade your deployment, you may also need to upgrade the other components. For each component, record whether it is in use and, if so, its current version. While this is not a comprehensive list, here are some components you should check:
- Elasticsearch
- Elasticsearch Hadoop
- Elasticsearch Plugins
- Elasticsearch clients
- Kibana
- Logstash
- Logstash plugins
- Beats
- Beats modules
- APM agents
- APM server
- Elastic Agent
- Fleet
- Enterprise Search server
- Security
- Browsers
- External services (Kafka, etc.)
As part of your inventory, you can enable audit logging to identify which resources are accessing your deployment.
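On a self-managed cluster, audit logging can be turned on with a single static setting in elasticsearch.yml (a minimal sketch; the setting requires a node restart and a subscription level that includes auditing, and event include/exclude filters can be added separately):

```yaml
# elasticsearch.yml — enable security audit logging (restart required)
xpack.security.audit.enabled: true
```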
Choose your upgrade path
The procedures you follow to upgrade depend on your infrastructure and deployment method. You've installed Elastic components using either Elastic-managed infrastructure or self-managed infrastructure.
Elastic-managed infrastructure includes Elastic Cloud – the umbrella term for Elastic Cloud Hosted (ECH) and Elastic Cloud Serverless. Elastic Cloud Serverless (“Serverless”) is the fully managed cloud offering and has three products: Elasticsearch Serverless, Elastic Observability Serverless, and Elastic Security Serverless. All serverless products are built on top of the Search AI Lake. Customers receive the latest features automatically when updates are published.
Elastic Cloud Hosted is Elastic’s cloud offering for managing Elastic Stack deployments, built on top of Elasticsearch. A single click in the Elastic Cloud console can upgrade a deployment to a newer version.
Self-managed infrastructure, either on-premises or on public cloud, includes:
- Elastic Stack
- Elastic Cloud Enterprise (ECE)
- Elastic Cloud on Kubernetes (ECK)
For ECE and ECK, you need to ensure the orchestrator is running a version compatible with the Elastic Stack version you're upgrading to. If not, you need to upgrade your orchestrator before you can upgrade your cluster.
If you’re running the Elastic Stack on your own self-managed infrastructure, you’ll need to upgrade each component individually.
Prepare to upgrade
Before you upgrade to 9.0.0-beta1, it's important to take some preparation steps. Unless otherwise noted, the following recommendations are best practices regardless of deployment method.
Upgrading from a release candidate build, such as 9.0.0-rc1 or 9.0.0-rc2, is not supported. Pre-releases should only be used for testing in a temporary environment.
To upgrade to 9.0.0-beta1 from 8.17 or earlier, you must first upgrade to the latest patch version of 8.18. This enables you to use the Upgrade Assistant to identify and resolve issues, reindex indices created before 8.0, and then perform a rolling upgrade.
Upgrading to 8.18 before upgrading to 9.0.0-beta1 is required even if you opt to do a full-cluster restart of your Elasticsearch cluster.
Alternatively, you can create a new 9.0.0-beta1 deployment and reindex from remote. For more information, refer to Reindex to upgrade.
Beats and Logstash 8.17 are compatible with Elasticsearch 9.0.0-beta1 to give you flexibility in scheduling the upgrade.
Remote cluster compatibility
If you use cross-cluster search, note that 9.0.0-beta1 can only search remote clusters running the previous minor version or later. For more information, see Searching across clusters.
If you use cross-cluster replication, a cluster that contains follower indices must run the same or a newer version than the remote cluster. For more information, see the cross-cluster replication version compatibility matrix.
You can view your remote clusters from Stack Management > Remote Clusters.
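You can also check connectivity and configuration for your remote clusters through the API, for example:

```console
GET _remote/info
```

The response lists each configured remote cluster and whether it is currently connected.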
- Once you’ve upgraded to 8.18, use the Upgrade Assistant to prepare for your upgrade from 8.18 to 9.0.0-beta1. The Upgrade Assistant identifies deprecated settings and guides you through resolving issues and reindexing indices created before 8.0. Elasticsearch fully supports indices created in the current or previous major version. Older indices are partly supported as read-only archives with limited capabilities.
- Ensure you have a current snapshot before making configuration changes or reindexing. You must resolve all critical issues before proceeding with the upgrade. If you make any additional changes, take a new snapshot to back up your data.
If you’re upgrading on Elastic Cloud Hosted, to keep your data safe during the upgrade process, a snapshot is taken automatically before any changes are made to your cluster. If you’re upgrading on Elastic Cloud Enterprise, you need to configure a snapshot repository to enable snapshots.
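If you manage snapshots yourself, you can take one through the snapshot API. A sketch, assuming a snapshot repository named my_repository (a hypothetical name) is already registered:

```console
PUT _snapshot/my_repository/pre-upgrade-snapshot?wait_for_completion=true
```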
- Ensure you carefully review the deprecation logs from the Upgrade Assistant to determine if your applications are using features that are not supported or behave differently in 9.0.0-beta1.
- Major version upgrades can include breaking changes that require you to take additional steps to ensure that your applications behave as expected after the upgrade. Review all 9.0.0-beta1 breaking changes for each product you use to view more information about changes that could affect your application. Ensure you test against the new version before upgrading existing deployments.
Ensure you check the breaking changes for each minor 8.x release up to 9.0.0-beta1.
- Make the recommended changes to ensure that your clients continue to operate as expected after the upgrade.
As a temporary solution, you can submit requests to 9.x using the 8.x syntax with the REST API compatibility mode. While this enables you to submit requests that use the old syntax, it does not guarantee the same behavior. REST API compatibility should be a bridge to smooth out the upgrade process, not a long term strategy. For more information, see REST API compatibility.
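Compatibility mode is requested per request by setting versioned media types in the Content-Type and Accept headers, for example:

```http
Content-Type: application/vnd.elasticsearch+json; compatible-with=8
Accept: application/vnd.elasticsearch+json; compatible-with=8
```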
- If you use any Elasticsearch plugins, ensure there is a version of each plugin that is compatible with 9.0.0-beta1.
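To see which plugins are currently installed on each node, and therefore need a 9.0-compatible release, you can use the cat plugins API:

```console
GET _cat/plugins?v
```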
- We recommend creating a 9.0 test deployment and testing the upgrade in an isolated environment before upgrading your production deployment. Ensure that both your test and production environments have the same settings.
You cannot downgrade Elasticsearch nodes after upgrading. If you cannot complete the upgrade process, you will need to restore from the snapshot.
- If you use a separate monitoring cluster, you should upgrade the monitoring cluster before the production cluster. In general, the monitoring cluster and the clusters being monitored should be running the same version of the stack. A monitoring cluster cannot monitor production clusters running newer versions of the stack. If necessary, the monitoring cluster can monitor production clusters running the latest release of the previous major version.
Anomaly detection results migration
The anomaly detection result indices .ml-anomalies-* created in Elasticsearch 7.x must be reindexed, marked read-only, or deleted before upgrading to 9.x.
Reindexing: While anomaly detection results are being reindexed, jobs continue to run and process new data. However, you cannot completely delete an anomaly detection job that stores results in this index until the reindexing is complete.
Marking indices as read-only: This is useful for large indices that contain the results of only one or a few anomaly detection jobs. You need to update or delete all obsolete model snapshots before using this option. If you delete these jobs later, you will not be able to create a new job with the same name.
Deleting: Delete jobs that are no longer needed in the Machine Learning app in Kibana. The result index is deleted when all jobs that store results in it have been deleted.
Which indices require attention?
To identify indices that require action, use the Deprecation info API:
GET /.ml-anomalies-*/_migration/deprecations
The response contains the list of critical deprecation warnings in the index_settings section:

"index_settings": {
  ".ml-anomalies-shared": [
    {
      "level": "critical",
      "message": "Index created before 8.0",
      "url": "https://ela.st/es-deprecation-8-reindex",
      "details": "This index was created with version 7.8.23 and is not compatible with 9.0. Reindex or remove the index before upgrading.",
      "resolve_during_rolling_upgrade": false
    }
  ]
}
Reindexing anomaly result indices
For an index smaller than 10 GB that contains results from multiple jobs that are still required, we recommend reindexing into the new format using the UI. You can use the Get index information API to obtain the size of an index:
GET _cat/indices/.ml-anomalies-custom-example?v&h=index,store.size
The reindexing can be initiated in the Kibana Upgrade Assistant.
If an index size is greater than 10 GB, it is recommended to use the Reindex API. Reindexing consists of the following steps:
- Set the original index to read-only.

  PUT .ml-anomalies-custom-example/_block/read_only

- Create a new index from the legacy index.

  POST _create_from/.ml-anomalies-custom-example/.reindexed-v9-ml-anomalies-custom-example

- Reindex documents. To accelerate the reindexing process, it is recommended that the number of replicas be set to 0 before the reindexing and then set back to the original number once it is completed.

  - Get the number of replicas.

    GET /.reindexed-v9-ml-anomalies-custom-example/_settings

    Note the number of replicas in the response. For example:

    {
      ".reindexed-v9-ml-anomalies-custom-example": {
        "settings": {
          "index": {
            "number_of_replicas": "1",
            "number_of_shards": "1"
          }
        }
      }
    }

  - Set the number of replicas to 0.

    PUT /.reindexed-v9-ml-anomalies-custom-example/_settings
    {
      "index": {
        "number_of_replicas": 0
      }
    }

  - Start the reindexing process in asynchronous mode.

    POST _reindex?wait_for_completion=false
    {
      "source": {
        "index": ".ml-anomalies-custom-example"
      },
      "dest": {
        "index": ".reindexed-v9-ml-anomalies-custom-example"
      }
    }

    The response contains a task_id. You can check whether the task is completed using the following command:

    GET _tasks/<task_id>

  - Set the number of replicas back to the original number when the reindexing is finished.

    PUT /.reindexed-v9-ml-anomalies-custom-example/_settings
    {
      "index": {
        "number_of_replicas": "<original_number_of_replicas>"
      }
    }

- Get the aliases the original index is pointing to.

  GET .ml-anomalies-custom-example/_alias

  The response may contain multiple aliases if the results of multiple jobs are stored in the same index.

  {
    ".ml-anomalies-custom-example": {
      "aliases": {
        ".ml-anomalies-example1": {
          "filter": {
            "term": {
              "job_id": {
                "value": "example1"
              }
            }
          },
          "is_hidden": true
        },
        ".ml-anomalies-example2": {
          "filter": {
            "term": {
              "job_id": {
                "value": "example2"
              }
            }
          },
          "is_hidden": true
        }
      }
    }
  }

- Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same filter and is_hidden parameters as for the original index.

  POST _aliases
  {
    "actions": [
      {
        "add": {
          "index": ".reindexed-v9-ml-anomalies-custom-example",
          "alias": ".ml-anomalies-example1",
          "filter": {
            "term": {
              "job_id": {
                "value": "example1"
              }
            }
          },
          "is_hidden": true
        }
      },
      {
        "add": {
          "index": ".reindexed-v9-ml-anomalies-custom-example",
          "alias": ".ml-anomalies-example2",
          "filter": {
            "term": {
              "job_id": {
                "value": "example2"
              }
            }
          },
          "is_hidden": true
        }
      },
      {
        "remove": {
          "index": ".ml-anomalies-custom-example",
          "aliases": ".ml-anomalies-*"
        }
      },
      {
        "remove_index": {
          "index": ".ml-anomalies-custom-example"
        }
      },
      {
        "add": {
          "index": ".reindexed-v9-ml-anomalies-custom-example",
          "alias": ".ml-anomalies-custom-example",
          "is_hidden": true
        }
      }
    ]
  }
Marking anomaly result indices as read-only
Legacy indices created in Elasticsearch 7.x can be made read-only and remain supported in Elasticsearch 9.x. Making an index with a large amount of historical results read-only allows for a quick migration to the next major release, since you don't have to wait for the data to be reindexed into the new format. However, it has the limitation that even after deleting an anomaly detection job, the historical results associated with this job are not completely deleted. Therefore, the system will prevent you from creating a new job with the same name.
Be sure to resolve any obsolete model snapshot warnings before marking the index read-only.
To set the index as read-only, add the write block to the index:
PUT .ml-anomalies-custom-example/_block/write
Indices created in Elasticsearch 7.x that have a write block will not raise a critical deprecation warning.
Deleting anomaly result indices
If an index contains only results from jobs that are no longer required, you can delete those jobs and the index with them. To list all jobs that stored results in an index, use the terms aggregation:

GET .ml-anomalies-custom-example/_search
{
  "size": 0,
  "aggs": {
    "job_ids": {
      "terms": {
        "field": "job_id",
        "size": 100
      }
    }
  }
}
The jobs can be deleted in the UI. After the last job is deleted, the index will be deleted as well.
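If you prefer the API over the UI, jobs can also be removed with the delete anomaly detection job API. A sketch, using a hypothetical job name example1:

```console
DELETE _ml/anomaly_detectors/example1
```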
Transform destination indices migration
The transform destination indices created in Elasticsearch 7.x must be reset, reindexed, or deleted before upgrading to 9.x.
Resetting: You can reset the transform to delete all state, checkpoints, and the destination index (if it was created by the transform). The next time you start the transform, it will reprocess all data from the source index, creating a new destination index in Elasticsearch 8.x compatible with 9.x. However, if data had been deleted from the source index, you will lose all previously computed results that had been stored in the destination index.
Reindexing: You can reindex the destination index and then update the transform to write to the new destination index. This is useful if there are results that you want to retain that may not exist in the source index. To prevent the transform and reindex tasks from conflicting with one another, you can either pause the transform while the reindex runs, or you can write to the new destination index while the reindex backfills old results.
Deleting: You can delete any transforms that are no longer used. Once a transform is deleted, you can either delete its destination index or make it read-only.
Which indices require attention?
To identify indices that require action, use the Deprecation info API:
GET /_migration/deprecations
The response contains the list of critical deprecation warnings in the index_settings section:

"index_settings": {
  "my-destination-index": [
    {
      "level": "critical",
      "message": "One or more Transforms write to this index with a compatibility version < 9.0",
      "url": "https://www.elastic.co/guide/en/elasticsearch/reference/master/migrating-9.0.html#breaking_90_transform_destination_index",
      "details": "Transforms [my-transform] write to this index with version [7.8.23].",
      "resolve_during_rolling_upgrade": false
    }
  ]
}
Resetting the transform
If the index was created by the transform, you can use the Transform Reset API to delete the destination index and recreate it the next time the transform runs.
If the index was not created by the transform, and you still want to reset it, you can manually delete and recreate the index, then call the Reset API.
POST _transform/my-transform/_reset
Reindexing the transform’s destination index while the transform is paused
When the Kibana Upgrade Assistant reindexes the documents, Kibana puts a write block on the old destination index, copies the results to a new index, deletes the old index, and creates an alias to the new index. During this time, the transform pauses and waits for the destination to become writable again. If you do not want the transform to pause, skip to Reindexing the transform's destination index while the transform is running.
If an index is smaller than 10 GB, we recommend using Kibana's Upgrade Assistant to migrate the index automatically. You can use the Get index information API to obtain the size of an index:

GET _cat/indices/.transform-destination-example?v&h=index,store.size

The reindexing can be initiated in the Kibana Upgrade Assistant.

If an index is larger than 10 GB, it is recommended to use the Reindex API. Reindexing consists of the following steps:
- Set the original index to read-only.

  PUT .transform-destination-example/_block/read_only

- Create a new index from the legacy index.

  POST _create_from/.transform-destination-example/.reindexed-v9-transform-destination-example

- Reindex documents. To accelerate the reindexing process, it is recommended that the number of replicas be set to 0 before the reindexing and then set back to the original number once it is completed.

  - Get the number of replicas.

    GET /.reindexed-v9-transform-destination-example/_settings

    Note the number of replicas in the response. For example:

    {
      ".reindexed-v9-transform-destination-example": {
        "settings": {
          "index": {
            "number_of_replicas": "1",
            "number_of_shards": "1"
          }
        }
      }
    }

  - Set the number of replicas to 0.

    PUT /.reindexed-v9-transform-destination-example/_settings
    {
      "index": {
        "number_of_replicas": 0
      }
    }

  - Start the reindexing process in asynchronous mode.

    POST _reindex?wait_for_completion=false
    {
      "source": {
        "index": ".transform-destination-example"
      },
      "dest": {
        "index": ".reindexed-v9-transform-destination-example"
      }
    }

    The response contains a task_id. You can check whether the task is completed using the following command:

    GET _tasks/<task_id>

  - Set the number of replicas back to the original number when the reindexing is finished.

    PUT /.reindexed-v9-transform-destination-example/_settings
    {
      "index": {
        "number_of_replicas": "<original_number_of_replicas>"
      }
    }

- Get the aliases the original index is pointing to.

  GET .transform-destination-example/_alias

  The response may contain multiple aliases if the results of multiple jobs are stored in the same index.

  {
    ".transform-destination-example": {
      "aliases": {
        ".transform-destination-example1": {
          "filter": {
            "term": {
              "job_id": {
                "value": "example1"
              }
            }
          },
          "is_hidden": true
        },
        ".transform-destination-example2": {
          "filter": {
            "term": {
              "job_id": {
                "value": "example2"
              }
            }
          },
          "is_hidden": true
        }
      }
    }
  }

- Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same filter and is_hidden parameters as for the original index.

  POST _aliases
  {
    "actions": [
      {
        "add": {
          "index": ".reindexed-v9-transform-destination-example",
          "alias": ".transform-destination-example1",
          "filter": {
            "term": {
              "job_id": {
                "value": "example1"
              }
            }
          },
          "is_hidden": true
        }
      },
      {
        "add": {
          "index": ".reindexed-v9-transform-destination-example",
          "alias": ".transform-destination-example2",
          "filter": {
            "term": {
              "job_id": {
                "value": "example2"
              }
            }
          },
          "is_hidden": true
        }
      },
      {
        "remove": {
          "index": ".transform-destination-example",
          "aliases": ".transform-destination-*"
        }
      },
      {
        "remove_index": {
          "index": ".transform-destination-example"
        }
      },
      {
        "add": {
          "index": ".reindexed-v9-transform-destination-example",
          "alias": ".transform-destination-example",
          "is_hidden": true
        }
      }
    ]
  }
Reindexing the transform’s destination index while the transform is running
If you want the transform and the reindex task to write documents to the new destination index at the same time:
- Create a new index from the legacy index.

  POST _create_from/my-destination-index/my-new-destination-index

- Update the transform to write to the new destination index:

  POST _transform/my-transform/_update
  {
    "dest": {
      "index": "my-new-destination-index"
    }
  }

- Reindex documents. To accelerate the reindexing process, it is recommended that the number of replicas be set to 0 before the reindexing and then set back to the original number once it is completed.

  - Get the number of replicas.

    GET /my-destination-index/_settings

    Note the number of replicas in the response. For example:

    {
      "my-destination-index": {
        "settings": {
          "index": {
            "number_of_replicas": "1",
            "number_of_shards": "1"
          }
        }
      }
    }

  - Set the number of replicas to 0.

    PUT /my-destination-index/_settings
    {
      "index": {
        "number_of_replicas": 0
      }
    }

  - Start the reindexing process in asynchronous mode. Set the op_type to create so the reindex does not overwrite work that the transform is doing.

    POST _reindex?wait_for_completion=false
    {
      "conflicts": "proceed",
      "source": {
        "index": "my-destination-index"
      },
      "dest": {
        "index": "my-new-destination-index",
        "op_type": "create"
      }
    }

    The response contains a task_id. You can check whether the task is completed using the following command:

    GET _tasks/<task_id>

  - Set the number of replicas back to the original number when the reindexing is finished.

    PUT /my-new-destination-index/_settings
    {
      "index": {
        "number_of_replicas": "<original_number_of_replicas>"
      }
    }

- Get the aliases the original index is pointing to.

  GET my-destination-index/_alias

  For example:

  {
    "my-destination-index": {
      "aliases": {
        "my-destination-alias": {}
      }
    }
  }

- Now you can reassign the aliases to the new index and delete the original index in one step. Note that when adding the new index to the aliases, you must use the same filter and is_hidden parameters as for the original index.

  POST _aliases
  {
    "actions": [
      {
        "add": {
          "index": "my-new-destination-index",
          "alias": "my-destination-alias"
        }
      },
      {
        "remove": {
          "index": "my-destination-index",
          "aliases": "my-destination-alias"
        }
      },
      {
        "remove_index": {
          "index": "my-destination-index"
        }
      }
    ]
  }
Deleting the transform
You can use the Transform Delete API to delete the transform and stop it from writing to the destination index.
DELETE _transform/my-transform
If the destination index is no longer needed, it can be deleted along with the transform:

DELETE _transform/my-transform?delete_dest_index=true