Connect an ECK-managed cluster to an external cluster or deployment
These steps describe how to configure a remote cluster connection from an Elasticsearch cluster managed by Elastic Cloud on Kubernetes (ECK) to an external Elasticsearch cluster, not managed by ECK. The remote cluster can be self-managed, or part of an Elastic Cloud Hosted (ECH) or Elastic Cloud Enterprise (ECE) deployment.
After the connection is established, you'll be able to run cross-cluster search (CCS) queries from Elasticsearch or set up cross-cluster replication (CCR).
In the case of remote clusters, the Elasticsearch cluster or deployment initiating the connection and requests is often referred to as the local cluster, while the Elasticsearch cluster or deployment receiving the requests is referred to as the remote cluster.
In this scenario, most of the configuration must be performed manually, as Elastic Cloud on Kubernetes cannot orchestrate the setup across both clusters. For fully automated configuration between ECK-managed clusters, refer to Connect to Elasticsearch clusters in the same Elastic Cloud on Kubernetes environment.
For other remote cluster scenarios with ECK, such as connecting clusters in different ECK environments, refer to Remote clusters on ECK.
Follow these steps to configure the API key security model for remote clusters. If you run into any issues, refer to Troubleshooting.
For the deprecated TLS certificate–based authentication model, the steps to allow the remote connection and establish mutual trust between clusters are effectively the same regardless of which cluster acts as the local or remote one. Once trust is established, remote connections can be configured in either direction.
Because of this, if you want to configure TLS certificate–based authentication for any of the scenarios covered in this guide, refer to:
Follow the steps corresponding to the deployment type of your remote cluster:
If the remote cluster is part of an Elastic Cloud Hosted deployment, the remote cluster server is enabled by default and it uses a publicly trusted certificate provided by the platform proxies. Therefore, you can skip this step.
If the remote cluster is part of an Elastic Cloud Enterprise deployment, the remote cluster server is enabled by default, and secured with TLS certificates.
Depending on the type of certificate used by the ECE proxies or load-balancing layer, the local cluster requires the associated certificate authority (CA) to establish trust:
If your ECE proxies use publicly trusted certificates, no additional CA is required.
If your ECE proxies use certificates signed by a private CA, retrieve the root CA from the ECE Cloud UI:
In the remote ECE environment, go to Platform > Settings > TLS certificates.
Under Proxy, select Show certificate chain.
Click Copy root certificate and paste it into a new file. The root certificate is the last certificate shown in the chain.
Save the file with a `.crt` extension, and keep it available for the trust configuration on the local cluster.
By default, the remote cluster server interface is not enabled on self-managed clusters. Follow the steps below to enable the interface:
Enable the remote cluster server on every node of the remote cluster. In `elasticsearch.yml`:

- Set `remote_cluster_server.enabled` to `true`.
- Configure the bind and publish address for remote cluster server traffic, for example using `remote_cluster.host`. Without configuring the address, remote cluster traffic can be bound to the local interface, and remote clusters running on other machines can't connect.
- Optionally, configure the remote server port using `remote_cluster.port` (defaults to `9443`).
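Taken together, the remote cluster server settings in `elasticsearch.yml` might look like the following sketch (the host value is illustrative; adjust it to your network):

```yaml
# Enable the dedicated remote cluster server interface
remote_cluster_server.enabled: true
# Bind and publish address for remote cluster traffic (illustrative value)
remote_cluster.host: 0.0.0.0
# Optional: remote cluster server port (9443 is the default)
remote_cluster.port: 9443
```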
Generate a certificate authority (CA) and a server certificate/key pair. On one of the nodes of the remote cluster, from the directory where Elasticsearch has been installed:
1. Create a CA, if you don't have one already:

   ```sh
   ./bin/elasticsearch-certutil ca --pem --out=cross-cluster-ca.zip --pass CA_PASSWORD
   ```

   Replace `CA_PASSWORD` with the password you want to use for the CA. You can remove the `--pass` option and its argument if you are not deploying to a production environment.

2. Unzip the generated `cross-cluster-ca.zip` file. This compressed file contains the following content:

   ```
   /ca
   |_ ca.crt
   |_ ca.key
   ```

3. Generate a certificate and private key pair for the nodes in the remote cluster:

   ```sh
   ./bin/elasticsearch-certutil cert --out=cross-cluster.p12 --pass=CERT_PASSWORD --ca-cert=ca/ca.crt --ca-key=ca/ca.key --ca-pass=CA_PASSWORD --dns=<CLUSTER_FQDN> --ip=192.0.2.1
   ```

   - Replace `CA_PASSWORD` with the CA password from the previous step.
   - Replace `CERT_PASSWORD` with the password you want to use for the generated private key.
   - Use the `--dns` option to specify the relevant DNS name for the certificate. You can specify it multiple times for multiple DNS names.
   - Use the `--ip` option to specify the relevant IP address for the certificate. You can specify it multiple times for multiple IP addresses.
If the remote cluster has multiple nodes, you can do one of the following:
- Create a single wildcard certificate for all nodes.
- Create separate certificates for each node either manually or in batch with the silent mode.
On every node of the remote cluster, do the following:
1. Copy the `cross-cluster.p12` file from the earlier step to the `config` directory. If you didn't create a wildcard certificate, make sure you copy the correct node-specific p12 file.

2. Add the following configuration to `elasticsearch.yml`:

   ```yaml
   xpack.security.remote_cluster_server.ssl.enabled: true
   xpack.security.remote_cluster_server.ssl.keystore.path: cross-cluster.p12
   ```

3. Add the SSL keystore password to the Elasticsearch keystore:

   ```sh
   ./bin/elasticsearch-keystore add xpack.security.remote_cluster_server.ssl.keystore.secure_password
   ```

   When prompted, enter the `CERT_PASSWORD` from the earlier step.
Restart the remote cluster.
If the remote cluster server is exposed with a certificate signed by a private certificate authority (CA), save the corresponding `ca.crt` file. It is required when configuring trust on the local cluster.
- On the remote cluster, use the Elasticsearch API or Kibana to create a cross-cluster API key. Configure it to include access to the indices you want to use for cross-cluster search or cross-cluster replication.
- Copy the encoded key (`encoded` in the response) to a safe location. It is required for the local cluster configuration.
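A cross-cluster API key is created with the create cross-cluster API key endpoint. The following request is a sketch: the key name and index patterns are illustrative, so adjust them to the indices you want to expose for search and replication:

```
POST /_security/cross_cluster/api_key
{
  "name": "eck-local-cluster-key",
  "access": {
    "search": [
      { "names": [ "logs-*" ] }
    ],
    "replication": [
      { "names": [ "logs-*" ] }
    ]
  }
}
```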
The API key created previously is needed by the local cluster to authenticate with the corresponding set of permissions to the remote deployment or cluster. To enable this, add the API key to the local cluster's keystore.
The steps to follow depend on whether the certificate authority (CA) presented by the remote cluster server, proxy, or load-balancing infrastructure is publicly trusted or private.
If the remote cluster is part of an Elastic Cloud Hosted deployment, follow the steps under The CA is public. Elastic Cloud Hosted proxies use publicly trusted certificates, so no CA configuration is required.
The CA is public
Store the API key encoded value in a Secret
The following command creates a secret containing the encoded API key obtained earlier:
```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: remote-api-keys
type: Opaque
stringData:
  cluster.remote.<remote-cluster-name>.credentials: <encoded value>
EOF
```

- For `<remote-cluster-name>`, enter the alias of your choice. This alias is used when connecting to the remote cluster. It must be lowercase and only contain letters, numbers, dashes, and underscores.
Configure the Elasticsearch resource
Update the Elasticsearch manifest to:
- Load the API key from the previously created secret using `secureSettings`
- Enable the remote cluster SSL client in the `config` section of each `nodeSet`

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: <local-cluster-name>
spec:
  version: 9.2.4
  secureSettings:
  - secretName: remote-api-keys
  nodeSets:
  - name: default
    count: 3
    config:
      xpack:
        security:
          remote_cluster_client:
            ssl:
              enabled: true
```

- The secret name must match the secret created in the previous step.
- Repeat this configuration for all `nodeSets`.
The CA is private
Store the API key encoded value in a Secret
The following command creates a secret containing the encoded API key obtained earlier:
```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: remote-api-keys
type: Opaque
stringData:
  cluster.remote.<remote-cluster-name>.credentials: <encoded value>
EOF
```

- For `<remote-cluster-name>`, enter the alias of your choice. This alias is used when connecting to the remote cluster. It must be lowercase and only contain letters, numbers, dashes, and underscores.
Store the CA certificate in a ConfigMap or Secret
Store the CA certificate retrieved earlier in a ConfigMap or Secret. The following example creates a ConfigMap named `remote-ca` that stores the content of a local file (`my-ca.crt`) under the `remote-cluster-ca.crt` key:

```sh
kubectl create configmap remote-ca -n <namespace> --from-file=remote-cluster-ca.crt=my-ca.crt
```

Configure the Elasticsearch resource
Update the Elasticsearch manifest to:
- Load the API key from the previously created secret using `secureSettings`
- Mount the CA certificate from the previously created ConfigMap as a custom file in the Elasticsearch Pods
- Enable and configure the remote cluster SSL client in the `config` section of each `nodeSet`

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: <local-cluster-name>
spec:
  version: 9.2.4
  secureSettings:
  - secretName: remote-api-keys
  nodeSets:
  - name: default
    count: 3
    config:
      xpack:
        security:
          remote_cluster_client:
            ssl:
              enabled: true
              certificate_authorities: [ "remote-certs/remote-cluster-ca.crt" ]
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          volumeMounts:
          - name: remote-ca
            mountPath: /usr/share/elasticsearch/config/remote-certs
        volumes:
        - name: remote-ca
          configMap:
            name: remote-ca
```

- Repeat this configuration for all `nodeSets`.
- The file name must match the `key` of the ConfigMap that contains the CA certificate.
- The `configMap` name must match the name of the ConfigMap created previously.
On the local cluster, add the remote cluster using Kibana or the Elasticsearch API.
This guide uses the proxy connection mode, which is the only practical option when connecting to Elastic Cloud Hosted, Elastic Cloud Enterprise, or Elastic Cloud on Kubernetes clusters from outside their Kubernetes environment.
If the remote cluster is self-managed (or another ECK cluster within the same Kubernetes network) and the local cluster can reach the remote nodes’ publish addresses directly, you can use sniff mode instead. Refer to connection modes documentation for details on each mode and their connectivity requirements.
Go to the Remote Clusters management page in the navigation menu or use the global search field.
Select Add a remote cluster.
In Select connection type, choose the API keys authentication mechanism and click Next.
Set the Remote cluster name. This name must match the `<remote-cluster-name>` you configured when adding the API key to the local cluster's keystore.

In Connection mode, select Manually enter proxy address and server name to enable the proxy mode, then fill in the following fields:

Proxy address: The endpoint of the remote cluster, including the hostname, FQDN, or IP address, and the port:

- Elastic Cloud Hosted: Obtain the endpoint from the Security page of the ECH deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters section, and replace its port with `9443`, which is the port used by the remote cluster server interface.
- Elastic Cloud Enterprise: Obtain the endpoint from the Security page of the ECE deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters, and replace its port with `9443`.
- Self-managed: The endpoint depends on your network architecture and the selected connection mode (`sniff` or `proxy`). It can be one or more Elasticsearch nodes, or a TCP (layer 4) load balancer or reverse proxy in front of the cluster, as long as the local cluster can reach them over port `9443`. If you are configuring `sniff` mode, set the seeds parameter instead of the proxy address. Refer to the connection modes documentation for details and connectivity requirements of each mode.

Starting with Kibana 9.2, this field also supports IPv6 addresses. When using an IPv6 address, enclose it in square brackets followed by the port number. For example: `[2001:db8::1]:9443`.

Server name (optional): Specify a value if the TLS certificate presented by the remote cluster is signed for a different name than the proxy address.
Click Next.
In Confirm setup, click Add remote cluster (you have already established trust in a previous step).
To add a remote cluster, use the cluster update settings API. Configure the following fields:
Remote cluster alias: Must match the `<remote-cluster-name>` you configured when adding the API key to the local cluster's keystore.

`mode`: Use `proxy` mode in almost all cases. `sniff` mode is only applicable when the remote cluster is self-managed and the local cluster can reach the nodes' publish addresses directly.

`proxy_address`: The endpoint of the remote cluster, including the hostname, FQDN, or IP address, and the port. Both IPv4 and IPv6 addresses are supported:

- Elastic Cloud Hosted: Obtain the endpoint from the Security page of the ECH deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters section, and replace its port with `9443`, which is the port used by the remote cluster server interface.
- Elastic Cloud Enterprise: Obtain the endpoint from the Security page of the ECE deployment you want to use as a remote. Copy the Proxy address from the Remote cluster parameters, and replace its port with `9443`.
- Self-managed: The endpoint depends on your network architecture and the selected connection mode (`sniff` or `proxy`). It can be one or more Elasticsearch nodes, or a TCP (layer 4) load balancer or reverse proxy in front of the cluster, as long as the local cluster can reach them over port `9443`. If you are configuring `sniff` mode, set the seeds parameter instead of the proxy address. Refer to the connection modes documentation for details and connectivity requirements of each mode.

When using an IPv6 address, enclose it in square brackets followed by the port number. For example: `[2001:db8::1]:9443`.

`server_name`: Specify a value if the certificate presented by the remote cluster is signed for a different name than the `proxy_address`.
This is an example of the API call to add or update a remote cluster:
PUT /_cluster/settings
{
"persistent": {
"cluster": {
"remote": {
"alias-for-my-remote-cluster": {
"mode":"proxy",
"proxy_address": "<REMOTE_CLUSTER_ADDRESS>:9443",
"server_name": "<REMOTE_CLUSTER_SERVER_NAME>"
}
}
}
}
}
- Align the alias with the remote cluster name used when adding the API key as a secure setting.
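If you use `sniff` mode instead, set `seeds` rather than `proxy_address`. A minimal sketch, with an illustrative seed address (remember that `sniff` mode requires direct reachability of the remote nodes' publish addresses):

```
PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "alias-for-my-remote-cluster": {
          "mode": "sniff",
          "seeds": [ "10.0.0.1:9443" ]
        }
      }
    }
  }
}
```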
For a full list of available client connection settings, refer to the remote cluster settings reference.
From the local cluster, check the status of the connection to the remote cluster. If you encounter issues, refer to the Troubleshooting guide.
GET _remote/info
In the response, verify that connected is true:
{
"<remote-alias>": {
"connected": true,
"mode": "proxy",
"proxy_address": "<REMOTE_CLUSTER_ADDRESS>:9443",
"server_name": "<REMOTE_CLUSTER_SERVER_NAME>",
"num_proxy_sockets_connected": 18,
"max_proxy_socket_connections": 18,
"initial_connect_timeout": "30s",
"skip_unavailable": true,
"cluster_credentials": "::es_redacted::"
}
}
If you're using the API key–based security model for cross-cluster replication or cross-cluster search, you can define user roles with remote indices privileges on the local cluster to further restrict the permissions granted by the API key. For more details, refer to Configure roles and users.
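For example, the following sketch (role name and index pattern are illustrative) creates a role on the local cluster that only allows read access to `logs-*` indices through the remote cluster alias configured earlier:

```
POST /_security/role/remote-logs-reader
{
  "remote_indices": [
    {
      "clusters": [ "alias-for-my-remote-cluster" ],
      "names": [ "logs-*" ],
      "privileges": [ "read", "read_cross_cluster", "view_index_metadata" ]
    }
  ]
}
```

Users assigned this role can only access remote indices that the cross-cluster API key itself also grants; the effective permissions are the intersection of the two.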