Quickstart
With Elastic Cloud on Kubernetes (ECK) you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, and upgrade your Elasticsearch cluster, and much more.
Eager to get started? This quick guide shows you how to deploy ECK in your Kubernetes cluster, deploy an Elasticsearch cluster and a Kibana instance, upgrade your deployment, and use persistent storage.
Requirements
Make sure that you have kubectl version 1.11+ installed.
Deploy ECK in your Kubernetes cluster
If you are using GKE, make sure your user has cluster-admin permissions. For more information, see Prerequisites for using Kubernetes RBAC on GKE.
1. Install custom resource definitions and the operator with its RBAC rules:
kubectl apply -f https://download.elastic.co/downloads/eck/0.8.1/all-in-one.yaml
2. Monitor the operator logs:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
Deploy the Elasticsearch cluster
Apply a simple Elasticsearch cluster specification, with one node:
The default resource request is 2GB memory and 100m CPU. Your pod will be Pending if your cluster does not have enough resources.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodes:
  - nodeCount: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
EOF
The operator automatically manages pods and resources corresponding to the desired cluster. It may take up to a few minutes until the cluster is ready.
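If the default resource request mentioned above is too small or too large for your environment, it can be overridden in the cluster specification. The sketch below is an assumption, not a verbatim ECK example: the `podTemplate` field under each node specification and the container name `elasticsearch` are assumed from the ECK node spec, so verify them against the CRD version you installed before relying on this.

```
# Hypothetical resource override; the podTemplate field and container
# name are assumptions -- check your installed CRD before relying on them.
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodes:
  - nodeCount: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi
              cpu: 1
```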
Monitor cluster health and creation progress
editGet an overview of the current Elasticsearch clusters in the Kubernetes cluster, including health, version and number of nodes:
kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE         AGE
quickstart   green    1       7.1.0     Operational   1m
When you create the cluster, there is no HEALTH status and the PHASE is Pending. After a while, the PHASE turns into Operational, and HEALTH becomes green.
You can see that one pod is in the process of being started:
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                       READY   STATUS    RESTARTS   AGE
quickstart-es-5zctxpn8nd   1/1     Running   0          1m
Access the logs for that Pod:
kubectl logs -f quickstart-es-5zctxpn8nd
Request Elasticsearch access
A ClusterIP Service is automatically created for your cluster:
kubectl get service quickstart-es
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es   ClusterIP   10.15.251.145   <none>        9200/TCP   34m
1. Get access to Elasticsearch.
From the Kubernetes cluster, use this URL:
https://quickstart-es:9200
From your local workstation, use the following command:
kubectl port-forward service/quickstart-es 9200
2. Get the credentials.
A default user named elastic is automatically created. Its password is stored as a Kubernetes secret:
PASSWORD=$(kubectl get secret quickstart-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
3. Request the Elasticsearch endpoint:
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
For testing purposes only, you can specify the -k option to turn off certificate verification.
{ "name" : "quickstart-es-r56c9dzzcr", "cluster_name" : "quickstart", "cluster_uuid" : "XqWg0xIiRmmEBg4NMhnYPg", "version" : { "number" : "7.1.0", "build_flavor" : "default", "build_type" : "docker", "build_hash" : "04116c9", "build_date" : "2019-05-08T06:20:03.781729Z", "build_snapshot" : true, "lucene_version" : "8.0.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
Deploy the Kibana instance
To deploy your Kibana instance, go through the following steps.
1. Specify a Kibana instance and associate it with your Elasticsearch cluster:
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1alpha1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodeCount: 1
  elasticsearchRef:
    name: quickstart
EOF
2. Monitor Kibana health and creation progress.
Similarly to Elasticsearch, you can retrieve details about Kibana instances:
kubectl get kibana
And the associated Pods:
kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
3. Access Kibana.
A ClusterIP Service is automatically created for Kibana:
kubectl get service quickstart-kibana
Use kubectl port-forward to access Kibana from your local workstation:
kubectl port-forward service/quickstart-kibana 5601
Open http://localhost:5601 in your browser.
Log in with the elastic user. Retrieve its password with:
echo $(kubectl get secret quickstart-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
Upgrade your deployment
You can apply any modification to the original cluster specification. The operator makes sure that your changes are applied to the existing cluster, while avoiding downtime.
For example, you can grow the cluster to three nodes:
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodes:
  - nodeCount: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
EOF
Use persistent storage
The cluster that you deployed in this quickstart uses an emptyDir volume, which might not qualify for production workloads.
You can request a PersistentVolumeClaim in the cluster specification, to target any PersistentVolume class available in your Kubernetes cluster:
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.1.0
  nodes:
  - nodeCount: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        # storageClassName: standard # can be any available storage class
EOF
For the best performance, the operator supports persistent volumes local to each node. For more details, see:
- persistent volumes storage classes
- elastic local volume dynamic provisioner to set up dynamic local volumes based on LVM.
- kubernetes-sigs local volume static provisioner to set up static local volumes.
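As a concrete illustration of the static local-volume approach, a StorageClass along the following lines can back such volumes. This is standard Kubernetes API, not ECK-specific, and the class name `local-storage` is just an example:

```
# StorageClass for statically provisioned local volumes.
# WaitForFirstConsumer delays binding until a pod is scheduled, so the
# PersistentVolumeClaim binds to a volume on the pod's own node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

A volumeClaimTemplate can then reference this class via storageClassName to keep Elasticsearch data on node-local disks.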
Check out the samples
You can find a set of sample resources in the project repository. To customize the Elasticsearch resource, check the Elasticsearch sample.
For a full description of each CustomResourceDefinition, go to the project repository.
You can also retrieve it from the cluster. For example, describe the Elasticsearch CRD specification with:
kubectl describe crd elasticsearch