
Introducing beta releases of Elasticsearch and Kibana Docker images!

Beta releases of the Elasticsearch and Kibana Docker images are here! They track the latest versions of Elasticsearch and Kibana 5.0 and come pre-installed with our awesome X-Pack extension. The images are hosted on Elastic's own Docker Registry.
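
If you want to pull the images directly, they are published under the following names (the same ones referenced in the example docker-compose.yml at the end of this post):

$ docker pull docker.elastic.co/elasticsearch/elasticsearch
$ docker pull docker.elastic.co/kibana/kibana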

Instructions can be found on the elasticsearch-docker and kibana-docker GitHub pages, but let's see how easy it is to launch an Elasticsearch + Kibana stack with them.

First, ensure that:

  1. You have Docker Engine installed.
  2. Your host meets the prerequisites (see the example after this list).
  3. If you are on Linux, docker-compose is installed.
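
On Linux, the prerequisite most likely to trip you up is the vm.max_map_count kernel setting, which Elasticsearch needs raised to at least 262144; a minimal sketch of checking and setting it:

$ sysctl vm.max_map_count                    # check the current value
$ sudo sysctl -w vm.max_map_count=262144     # raise it (add to /etc/sysctl.conf to persist)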

Download an example docker-compose.yml definition[1] to help us bring up Elasticsearch and Kibana, then issue:

$ docker-compose up

NOTE: the above command assumes that you don't already have Elasticsearch and Kibana listening on the default ports. Feel free to adjust the ports in the docker-compose.yml file.
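
For example, to avoid a clash you could remap the host side of each port mapping while leaving the container side untouched (a sketch against the docker-compose.yml below; 5602 and 9201 are just arbitrary free ports):

  kibana:
    ports:
      - 5602:5601
  elasticsearch:
    ports:
      - 9201:9200

With that change, Kibana would be reachable at http://localhost:5602 instead of the default port used in the rest of this post.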

You will see logs from Elasticsearch and Kibana racing across your screen.

To access Kibana, visit http://localhost:5601 and you will be greeted by the Kibana 5.0 login page!

[Image: the Kibana 5.0 login page]

Since X-Pack is pre-installed, you can use the default credentials (login: elastic, password: changeme), but you are strongly advised to change them via the Management menu in Kibana.
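
If you prefer the command line, the X-Pack change password API can do the same; a minimal sketch, assuming the default port and substituting a new password of your own:

curl -u elastic -XPOST 'localhost:9200/_xpack/security/user/elastic/_password' -d '{"password": "pick-a-stronger-password"}'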

Now let's add some data.

Using curl, we will create an index called users containing documents for two users:

curl -u elastic -XPOST 'localhost:9200/users/user/1' -d '{"first_name": "John", "last_name": "Doe", "email_address": "[email protected]"}'
curl -u elastic -XPOST 'localhost:9200/users/user/2' -d '{"first_name": "James", "last_name": "Kirk", "email_address": "[email protected]"}'
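
To confirm the documents were indexed, you can query the new index (you will be prompted for the elastic user's password, as above):

curl -u elastic 'localhost:9200/users/_search?pretty'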

Back in the Kibana UI, let's configure an index pattern for the data. Enter users as the index name and untick Index contains time-based events, as shown below:

[Image: configuring the users index pattern in Kibana 5.0]

Click "create" and now you can view the inserted data in the Kibana Discover page:

[Image: the inserted data on the Kibana 5.0 Discover page]

Terminating your containers is as simple as running docker-compose down. This will not destroy the Elasticsearch data volume, so if you run docker-compose up again your data will still be present. To remove both the containers and the data volume, use docker-compose down -v instead.
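
In other words:

$ docker-compose down      # stops and removes the containers, keeps the esdata1 volume
$ docker-compose down -v   # also removes the esdata1 data volume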

The images are in beta, and we do not advise running beta software in production. However, we have compiled a number of best practices on the GitHub page.

We welcome issues and PRs. Stay tuned for the full-release version!

[1] Example docker-compose.yml:

---
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana
    links:
      - elasticsearch
    ports:
      - 5601:5601

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

volumes:
  esdata1:
    driver: local