Running Enterprise Search using Docker
As an alternative to the native installation method, you can run Enterprise Search in a Docker container. This is useful for running the solution in development and test environments, and in production when combined with an orchestration tool like Docker Compose or Kubernetes.
For instructions on running the solution in Kubernetes, see the dedicated guide for running Enterprise Search with ECK.
Docker images
The Elastic Docker registry provides Docker images for Enterprise Search. The images support both x86 and ARM platforms.
You can download images from the registry, or use docker pull:

docker pull docker.elastic.co/enterprise-search/enterprise-search:8.2.3
Configuration
When running in Docker, you configure Enterprise Search using environment variables, with fully-qualified setting names as the variable names. See Configuration for a list of configurable values.
You must configure the values that are required for a standard installation method. In most cases, these are allow_es_settings_modification and secret_management.encryption_keys.
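As a sketch of the pattern, each fully-qualified setting name from the YAML configuration is passed verbatim as an environment variable to the container. The snippet below is illustrative only (the setting values are placeholders, not recommendations):

```shell
# Sketch only: each fully-qualified setting name becomes one --env flag
# on docker run. The values below are placeholders for illustration.
set -- \
  "allow_es_settings_modification=true" \
  "secret_management.encryption_keys=[example_key]"

args=""
for s in "$@"; do
  args="$args --env '$s'"
done
echo "docker run$args ..."
```

The same names would appear as top-level keys in enterprise-search.yml for a native installation; in Docker they simply move to the environment.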
Run Enterprise Search using docker run

Use docker run to manage Elastic containers imperatively.
Enterprise Search depends on Elasticsearch and Kibana. The following steps describe how to start all three services.
This configuration is appropriate for development and testing.
1. Allocate at least 4GB of memory to Docker Engine:

If you are using Docker Desktop with default settings, you need to increase the memory allocated to Docker Engine. Refer to the Docker documentation for your platform.
2. Create a Docker network:

docker network create elastic
3. Create and start the Elasticsearch container interactively:

docker run \
  --name "elasticsearch" \
  --network "elastic" \
  --publish "9200:9200" \
  --volume "es-config:/usr/share/elasticsearch/config:rw" \
  --interactive \
  --tty \
  --rm \
  "docker.elastic.co/elasticsearch/elasticsearch:8.2.3"
The --volume argument mounts a volume within the container. Elasticsearch writes a certificate file to this volume on startup. The Enterprise Search container mounts the same volume in order to read the certificate file.

To restart Elasticsearch later, you must first delete the volume so Elasticsearch can start with a fresh configuration:

docker volume rm es-config
If the container fails to start with a message about vm.max_map_count, refer to the following Elasticsearch documentation for platform-specific solutions: Using the Docker images in production.

4. Save the password, enrollment token, and Elasticsearch address:
Within the Elasticsearch terminal output, locate the password for the elastic user and the enrollment token for Kibana. These are printed the first time Elasticsearch starts. The relevant output looks like this:

-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.

-> Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  ksk4kIAt0tQBWZ9qYz0p

-> HTTP CA certificate SHA-256 fingerprint:
  39e8552724167da188f7d1c4196e6335d32d0cd62115b34dd271f0519232bb7d

-> Configure Kibana to use this cluster:
* Run Kibana and click the configuration link in the terminal when Kibana starts.
* Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjAuMCIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiMzllODU1MjcyNDE2N2RhMTg4ZjdkMWM0MTk2ZTYzMzVkMzJkMGNkNjIxMTViMzRkZDI3MWYwNTE5MjMyYmI3ZCIsImtleSI6IkpVMGJ3SDRCTDJqOWx6TVlTLWpyOlVoeHBSa1JFUXVlbnV3UEVGTUlub1EifQ==
Also within the Elasticsearch output, locate the address at which Elasticsearch is running within the container. This may be easier to find by searching the logs for publish_address. For example, run the following in a separate terminal:

docker logs elasticsearch | grep 'publish_address'

The relevant information looks like this:

"message":"publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}"

Save the password, enrollment token, and Elasticsearch publish address for use in the following steps.
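If you want to script this step, the host:port can be pulled out of that log line with standard shell tools. A small sketch (the variable name ELASTICSEARCH_ADDRESS is just a convention for use in the later steps; in practice the sample line below would come from docker logs elasticsearch):

```shell
# Extract host:port from a publish_address log line.
line='"message":"publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}"'
ELASTICSEARCH_ADDRESS=$(printf '%s' "$line" | sed -n 's/.*publish_address {\([^}]*\)}.*/\1/p')
echo "$ELASTICSEARCH_ADDRESS"   # prints 172.18.0.2:9200
```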
5. Create, start, and enroll Kibana:

docker run \
  --name "kibana" \
  --network "elastic" \
  --publish "5601:5601" \
  --interactive \
  --tty \
  --rm \
  --env "ENTERPRISESEARCH_HOST=http://enterprise-search:3002" \
  "docker.elastic.co/kibana/kibana:8.2.3"

Open the link printed to the terminal to navigate to Kibana (http://localhost:5601?code=). Follow the instructions within Kibana to complete the enrollment process, using the enrollment token from step 4.

When you see the login screen, move to the next step.
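As an aside, the enrollment token is base64-encoded JSON that carries the Elasticsearch address (adr) and the CA certificate fingerprint (fgr), which is how Kibana knows where to connect and what to trust. Decoding the sample token from step 4 (assuming a base64 tool that supports -d) shows the structure:

```shell
# Decode the sample enrollment token from step 4 to inspect its fields.
token='eyJ2ZXIiOiI4LjAuMCIsImFkciI6WyIxNzIuMTguMC4yOjkyMDAiXSwiZmdyIjoiMzllODU1MjcyNDE2N2RhMTg4ZjdkMWM0MTk2ZTYzMzVkMzJkMGNkNjIxMTViMzRkZDI3MWYwNTE5MjMyYmI3ZCIsImtleSI6IkpVMGJ3SDRCTDJqOWx6TVlTLWpyOlVoeHBSa1JFUXVlbnV3UEVGTUlub1EifQ=='
decoded=$(printf '%s' "$token" | base64 -d)
echo "$decoded"
```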
6. Create and start the Enterprise Search container:

Use the following command template:

docker run \
  --name "enterprise-search" \
  --network "elastic" \
  --publish "3002:3002" \
  --volume "es-config:/usr/share/enterprise-search/es-config:ro" \
  --interactive \
  --tty \
  --rm \
  --env "secret_management.encryption_keys=[${ENCRYPTION_KEYS}]" \
  --env "allow_es_settings_modification=true" \
  --env "elasticsearch.host=https://${ELASTICSEARCH_ADDRESS}" \
  --env "elasticsearch.username=elastic" \
  --env "elasticsearch.password=${ELASTIC_PASSWORD}" \
  --env "elasticsearch.ssl.enabled=true" \
  --env "elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/es-config/certs/http_ca.crt" \
  --env "kibana.external_url=http://kibana:5601" \
  "docker.elastic.co/enterprise-search/enterprise-search:8.2.3"

Replace ${ENCRYPTION_KEYS} with at least one encryption key (a 256-bit key is recommended). For example:

secret_management.encryption_keys=['q3t6w9z$C&F)J@McQfTjWnZr4u7x!A%D']

Replace ${ELASTICSEARCH_ADDRESS} with the Elasticsearch address from step 4. For example:

elasticsearch.host=https://172.18.0.2:9200

Replace ${ELASTIC_PASSWORD} with the password from step 4. For example:

elasticsearch.password=ksk4kIAt0tQBWZ9qYz0p

Complete example:

docker run \
  --name "enterprise-search" \
  --network "elastic" \
  --publish "3002:3002" \
  --volume "es-config:/usr/share/enterprise-search/es-config:ro" \
  --interactive \
  --tty \
  --rm \
  --env "secret_management.encryption_keys=['q3t6w9z$C&F)J@McQfTjWnZr4u7x!A%D']" \
  --env "allow_es_settings_modification=true" \
  --env "elasticsearch.host=https://172.18.0.2:9200" \
  --env "elasticsearch.username=elastic" \
  --env "elasticsearch.password=ksk4kIAt0tQBWZ9qYz0p" \
  --env "elasticsearch.ssl.enabled=true" \
  --env "elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/es-config/certs/http_ca.crt" \
  --env "kibana.external_url=http://kibana:5601" \
  "docker.elastic.co/enterprise-search/enterprise-search:8.2.3"
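The key in the example above is for illustration only. One way to generate your own random 256-bit key, assuming openssl is installed, is:

```shell
# Generate 32 random bytes (256 bits), hex-encoded as 64 characters.
ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "$ENCRYPTION_KEY"
```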
7. Log in:

Open Enterprise Search in Kibana at http://localhost:5601/app/enterprise_search/overview. Log in as user elastic, using the password for this user from step 4.
Run Enterprise Search using docker-compose

Using Docker Compose is a more convenient way of running the solution in containers. This method is often used in local development environments to try out the product before a full production deployment.
This example runs Enterprise Search with Elasticsearch and Kibana in Docker Compose.
1. Create the following configuration files in a new, empty directory:
- Create a .env file to set environment variables, which are used by the docker-compose.yml configuration file. Include these environment variables:

STACK_VERSION=8.2.3
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
ES_PORT=9200
CLUSTER_NAME=es-cluster
LICENSE=basic
MEM_LIMIT=1073741824
KIBANA_PORT=5601
ENTERPRISE_SEARCH_PORT=3002
ENCRYPTION_KEYS=secret
Ensure that you specify a strong password for the elastic and kibana_system users with the ELASTIC_PASSWORD and KIBANA_PASSWORD variables. These variables are referenced by the docker-compose.yml file.

- Create a docker-compose.yml file:

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - ENTERPRISESEARCH_HOST=http://enterprisesearch:${ENTERPRISE_SEARCH_PORT}
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  enterprisesearch:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/enterprise-search/enterprise-search:${STACK_VERSION}
    volumes:
      - certs:/usr/share/enterprise-search/config/certs
      - enterprisesearchdata:/usr/share/enterprise-search/config
    ports:
      - ${ENTERPRISE_SEARCH_PORT}:3002
    environment:
      - SERVERNAME=enterprisesearch
      - secret_management.encryption_keys=[${ENCRYPTION_KEYS}]
      - allow_es_settings_modification=true
      - elasticsearch.host=https://es01:9200
      - elasticsearch.username=elastic
      - elasticsearch.password=${ELASTIC_PASSWORD}
      - elasticsearch.ssl.enabled=true
      - elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/config/certs/ca/ca.crt
      - kibana.external_url=http://kibana:5601
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:3002 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  enterprisesearchdata:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
This sample Docker Compose file brings up a single-node Elasticsearch cluster, then starts an Enterprise Search instance on it and configures a Kibana instance as the main way of interacting with the solution.
All components running in Docker Compose are attached to a dedicated Docker network called elastic and are exposed via a set of local ports accessible only from the local machine. If you want to open up a service to other computers on your network, change the port mapping for that service (for example, change 127.0.0.1:5601:5601 to 5601:5601 for Kibana).

The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up. To start over with a fresh configuration, delete the volumes when you bring the cluster down: docker-compose down -v
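Because the setup service exits immediately if the passwords are unset, it can save a cycle to sanity-check the values before running docker-compose up. A small, hypothetical helper (the function and its behavior are this guide's invention, not part of the stack):

```shell
# Hypothetical helper: warn if a password variable is empty or still
# set to the "changeme" placeholder from the example .env file.
check_password() {
  name=$1
  value=$2
  if [ -z "$value" ] || [ "$value" = "changeme" ]; then
    echo "WARNING: set a strong $name in .env"
    return 1
  fi
}

check_password ELASTIC_PASSWORD "changeme" || true
check_password KIBANA_PASSWORD "s3cure-Pa55phrase" && echo "KIBANA_PASSWORD ok"
```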
2. Make sure Docker Engine is allotted at least 4GiB of memory. In Docker Desktop, configure resource usage using the Advanced tab in Preferences (macOS) or Settings (Windows).

Docker Compose is not pre-installed with Docker on Linux. See docs.docker.com for installation instructions: Install Compose on Linux
3. Run docker-compose up to bring up the cluster:

docker-compose up --remove-orphans
4. If the solution starts without errors, your deployment is ready to use:

Access Kibana at http://localhost:5601. Log in as user elastic. The password is the value you provided for ELASTIC_PASSWORD in your .env file.

Access Elasticsearch at https://localhost:9200.
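If you script the startup, you can poll the same endpoints the compose healthchecks use until they answer. A generic, hypothetical retry helper (wait_for is this guide's invention, not part of Docker or the stack):

```shell
# Poll a command until it succeeds or the retry budget runs out.
# In practice, pass the same curl the compose healthchecks use, e.g.:
#   wait_for 120 curl -s -I http://localhost:5601
wait_for() {
  retries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep 1
  done
}

wait_for 3 true && echo "service is up"
```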
If the solution does not start successfully, check the container logs for more information. Run docker logs <container_id>, where container_id is the ID of the unhealthy container (for example f6de943335cf).

If a container fails to start with a message about vm.max_map_count, refer to the following Elasticsearch documentation for platform-specific solutions: Using the Docker images in production.
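On Linux, you can check the current value directly before starting the cluster; 262144 is the minimum Elasticsearch requires. This sketch only reads and reports the value, it does not change anything:

```shell
# Report whether vm.max_map_count meets Elasticsearch's minimum (262144).
check_max_map_count() {
  required=262144
  if [ -r /proc/sys/vm/max_map_count ]; then
    current=$(cat /proc/sys/vm/max_map_count)
    if [ "$current" -lt "$required" ]; then
      echo "too low: $current (raise with: sysctl -w vm.max_map_count=$required)"
    else
      echo "ok: $current"
    fi
  else
    echo "unknown: /proc/sys/vm/max_map_count not readable on this platform"
  fi
}

check_max_map_count
```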
To stop the cluster, run docker-compose down or press Ctrl+C in your terminal.

To delete the data volumes when you bring down the cluster, specify the -v option: docker-compose down -v