Use Kibana in a production environment

How you deploy Kibana largely depends on your use case. If you are the only user, you can run Kibana on your local machine and configure it to point to whatever Elasticsearch instance you want to interact with. Conversely, if you have a large number of heavy Kibana users, you might need to load balance across multiple Kibana instances that are all connected to the same Elasticsearch instance.

While Kibana isn't terribly resource intensive, we still recommend running Kibana separately from your Elasticsearch data or master nodes. To distribute Kibana traffic across the nodes in your Elasticsearch cluster, you can configure Kibana to use a list of Elasticsearch hosts.

Use Elastic Stack security features

You can use Elastic Stack security features to control what Elasticsearch data users can access through Kibana.

When security features are enabled, Kibana users have to log in. They need to have a role granting Kibana privileges as well as access to the indices they will be working with in Kibana.

If a user loads a Kibana dashboard that accesses data in an index that they are not authorized to view, they get an error that indicates the index does not exist.

For more information on granting access to Kibana, see Granting access to Kibana.
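
As a rough sketch, one way to create such a role is through Kibana's role management API; the role name, index pattern, and space below are hypothetical, and the authoritative steps are in Granting access to Kibana:

curl -X PUT "localhost:5601/api/security/role/my_dashboard_reader" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -u elastic \
  -d '{
    "elasticsearch": {
      "indices": [{ "names": ["my-index-*"], "privileges": ["read", "view_index_metadata"] }]
    },
    "kibana": [{ "base": ["read"], "spaces": ["default"] }]
  }'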

Require Content Security Policy

Kibana uses a Content Security Policy to help prevent the browser from allowing unsafe scripting, but older browsers will silently ignore this policy. If your organization does not need to support Internet Explorer 11 or much older versions of our other supported browsers, we recommend that you enable Kibana’s strict mode for content security policy, which will block access to Kibana for any browser that does not enforce even a rudimentary set of CSP protections.

To do this, set csp.strict to true in your kibana.yml:

csp.strict: true

Enable SSL

See Encrypt TLS communications in Kibana.

Kibana does not support rolling upgrades, and deploying mixed versions of Kibana can result in data loss or upgrade failures. Please shut down all instances of Kibana before performing an upgrade, and ensure all running Kibana instances have matching versions.

Load balancing across multiple Kibana instances

To serve multiple Kibana installations behind a load balancer, you must change the configuration. See Configuring Kibana for details on each setting.

Settings unique across each Kibana instance:

server.uuid
server.name

Settings unique across each host (for example, running multiple installations on the same virtual machine):

logging.dest
path.data
pid.file
server.port

Settings that must be the same:

xpack.security.encryptionKey // decrypting session information
xpack.reporting.encryptionKey // decrypting reports
xpack.encryptedSavedObjects.encryptionKey // decrypting saved objects
xpack.encryptedSavedObjects.keyRotation.decryptionOnlyKeys // saved objects encryption key rotation, if any
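
As a sketch, a per-instance file such as config/instance1.yml (referenced below) might combine these settings; every value shown here is a placeholder:

server.name: "kibana-1"                       # unique per instance
server.uuid: "5b2f3060-1111-4bb8-9a8e-0d82db035000"   # unique per instance
server.port: 5601                             # unique per host
path.data: /var/lib/kibana/instance1          # unique per host
pid.file: /var/run/kibana/instance1.pid       # unique per host
logging.dest: /var/log/kibana/instance1.log   # unique per host

# identical on every instance behind the load balancer
xpack.security.encryptionKey: "<same secret string on all instances>"
xpack.reporting.encryptionKey: "<same secret string on all instances>"
xpack.encryptedSavedObjects.encryptionKey: "<same secret string on all instances>"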

Separate configuration files can be specified on the command line with the -c flag:

bin/kibana -c config/instance1.yml
bin/kibana -c config/instance2.yml

Accessing multiple load-balanced Kibana clusters

To access multiple load-balanced Kibana clusters from the same browser, explicitly set xpack.security.cookieName in the configuration of each Kibana instance. Use the same value for every instance within a cluster, and a different value for each cluster.

This avoids conflicts between cookies from the different Kibana clusters, and it keeps the session active if the instance currently in use fails, providing seamless high availability.
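
A minimal sketch, assuming two clusters; the cookie names are arbitrary as long as they differ between clusters:

# kibana.yml on every instance of cluster A
xpack.security.cookieName: "sid-cluster-a"

# kibana.yml on every instance of cluster B
xpack.security.cookieName: "sid-cluster-b"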

High availability across multiple Elasticsearch nodes

Kibana can be configured to connect to multiple Elasticsearch nodes in the same cluster. If a node becomes unavailable, Kibana transparently connects to an available node and continues operating. Requests to available hosts are routed in a round-robin fashion.

Currently the Console application is limited to connecting to the first node listed.

In kibana.yml:

elasticsearch.hosts:
  - http://elasticsearch1:9200
  - http://elasticsearch2:9200

Related settings include elasticsearch.sniffInterval, elasticsearch.sniffOnStart, and elasticsearch.sniffOnConnectionFault. These can be used to automatically update the list of hosts as the cluster is resized. These parameters are documented on the settings page.
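
For example, the following illustrative settings make Kibana sniff the cluster at startup, on connection faults, and once a minute (the interval is in milliseconds):

elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
elasticsearch.sniffInterval: 60000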

Memory

Kibana has a default memory limit that scales based on total memory available. In some scenarios, such as large reporting jobs, it may make sense to tweak limits to meet more specific requirements.

A limit can be defined by setting --max-old-space-size in the node.options config file found inside the kibana/config folder, or in any other folder configured with the KBN_PATH_CONF environment variable. For example, on Debian-based systems, the folder is /etc/kibana.

The option accepts a limit in MB:

--max-old-space-size=2048