Installing Shield

The Getting Started Guide steps through a basic Shield installation. This section provides additional information about the installation prerequisites and the installation process for DEB/RPM package installations, offline machines, and Tribe Nodes.

The Shield plugin must be installed on every node in the cluster, and every node must be restarted after installation. Plan for a complete cluster restart before beginning the installation process.

Shield Installation Prerequisites

  • Java 7 or later
  • Elasticsearch 1.5 or later
  • Elasticsearch License plugin

For information about installing the latest Oracle JDK, see Java SE Downloads. For information about installing Elasticsearch, see Installation in the Elasticsearch Reference.
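
Before you begin, you can verify that the prerequisites are met. For example (assuming Elasticsearch is already running on its default HTTP port):

java -version
curl -XGET 'localhost:9200'

The first command should report Java 1.7 or later; the response to the second includes the Elasticsearch version number.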

Installing Shield on a DEB/RPM Package Installation

If you use the DEB/RPM packages to install Elasticsearch, by default Elasticsearch is installed in /usr/share/elasticsearch and the configuration files are stored in /etc/elasticsearch. (For the complete list of default paths, see Directory Layout in the Elasticsearch Reference.)

To install the Shield and License plugins on a DEB/RPM package installation, you need to run bin/plugin -i from the /usr/share/elasticsearch directory with superuser permissions, and specify the location of the configuration files by setting -Des.path.conf. For example:

cd /usr/share/elasticsearch
sudo bin/plugin -i elasticsearch/license/1.0.0 -Des.path.conf=/etc/elasticsearch
sudo bin/plugin -i elasticsearch/shield/1.3.3 -Des.path.conf=/etc/elasticsearch
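
After installing the plugins on a node, restart Elasticsearch so they are picked up. On DEB/RPM installations this is typically done through the service wrapper (the exact command depends on your init system):

sudo service elasticsearch restart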

If you are using a version of Shield prior to 1.3, you also need to specify the location of the configuration files when running esusers and syskeygen.
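
For example, on a pre-1.3 DEB/RPM installation, adding a user and generating the system key would look something like this (es_admin is a placeholder username):

cd /usr/share/elasticsearch
sudo bin/shield/esusers useradd es_admin -r admin -Des.path.conf=/etc/elasticsearch
sudo bin/shield/syskeygen -Des.path.conf=/etc/elasticsearch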

Installing Shield on Offline Machines

Elasticsearch’s bin/plugin script requires direct Internet access to download and install the License and Shield plugins. If your server doesn’t have Internet access, you can manually download and install the plugins.

To install Shield on a machine that doesn’t have Internet access:

  1. Manually download the appropriate License and Shield zip files on a machine that has Internet access.

  2. Transfer the zip files to the offline machine.
  3. Run bin/plugin with the -u option to install the plugins using the zip files. For example:

    bin/plugin -i license -u file:///path/to/file/license-latest.zip 
    bin/plugin -i shield -u file:///path/to/file/shield-1.3.3.zip

    Note that you must specify an absolute path to the zip file after the file:// protocol.
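
After installing, you can confirm that both plugins are present by listing the installed plugins with the plugin script’s list option:

bin/plugin -l

The output should include both license and shield.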

Installing Shield on Tribe Nodes

Shield supports the Tribe Node, which acts as a federated client across multiple clusters. When using Tribe Nodes with Shield, you must have the same Shield configuration (users, roles, user-role mappings, SSL/TLS CA) on each cluster and on the Tribe Node itself, where security checking is primarily done. This also means that all of the connected clusters must be running Shield.

To use a Tribe Node with Shield:

  1. Configure the same privileges on all connected clusters. The Tribe Node has its own configuration and privileges, which need to grant access to actions and indices on all of the connected clusters. In addition, each cluster needs to grant access to indices that belong to the other connected clusters.

    For example, assume we have two clusters, cluster1 and cluster2, each holding a single index: index1 and index2, respectively. The following search request targets both clusters:

    curl -XGET tribe_node:9200/index1,index2/_search -u tribe_user:tribe_user

    This request requires search privileges for both index1 and index2 on the Tribe Node:

    tribe_user:
      indices:
        'index*': search

    The same privileges also need to be granted on the connected clusters: cluster1 has to grant access to index2 even though index2 only exists on cluster2, and the same applies to index1 on cluster2. This holds for any indices action. Cluster state read operations (e.g. the cluster state API, the get mapping API, etc.) are always executed locally on the Tribe Node to ensure that the merged cluster state is returned, so their privileges are required on the Tribe Node only. A sketch of the resulting role configuration appears after this list.

  2. Use the same system key on all clusters. For message authentication to work properly across multiple clusters, the Tribe Node and all of the connected clusters need to share the same system key.
  3. Enable encryption globally. Encrypted communication via SSL/TLS can only be enabled globally, meaning that either all of the connected clusters and the Tribe Node have SSL enabled, or none of them do.
  4. Use the same certification authority on all clusters. When using encrypted communication, for simplicity, we recommend that all of the connected clusters and the Tribe Node use the same certification authority to generate their certificates.
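
To make the first requirement concrete, here is a sketch of the role entry from the example above as it would appear in Shield’s roles.yml. The same entry must be present on cluster1, cluster2, and the Tribe Node, even though each cluster hosts only one of the two indices:

# roles.yml on cluster1, cluster2, and the Tribe Node (identical on all three);
# cluster1 grants index2 even though index2 only lives on cluster2, and vice versa
tribe_user:
  indices:
    'index*': search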

Tribe Node Example

Let’s walk through a complete example of how to use the Tribe Node with Shield and the configuration required. First of all, the Shield and License plugins need to be installed and enabled on all of the clusters and on the Tribe Node.

The system key needs to be generated on one node, as described in Enabling Message Authentication, and then copied over to all of the other nodes in each cluster and the Tribe Node itself.
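
A sketch of that process, assuming the default configuration layout (adjust the paths to match your installation):

bin/shield/syskeygen
# syskeygen writes the key to CONFIG_DIR/shield/system_key;
# copy it to the same location on every other node, for example:
scp config/shield/system_key user@node1:/etc/elasticsearch/shield/system_key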

Each cluster can have its own users with admin privileges; those users don’t need to exist on the Tribe Node. In fact, administration tasks (e.g. creating an index) cannot be performed through the Tribe Node; they need to be sent directly to the corresponding cluster. The users that need to be created on the Tribe Node are those used to retrieve the data merged from the different clusters through the Tribe Node itself. For instance, let’s create a tribe_user user, with role user, that has read privileges on any index:

./bin/shield/esusers useradd tribe_user -p tribe_user -r user

The above command needs to be executed on each cluster, since the same user needs to be present on the Tribe Node as well as on every connected cluster.
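
The user role referenced above must exist everywhere as well. A minimal sketch of the corresponding roles.yml entry, modeled on the read-only role shipped in Shield’s default roles.yml:

# roles.yml: a role granting read-only access to every index
user:
  indices:
    '*': read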

The following is the configuration required on the Tribe Node, which needs to be added to elasticsearch.yml. Elasticsearch allows you to specify settings per cluster. We disable multicast discovery and configure the proper unicast discovery hosts for each cluster, as well as their cluster names:

tribe:
  t1:
    cluster.name: tribe1
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["tribe1:9300"]
  t2:
    cluster.name: tribe2
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["tribe2:9300"]

The Tribe Node can then be started. Once initialized, it is ready to accept requests like the following search, which returns documents from all of the connected clusters:

curl -XGET localhost:9200/_search -u tribe_user:tribe_user

As for encrypted communication, the required settings are the same as those described in Securing Communications with Encryption and IP Filtering, but they need to be specified per tribe, as we did for the discovery settings above.
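
For instance, a sketch of per-tribe SSL settings, assuming the standard shield.transport.ssl and shield.ssl.keystore.* settings (paths and passwords are placeholders):

tribe:
  t1:
    shield.transport.ssl: true
    shield.ssl.keystore.path: /path/to/keystore
    shield.ssl.keystore.password: changeme
  t2:
    shield.transport.ssl: true
    shield.ssl.keystore.path: /path/to/keystore
    shield.ssl.keystore.password: changeme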