Appendix 5. Tribe Node

Shield supports the Tribe Node, which acts as a federated client across multiple clusters. When using the Tribe Node with Shield, you must have the same Shield configuration (users, roles, user-role mappings, SSL/TLS CA) on each cluster and on the Tribe Node itself, where security checking is primarily done. This, of course, also means that all connected clusters must be running Shield. The following are the current limitations to keep in mind when using the Tribe Node in combination with Shield.

Same privileges on all connected clusters

The Tribe Node has its own configuration and privileges, which need to grant access to actions and indices on all of the connected clusters. In addition, each cluster needs to grant access to indices belonging to the other connected clusters.

Let’s look at an example: assume we have two clusters, cluster1 and cluster2, each holding a single index, index1 and index2 respectively. A search request that targets both clusters, such as

curl -XGET tribe_node:9200/index1,index2/_search -u tribe_user:tribe_user

requires search privileges for both index1 and index2 on the Tribe Node:

tribe_user:
  indices:
    'index*': search

The same privileges also need to be granted on the connected clusters: cluster1 has to grant access to index2 even though index2 only exists on cluster2, and vice versa for index1 on cluster2. This holds for any indices action. Cluster state read operations (e.g. the cluster state API, get mapping API, etc.) are always executed locally on the Tribe Node, to make sure that the merged cluster state gets returned; their privileges are therefore required on the Tribe Node only.
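
As a minimal sketch, assuming file-based roles, the roles.yml on both clusters would carry the same entry that appears on the Tribe Node, even where one of the matched indices is not local to the cluster:

# roles.yml on cluster1 and on cluster2 alike (hypothetical excerpt);
# on cluster1 the 'index*' pattern also covers index2, which only
# exists on cluster2, and vice versa
tribe_user:
  indices:
    'index*': search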

Same system key on all clusters

In order for message authentication to work properly across multiple clusters, the Tribe Node and all of the connected clusters need to share the same system key.
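
A minimal sketch of generating the key, assuming the syskeygen tool shipped with Shield and its default output location:

# run once on a single node; the key is written to the Shield
# config directory (config/shield/system_key by default)
./bin/shield/syskeygen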

Encrypted communication

Encrypted communication via SSL can only be enabled globally, meaning that either all of the connected clusters and the Tribe Node have SSL enabled, or none of them do.
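
As a sketch, assuming transport encryption is toggled via Shield's shield.transport.ssl setting, the same value would have to appear in the elasticsearch.yml of every node in every connected cluster and on the Tribe Node:

# elasticsearch.yml on every node (sketch; keystore settings omitted)
shield.transport.ssl: true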

Same certification authority on all clusters

When using encrypted communication, we recommend for simplicity that all of the connected clusters and the Tribe Node use the same certification authority to generate their certificates.
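
A minimal sketch of what that looks like in practice, assuming plain openssl is used to run the certification authority (all file names are illustrative):

# create the CA once (illustrative file names)
openssl req -new -x509 -days 3650 -keyout ca.key -out ca.crt

# sign every node's certificate signing request with that same CA,
# regardless of which cluster the node belongs to
openssl x509 -req -in node1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out node1.crt -days 3650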

Example

Let’s see a complete example of how to use the Tribe Node with Shield and the configuration required. First of all, the Shield and License plugins need to be installed and enabled on all clusters and on the Tribe Node.
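
A sketch of the installation, assuming an Elasticsearch 2.x layout where plugins are installed through the bin/plugin script:

# run on every node of every cluster and on the Tribe Node
./bin/plugin install license
./bin/plugin install shield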

The system key needs to be generated on one node, as described in the Getting Started section, and then copied over to all of the other nodes in each cluster and the Tribe Node itself.
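
For example, the distribution step could look like the following, with host names and paths purely illustrative:

# copy the generated key to every other node (illustrative hosts/paths)
for host in node1 node2 tribe_node; do
  scp config/shield/system_key $host:/path/to/config/shield/
done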

Each cluster can have its own users with admin privileges; these don’t need to be present on the Tribe Node too. In fact, administration tasks (e.g. create index) cannot be performed through the Tribe Node and need to be sent directly to the corresponding cluster. The users that need to be created on the Tribe Node are those used to retrieve data merged from the different clusters through the Tribe Node itself. Let’s, for instance, create a tribe_user user with role user, which has read privileges on any index, as follows:

./bin/shield/esusers useradd tribe_user -p tribe_user -r user

The above command needs to be executed on each cluster, since the same user needs to be present on the Tribe Node as well as on every connected cluster.
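
For reference, a sketch of what such a user role could look like in roles.yml, granting read access to any index (Shield ships a default role along these lines):

user:
  indices:
    '*': read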

The following is the configuration required on the Tribe Node, which needs to be added to elasticsearch.yml. Elasticsearch allows specific settings to be listed per cluster. We disable multicast discovery as described in the Disable Multicast section and configure the proper unicast discovery hosts for each cluster, as well as their cluster names:

tribe:
  t1:
    cluster.name: tribe1
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["tribe1:9300"]
  t2:
    cluster.name: tribe2
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["tribe2:9300"]

The Tribe Node can then be started; once initialized, it will be ready to accept requests like the following search, which returns documents coming from the different connected clusters:

curl -XGET localhost:9200/_search -u tribe_user:tribe_user

As for encrypted communication, the required settings are the same as those described in Securing Nodes, but they need to be specified per tribe, as we did for the discovery settings above.
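
As a sketch, reusing the SSL settings from Securing Nodes with illustrative keystore paths and passwords, the per-tribe layout would look like:

tribe:
  t1:
    cluster.name: tribe1
    shield.transport.ssl: true
    shield.ssl.keystore.path: /path/to/tribe1/keystore  # illustrative
    shield.ssl.keystore.password: changeme              # illustrative
  t2:
    cluster.name: tribe2
    shield.transport.ssl: true
    shield.ssl.keystore.path: /path/to/tribe2/keystore
    shield.ssl.keystore.password: changeme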