Configuration Properties

Once the plugin is installed, define the configuration for the HDFS repository through the REST API:

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
  "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}

The following settings are supported:

uri

The URI address for HDFS, e.g. "hdfs://<host>:<port>/". (Required)

path

The file path within the filesystem where data is stored/loaded, e.g. "path/to/file". (Required)

load_defaults

Whether to load the default Hadoop configuration or not. (Enabled by default)

conf.<key>

Inlined configuration parameter added to the Hadoop configuration. (Optional) Only client-oriented properties from the Hadoop core and hdfs configuration files are recognized by the plugin.

compress

Whether to compress the metadata or not. (Disabled by default)

chunk_size

Override the chunk size. (Disabled by default)
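
A registration request combining several of the optional settings above might look like the following (the URI, path, and chunk size here are illustrative values, not defaults):

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "load_defaults": "true",
    "conf.dfs.client.read.shortcircuit": "true",
    "compress": "true",
    "chunk_size": "10mb"
  }
}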

Alternatively, you can define the hdfs repository and its settings in your elasticsearch.yml:

repositories:
  hdfs:
    uri: "hdfs://<host>:<port>/"    # required - HDFS address only
    path: "some/path"               # required - path within the file-system where data is stored/loaded
    load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
    conf.<key> : "<value>"          # optional - 'inlined' key=value added to the Hadoop configuration
    compress: "false"               # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"              # optional - chunk size (disabled by default)
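
Once the repository is registered (via either method), it can be checked with the standard snapshot repository verification API; for example (assuming the repository name used above):

POST _snapshot/my_hdfs_repository/_verify

A successful response lists the nodes that were able to reach the repository; a failure typically indicates a connectivity or Hadoop configuration problem.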