Configuration Properties
Once installed, define the configuration for the hdfs repository through the REST API:
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}
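Once the repository is registered, it can be checked and used with the standard snapshot APIs. The requests below are a minimal sketch; the snapshot name snapshot_1 is only an example.

POST _snapshot/my_hdfs_repository/_verify

PUT _snapshot/my_hdfs_repository/snapshot_1?wait_for_completion=true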
The following settings are supported:
uri           | The uri address for hdfs. ex: "hdfs://<host>:<port>/". (Required)
path          | The file path within the filesystem where data is stored/loaded. ex: "path/to/file". (Required)
load_defaults | Whether to load the default Hadoop configuration or not. (Enabled by default)
conf.<key>    | Inlined configuration parameter to be added to the Hadoop configuration. (Optional) Only client-oriented properties from the hadoop core and hdfs configuration files will be recognized by the plugin.
compress      | Whether to compress the metadata or not. (Disabled by default)
chunk_size    | Override the chunk size. (Disabled by default)
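Several of these settings can of course be combined in a single repository definition. The request below is an illustrative sketch only; the values chosen for load_defaults, compress and chunk_size are examples, not recommendations.

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "load_defaults": "true",
    "conf.dfs.client.read.shortcircuit": "true",
    "compress": "true",
    "chunk_size": "10mb"
  }
}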
Alternatively, you can define the hdfs repository and its settings in your elasticsearch.yml:
repositories:
  hdfs:
    uri: "hdfs://<host>:<port>/"   # required - HDFS address only
    path: "some/path"              # required - path within the file-system where data is stored/loaded
    load_defaults: "true"          # optional - whether to load the default Hadoop configuration (default) or not
    conf.<key>: "<value>"          # optional - 'inlined' key=value added to the Hadoop configuration
    compress: "false"              # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"             # optional - chunk size (disabled by default)
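However the repository is defined, restoring from it uses the standard restore API. A minimal sketch, assuming a snapshot named snapshot_1 already exists in the repository and contains an index named index_1:

POST _snapshot/my_hdfs_repository/snapshot_1/_restore
{
  "indices": "index_1"
}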