Configuration properties
Once installed, define the configuration for the hdfs repository through the REST API:
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}
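After registering the repository, you can check that every node is able to reach it with the verify repository API, using the my_hdfs_repository name from the example above:

POST _snapshot/my_hdfs_repository/_verify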
The following settings are supported:
uri
    The uri address for hdfs. ex: "hdfs://<host>:<port>/". (Required)

path
    The file path within the filesystem where data is stored/loaded. ex: "path/to/file". (Required)

load_defaults
    Whether to load the default Hadoop configuration or not. (Enabled by default)

conf.<key>
    Inlined configuration parameter to be added to the Hadoop configuration. (Optional) Only client oriented properties from the hadoop core and hdfs configuration files will be recognized by the plugin.

compress
    Whether to compress the metadata or not. (Enabled by default)

max_restore_bytes_per_sec
    Throttles per node restore rate. Defaults to unlimited. Note that restores are also throttled through recovery settings.

max_snapshot_bytes_per_sec
    Throttles per node snapshot rate. Defaults to 40mb per second.

readonly
    Makes repository read-only. Defaults to false.

chunk_size
    Override the chunk size. (Disabled by default)

security.principal
    Kerberos principal to use when connecting to a secured HDFS cluster. If you are using a service principal for your elasticsearch node, you may use the _HOST pattern notation to have the hostname of the node substituted at runtime.

replication_factor
    The replication factor for all new HDFS files created by this repository. Must be greater or equal to dfs.replication.min and less or equal to dfs.replication.max in the HDFS configuration. Defaults to the HDFS cluster setting.
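As an illustration of how these settings combine, the sketch below registers a repository with metadata compression, a custom chunk size, a snapshot rate limit, and a Kerberos principal. The repository name, URI, principal, and values here are placeholders, not recommendations:

PUT _snapshot/my_secure_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_secure_hdfs_repository",
    "compress": "true",
    "chunk_size": "1gb",
    "max_snapshot_bytes_per_sec": "20mb",
    "security.principal": "elasticsearch/_HOST@REALM"
  }
}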
A note on HDFS availability
When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it attempts to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, all nodes in the cluster must be able to reach HDFS when starting. If they cannot, a node will fail to initialize the repository at startup and the repository will be unusable. If this happens, you will need to remove and re-add the repository or restart the offending node.
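If you need to re-register the repository after such a failure, the sequence is a delete followed by a fresh registration. A minimal sketch, reusing the repository name and settings from the example above:

DELETE _snapshot/my_hdfs_repository

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}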