NGram Tokenizer

A tokenizer of type `nGram`.
The following settings can be set for an `nGram` tokenizer:
Setting | Description | Default value |
---|---|---|
`min_gram` | Minimum size in codepoints of a single n-gram | `1` |
`max_gram` | Maximum size in codepoints of a single n-gram | `2` |
`token_chars` | Character classes to keep in the tokens; Elasticsearch will split on characters that don't belong to any of these classes (since 0.90.2) | `[]` (keep all characters) |
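Because `min_gram` and `max_gram` default to `1` and `2`, the default behavior can be observed without creating an index by pointing the analyze API at the built-in tokenizer. A minimal sketch (a node on `localhost:9200` is assumed; token order may vary between versions):

```
curl 'localhost:9200/_analyze?tokenizer=nGram&pretty=1' -d 'FC'
# with the defaults (min_gram 1, max_gram 2): F, FC, C
```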
`token_chars` accepts the following character classes:

Character class | Example characters |
---|---|
`letter` | for example `a`, `b`, `ï` or `京` |
`digit` | for example `3` or `7` |
`whitespace` | for example `" "` or `"\n"` |
`punctuation` | for example `!` or `"` |
`symbol` | for example `$` or `√` |
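The splitting effect of `token_chars` can be seen in isolation by keeping only a single class. A hypothetical index (the name `digits_test` is illustrative, not from the original docs) that keeps only the `digit` class treats every non-digit character as a boundary:

```
curl -XPUT 'localhost:9200/digits_test' -d '
{
    "settings" : {
        "analysis" : {
            "analyzer" : {
                "my_digit_analyzer" : { "tokenizer" : "my_digit_tokenizer" }
            },
            "tokenizer" : {
                "my_digit_tokenizer" : {
                    "type" : "nGram",
                    "min_gram" : "1",
                    "max_gram" : "2",
                    "token_chars" : [ "digit" ]
                }
            }
        }
    }
}'

curl 'localhost:9200/digits_test/_analyze?pretty=1&analyzer=my_digit_analyzer' -d 'FC Schalke 04'
# only the digit run 04 survives: 0, 04, 4
```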
Example
```
curl -XPUT 'localhost:9200/test' -d '
{
    "settings" : {
        "analysis" : {
            "analyzer" : {
                "my_ngram_analyzer" : {
                    "tokenizer" : "my_ngram_tokenizer"
                }
            },
            "tokenizer" : {
                "my_ngram_tokenizer" : {
                    "type" : "nGram",
                    "min_gram" : "2",
                    "max_gram" : "3",
                    "token_chars": [ "letter", "digit" ]
                }
            }
        }
    }
}'

curl 'localhost:9200/test/_analyze?pretty=1&analyzer=my_ngram_analyzer' -d 'FC Schalke 04'
# FC, Sc, Sch, ch, cha, ha, hal, al, alk, lk, lke, ke, 04
```

Because whitespace is not among the configured `token_chars` classes, the input is first split into `FC`, `Schalke` and `04`, so no n-gram spans a space.
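For reference, the analyze API returns each gram with its offsets and position rather than the flat list shown in the comment above; a sketch of the response (abbreviated, assuming the 0.90-era response format):

```
{
  "tokens" : [ {
    "token" : "FC",
    "start_offset" : 0,
    "end_offset" : 2,
    "type" : "word",
    "position" : 1
  },
  ... ]
}
```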