Limitations of the max_gram parameter
The edge_ngram tokenizer’s max_gram value limits the character length of
tokens. When the edge_ngram tokenizer is used with an index analyzer, this
means search terms longer than the max_gram length may not match any indexed
terms.
For example, if the max_gram is 3, searches for apple won’t match the
indexed term app.
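A minimal sketch of this behavior, using an illustrative index and analyzer name rather than the configuration shown later on this page: an edge_ngram tokenizer with a max_gram of 3 never emits a token longer than three characters, so apple is indexed only as its prefixes.
PUT short_gram_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "short_gram_analyzer": {
          "tokenizer": "short_gram_tokenizer"
        }
      },
      "tokenizer": {
        "short_gram_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 3
        }
      }
    }
  }
}
POST short_gram_index/_analyze
{
  "analyzer": "short_gram_analyzer",
  "text": "apple"
}
The _analyze request returns only [ a, ap, app ]. A search analyzer that leaves the query term apple intact therefore has nothing to match against.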
To account for this, you can use the
truncate token filter with a search analyzer
to shorten search terms to the max_gram character length. However, this could
return irrelevant results.
For example, if the max_gram is 3 and search terms are truncated to three
characters, the search term apple is shortened to app. This means searches
for apple return any indexed terms matching app, such as apply, snapped,
and apple.
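One way to wire this up, sketched here with illustrative index, analyzer, and filter names, is to pair the index-time edge_ngram analyzer with a search analyzer whose truncate filter uses the same length as the max_gram:
PUT truncated_search_index
{
  "settings": {
    "analysis": {
      "filter": {
        "truncate_to_gram": {
          "type": "truncate",
          "length": 3
        }
      },
      "tokenizer": {
        "gram_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 3
        }
      },
      "analyzer": {
        "gram_index": {
          "tokenizer": "gram_tokenizer",
          "filter": [
            "lowercase"
          ]
        },
        "gram_search": {
          "tokenizer": "lowercase",
          "filter": [
            "truncate_to_gram"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "gram_index",
        "search_analyzer": "gram_search"
      }
    }
  }
}
With this mapping the search term apple is truncated to app before the lookup, so it can match documents that only produced the shorter app gram at index time, with the relevance tradeoff described above.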
We recommend testing both approaches to see which best fits your use case and desired search experience.
Example configuration
In this example, we configure the edge_ngram tokenizer to treat letters and
digits as tokens, and to produce grams with minimum length 2 and maximum
length 10:
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
The above example produces the following terms:
[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ]
Usually we recommend using the same analyzer at index time and at search
time. In the case of the edge_ngram tokenizer, the advice is different. It
only makes sense to use the edge_ngram tokenizer at index time, to ensure
that partial words are available for matching in the index. At search time,
just search for the terms the user has typed in, for instance: Quick Fo.
Below is an example of how to set up a field for search-as-you-type.
Note that the max_gram value for the index analyzer is 10, which limits
indexed terms to 10 characters. Search terms are not truncated, meaning that
search terms longer than 10 characters may not match any indexed terms.
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}
PUT my_index/_doc/1
{
  "title": "Quick Foxes"
}

POST my_index/_refresh

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo",
        "operator": "and"
      }
    }
  }
}
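Here the autocomplete analyzer indexes the terms [ qu, qui, quic, quick, fo, fox, foxe, foxes ] for Quick Foxes, while the autocomplete_search analyzer tokenizes the query into the terms [ quick, fo ]. Both query terms appear in the index, so the match query succeeds even with the and operator.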