kuromoji analyzer

The kuromoji analyzer consists of the following tokenizer and token filters:

- kuromoji_tokenizer tokenizer
- kuromoji_baseform token filter
- kuromoji_part_of_speech token filter
- cjk_width token filter
- ja_stop token filter
- kuromoji_stemmer token filter
- lowercase token filter

It supports the mode and user_dictionary settings from kuromoji_tokenizer.
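For example, you can define a custom analyzer of type kuromoji and set the tokenizer's segmentation mode directly on it (the index and analyzer names below are illustrative):

PUT my-index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_kuromoji_analyzer": {
            "type": "kuromoji",
            "mode": "search"
          }
        }
      }
    }
  }
}

The user_dictionary setting can be passed the same way, as a path to a custom dictionary file in the config directory.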

Normalize full-width characters

The kuromoji_tokenizer tokenizer uses characters from the MeCab-IPADIC dictionary to split text into tokens. The dictionary includes some full-width characters, such as ｏ and ｆ. If a text contains full-width characters, the tokenizer can produce unexpected tokens.

For example, the kuromoji_tokenizer tokenizer converts the text Culture ｏｆ Japan, written with a full-width ｏｆ, to the tokens [ culture, o, f, japan ] instead of [ culture, of, japan ].

To avoid this, add the icu_normalizer character filter to a custom analyzer based on the kuromoji analyzer. The icu_normalizer character filter converts full-width characters to their normal equivalents.
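Outside Elasticsearch, you can illustrate the same mapping with Python's standard unicodedata module. This is only a sketch of the underlying Unicode normalization: by default the icu_normalizer character filter applies the nfkc_cf form, which is NFKC normalization plus case folding.

```python
import unicodedata

# "of" written with full-width Latin letters, as found in the MeCab-IPADIC dictionary
fullwidth = "Culture ｏｆ Japan"

# NFKC normalization maps full-width characters to their ASCII equivalents,
# which is the compatibility mapping icu_normalizer relies on
normalized = unicodedata.normalize("NFKC", fullwidth)

print(normalized)  # → Culture of Japan
```

After this normalization, the tokenizer sees a plain ASCII "of" and no longer splits it into separate characters.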

First, duplicate the kuromoji analyzer to create the basis for a custom analyzer. Then add the icu_normalizer character filter to the custom analyzer. For example:

PUT index-00001
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "kuromoji_normalize": {                 
            "char_filter": [
              "icu_normalizer"                    
            ],
            "tokenizer": "kuromoji_tokenizer",
            "filter": [
              "kuromoji_baseform",
              "kuromoji_part_of_speech",
              "cjk_width",
              "ja_stop",
              "kuromoji_stemmer",
              "lowercase"
            ]
          }
        }
      }
    }
  }
}

This request creates a new custom analyzer, kuromoji_normalize, based on the kuromoji analyzer's tokenizer and token filters, and adds the icu_normalizer character filter to it.
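To verify the behavior, you can run the _analyze API against the new index (assuming the index-00001 example above has been created):

GET index-00001/_analyze
{
  "analyzer": "kuromoji_normalize",
  "text": "Culture ｏｆ Japan"
}

With the icu_normalizer character filter in place, the full-width ｏｆ should now be emitted as the single token of rather than the separate tokens o and f.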