Analysis


The index analysis module acts as a configurable registry of analyzers that are used to break down indexed (analyzed) fields when a document is indexed, as well as to process query strings. It maps to the Lucene Analyzer.

Analyzers are generally composed of a single Tokenizer and zero or more TokenFilters. A set of CharFilters can also be associated with an analyzer to process the characters before any other analysis steps. The analysis module allows you to register TokenFilters, Tokenizers, and Analyzers under logical names that can then be referenced either in mapping definitions or in certain APIs, as shown in the sketch below. The analysis module automatically registers built-in analyzers, token filters, and tokenizers if they are not explicitly defined.
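As a rough illustration of this registration pattern, the following sketch defines a custom analyzer in the index settings and then references it by name in a field mapping. The index name my_index, the analyzer name my_analyzer, and the title field are hypothetical; the char filter (html_strip), tokenizer (standard), and token filter (lowercase) are built-in components, and the exact request shape may vary by Elasticsearch version.

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}
```

Here the char filter runs first (stripping HTML markup), the tokenizer then splits the text into tokens, and the token filter lowercases each token; the logical name my_analyzer is what the mapping refers to.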

See Analysis for configuration details.