[[analysis-normalizers]]
== Normalizers

Normalizers are similar to analyzers except that they may only emit a single
token. As a consequence, they do not have a tokenizer and only accept a subset
of the available char filters and token filters. Only the filters that work on
a per-character basis are allowed. For instance a lowercasing filter would be
allowed, but not a stemming filter, which needs to look at the keyword as a
whole. The current list of filters that can be used in a normalizer is the
following: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
`cjk_width`, `decimal_digit`, `elision`, `german_normalization`,
`hindi_normalization`, `indic_normalization`, `lowercase`,
`persian_normalization`, `scandinavian_folding`, `serbian_normalization`,
`sorani_normalization`, `uppercase`.

[float]
=== Custom normalizers

Elasticsearch does not ship with built-in normalizers so far, so the only way
to get one is by building a custom one. Custom normalizers take a list of
<<analysis-charfilters,character filters>> and a list of
<<analysis-tokenfilters,token filters>>.

[source,console]
--------------------------------
PUT index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "quote": {
          "type": "mapping",
          "mappings": [
            "« => \"",
            "» => \""
          ]
        }
      },
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "char_filter": ["quote"],
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}
--------------------------------
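The effect of a custom normalizer can be inspected with the `_analyze` API,
which accepts a `normalizer` parameter. As a sketch, assuming an index with a
`my_normalizer` normalizer like the one above exists, a request such as the
following should return a single token, with the guillemets mapped to plain
quotes, the text lowercased, and accents folded away:

[source,console]
--------------------------------
GET index/_analyze
{
  "normalizer": "my_normalizer",
  "text": "«BÀR»"
}
--------------------------------

Because normalizers emit exactly one token, the response contains a single
entry in its `tokens` array, which is the value that would be indexed for a
`keyword` field using this normalizer.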