[[analysis-normalizers]]
== Normalizers

Normalizers are similar to analyzers except that they may only emit a single
token. As a consequence, they do not have a tokenizer and only accept a subset
of the available char filters and token filters. Only the filters that work on
a per-character basis are allowed. For instance, a lowercasing filter would be
allowed, but not a stemming filter, which needs to look at the keyword as a
whole. The current list of filters that can be used in a normalizer is the
following: `arabic_normalization`, `asciifolding`, `bengali_normalization`,
`cjk_width`, `decimal_digit`, `elision`, `german_normalization`,
`hindi_normalization`, `indic_normalization`, `lowercase`,
`persian_normalization`, `scandinavian_folding`, `serbian_normalization`,
`sorani_normalization`, `uppercase`.

Elasticsearch ships with a `lowercase` built-in normalizer. For other forms of
normalization, a custom configuration is required.

[discrete]
=== Custom normalizers

Custom normalizers take a list of
<<analysis-charfilters,character filters>> and a list of
<<analysis-tokenfilters,token filters>>.

[source,console]
--------------------------------
PUT index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "quote": {
          "type": "mapping",
          "mappings": [
            "« => \"",
            "» => \""
          ]
        }
      },
      "normalizer": {
        "my_normalizer": {
          "type": "custom",
          "char_filter": ["quote"],
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "normalizer": "my_normalizer"
      }
    }
  }
}
--------------------------------
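
To see what a normalizer emits, you can run it against sample text with the
analyze API, passing the normalizer by name. The sketch below assumes the
`index` and `my_normalizer` defined above and uses an arbitrary example
string:

[source,console]
--------------------------------
GET index/_analyze
{
  "normalizer": "my_normalizer",
  "text": "BÀR"
}
--------------------------------

With the configuration above, the response should contain a single token,
`bar`, since the input is lowercased and then ASCII-folded.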