| 12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911011111211311411511611711811912012112212312412512612712812913013113213313413513613713813914014114214314414514614714814915015115215315415515615715815916016116216316416516616716816917017117217317417517617717817918018118218318418518618718818919019119219319419519619719819920020120220320420520620720820921021121221321421521621721821922022122222322422522622722822923023123223323423523623723823924024124224324424524624724824925025125225325425525625725825926026126226326426526626726826927027127227327427527627727827928028128228328428528628728828929029129229329429529629729829930030130230330430530630730830931031131231331431531631731831932032132232332432532632732832933033133233333433533633733833934034134234334434534634734834935035135235335435535635735835936036136236336436536636736836937037137237337437537637737837938038138238338438538638738838939039139239339439539639739839940040140240340440540640740840941041141241341441541641741841942042142242342442542642742842943043143243343443543643743843944044144244344444544644744844945045145245345445545645745845946046146246346446546646746846947047147247347447547647747847948048148248348448548648748848949049149249349449549649749849950050150250350450550650750850951051151251351451551651751851952052152252352452552652752852953053153253353453553653753853954054154254354454554654754854955055155255355455555655755855956056156256356456556656756856957057157257357457557657757857958058158258358458558658758858959059159259359459559659759859960060160260360460560660760860961061161261361461561661761861962062162262362462562662762862963063163263363463563663763863964064164264364464564664764864965065165265365465565665765865966066166266366466566666766866967067167267367467567667767867968068168268368468568668768868969069169269369469569669769869970070170270370470570670770870971071171271371471571671771871972072172272372472572672772872973073173273373473573673773873974074174274374474574674774874975075175275375475575675775875976076176276376476576676776876977077177277377477577677777877978078178278378478578678778878979079179279379479579679779879980080180280380480580680780880981081181281381481581681781881982082182282382482582682782882983083183283383483583683783883984084184284384484584684784884985085185285385485585685785885986086186286386486586686786886987087187287387487587687787887988088188288388488588688788888989089189289389489589689789889990090190290390490590690790890991091191291391491591691791891992092192292392492592692792892993093193293393493593693793893994094194294394494594694794894995095195295395495595695795895996096196296396496596696796896997097197297397497597697797897998098198298398498598698798898999099199299399499599699799899910001001100210031004100510061007100810091010101110121013101410151016101710181019102010211022102310241025102610271028102910301031103210331034103510361037103810391040104110421043104410451046104710481049105010511052105310541055105610571058105910601061106210631064106510661067106810691070107110721073107410751076107710781079108010811082108310841085108610871088108910901091109210931094109510961097109810991100110111021103110411051106110711081109111011111112111311141115111611171118111911201121112211231124112511261127112811291130113111321133113411351136113711381139114011411142114311441145114611471148114911501151115211531154115511561157115811591160116111621163116411651166116711681169117011711172117311741175117611771178117911801181118211831184118511861187118811891190119111921193119411951196119711981199120012011202120312041205120612071208120912101211121212131214121512161217121812191220122112221223122412251226122712281229123012311232123312341235123612371238123912401241124212431244124512461247124812491250125112521253125412551256125712581259126012611262126312641265126612671268126912701271127212731274127512761277127812791280128112821283128412851286128712881289129012911292129312941295129612971298129913001301130213031304130513061307130813091310131113121313131413151316131713181319132013211322132313241325132613271328132913301331133213331334133513361337133813391340134113421343134413451346134713481349135013511352135313541355135613571358135913601361136213631364136513661367136813691370137113721373137413751376137713781379138013811382138313841385138613871388138913901391139213931394139513961397139813991400140114021403140414051406140714081409141014111412141314141415141614171418141914201421142214231424142514261427142814291430143114321433143414351436143714381439144014411442144314441445144614471448144914501451145214531454145514561457145814591460146114621463146414651466146714681469147014711472147314741475147614771478147914801481148214831484148514861487148814891490149114921493149414951496149714981499150015011502150315041505150615071508150915101511151215131514151515161517151815191520152115221523152415251526 | [[analysis-lang-analyzer]]=== Language AnalyzersA set of analyzers aimed at analyzing specific language text. Thefollowing types are supported:<<arabic-analyzer,`arabic`>>,<<armenian-analyzer,`armenian`>>,<<basque-analyzer,`basque`>>,<<brazilian-analyzer,`brazilian`>>,<<bulgarian-analyzer,`bulgarian`>>,<<catalan-analyzer,`catalan`>>,<<chinese-analyzer,`chinese`>>,<<cjk-analyzer,`cjk`>>,<<czech-analyzer,`czech`>>,<<danish-analyzer,`danish`>>,<<dutch-analyzer,`dutch`>>,<<english-analyzer,`english`>>,<<finnish-analyzer,`finnish`>>,<<french-analyzer,`french`>>,<<galician-analyzer,`galician`>>,<<german-analyzer,`german`>>,<<greek-analyzer,`greek`>>,<<hindi-analyzer,`hindi`>>,<<hungarian-analyzer,`hungarian`>>,<<indonesian-analyzer,`indonesian`>>,<<irish-analyzer,`irish`>>,<<italian-analyzer,`italian`>>,<<latvian-analyzer,`latvian`>>,<<norwegian-analyzer,`norwegian`>>,<<persian-analyzer,`persian`>>,<<portuguese-analyzer,`portuguese`>>,<<romanian-analyzer,`romanian`>>,<<russian-analyzer,`russian`>>,<<sorani-analyzer,`sorani`>>,<<spanish-analyzer,`spanish`>>,<<swedish-analyzer,`swedish`>>,<<turkish-analyzer,`turkish`>>,<<thai-analyzer,`thai`>>.==== Configuring language analyzers===== StopwordsAll analyzers support setting custom `stopwords` either internally inthe config, or by using an external stopwords file by setting`stopwords_path`. Check <<analysis-stop-analyzer,Stop Analyzer>> formore details.===== Excluding words from stemmingThe `stem_exclusion` parameter allows you to specify an arrayof lowercase words that should not be stemmed.  Internally, thisfunctionality is implemented by adding the<<analysis-keyword-marker-tokenfilter,`keyword_marker` token filter>>with the `keywords` set to the value of the `stem_exclusion` parameter.The following analyzers support setting custom `stem_exclusion` list:`arabic`, `armenian`, `basque`, `catalan`, `bulgarian`, `catalan`,`czech`, `finnish`, `dutch`, `english`, `finnish`, `french`, `galician`,`german`, `irish`, `hindi`, `hungarian`, `indonesian`, `italian`, `latvian`, `norwegian`,`portuguese`, `romanian`, `russian`, `sorani`, `spanish`, `swedish`, `turkish`.==== Reimplementing language analyzersThe built-in language analyzers can be reimplemented as `custom` analyzers(as described below) in order to customize their behaviour.NOTE: If you do not intend to exclude words from being stemmed (theequivalent of the `stem_exclusion` parameter above), then you should removethe `keyword_marker` token filter from the custom analyzer configuration.[[arabic-analyzer]]===== `arabic` analyzerThe `arabic` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "arabic_stop": {          "type":       "stop",          "stopwords":  "_arabic_" <1>        },        "arabic_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "arabic_stemmer": {          "type":       "stemmer",          "language":   "arabic"        }      },      "analyzer": {        "arabic": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "arabic_stop",            "arabic_normalization",            "arabic_keywords",            "arabic_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[armenian-analyzer]]===== `armenian` analyzerThe `armenian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "armenian_stop": {          "type":       "stop",          "stopwords":  "_armenian_" <1>        },        "armenian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "armenian_stemmer": {          "type":       "stemmer",          "language":   "armenian"        }      },      "analyzer": {        "armenian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "armenian_stop",            "armenian_keywords",            "armenian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[basque-analyzer]]===== `basque` analyzerThe `basque` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "basque_stop": {          "type":       "stop",          "stopwords":  "_basque_" <1>        },        "basque_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "basque_stemmer": {          "type":       "stemmer",          "language":   "basque"        }      },      "analyzer": {        "basque": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "basque_stop",            "basque_keywords",            "basque_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[brazilian-analyzer]]===== `brazilian` analyzerThe `brazilian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "brazilian_stop": {          "type":       "stop",          "stopwords":  "_brazilian_" <1>        },        "brazilian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "brazilian_stemmer": {          "type":       "stemmer",          "language":   "brazilian"        }      },      "analyzer": {        "brazilian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "brazilian_stop",            "brazilian_keywords",            "brazilian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[bulgarian-analyzer]]===== `bulgarian` analyzerThe `bulgarian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "bulgarian_stop": {          "type":       "stop",          "stopwords":  "_bulgarian_" <1>        },        "bulgarian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "bulgarian_stemmer": {          "type":       "stemmer",          "language":   "bulgarian"        }      },      "analyzer": {        "bulgarian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "bulgarian_stop",            "bulgarian_keywords",            "bulgarian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[catalan-analyzer]]===== `catalan` analyzerThe `catalan` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "catalan_elision": {        "type":         "elision",            "articles": [ "d", "l", "m", "n", "s", "t"]        },        "catalan_stop": {          "type":       "stop",          "stopwords":  "_catalan_" <1>        },        "catalan_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "catalan_stemmer": {          "type":       "stemmer",          "language":   "catalan"        }      },      "analyzer": {        "catalan": {          "tokenizer":  "standard",          "filter": [            "catalan_elision",            "lowercase",            "catalan_stop",            "catalan_keywords",            "catalan_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[chinese-analyzer]]===== `chinese` analyzerThe `chinese` analyzer cannot be reimplemented as a `custom` analyzerbecause it depends on the ChineseTokenizer and ChineseFilter classes,which are not exposed in Elasticsearch.  These classes aredeprecated in Lucene 4 and the `chinese` analyzer will be replacedwith the <<analysis-standard-analyzer>> in Lucene 5.[[cjk-analyzer]]===== `cjk` analyzerThe `cjk` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "english_stop": {          "type":       "stop",          "stopwords":  "_english_" <1>        }      },      "analyzer": {        "cjk": {          "tokenizer":  "standard",          "filter": [            "cjk_width",            "lowercase",            "cjk_bigram",            "english_stop"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.[[czech-analyzer]]===== `czech` analyzerThe `czech` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "czech_stop": {          "type":       "stop",          "stopwords":  "_czech_" <1>        },        "czech_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "czech_stemmer": {          "type":       "stemmer",          "language":   "czech"        }      },      "analyzer": {        "czech": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "czech_stop",            "czech_keywords",            "czech_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[danish-analyzer]]===== `danish` analyzerThe `danish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "danish_stop": {          "type":       "stop",          "stopwords":  "_danish_" <1>        },        "danish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "danish_stemmer": {          "type":       "stemmer",          "language":   "danish"        }      },      "analyzer": {        "danish": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "danish_stop",            "danish_keywords",            "danish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[dutch-analyzer]]===== `dutch` analyzerThe `dutch` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "dutch_stop": {          "type":       "stop",          "stopwords":  "_dutch_" <1>        },        "dutch_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "dutch_stemmer": {          "type":       "stemmer",          "language":   "dutch"        },        "dutch_override": {          "type":       "stemmer_override",          "rules": [            "fiets=>fiets",            "bromfiets=>bromfiets",            "ei=>eier",            "kind=>kinder"          ]        }      },      "analyzer": {        "dutch": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "dutch_stop",            "dutch_keywords",            "dutch_override",            "dutch_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[english-analyzer]]===== `english` analyzerThe `english` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "english_stop": {          "type":       "stop",          "stopwords":  "_english_" <1>        },        "english_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "english_stemmer": {          "type":       "stemmer",          "language":   "english"        },        "english_possessive_stemmer": {          "type":       "stemmer",          "language":   "possessive_english"        }      },      "analyzer": {        "english": {          "tokenizer":  "standard",          "filter": [            "english_possessive_stemmer",            "lowercase",            "english_stop",            "english_keywords",            "english_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[finnish-analyzer]]===== `finnish` analyzerThe `finnish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "finnish_stop": {          "type":       "stop",          "stopwords":  "_finnish_" <1>        },        "finnish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "finnish_stemmer": {          "type":       "stemmer",          "language":   "finnish"        }      },      "analyzer": {        "finnish": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "finnish_stop",            "finnish_keywords",            "finnish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[french-analyzer]]===== `french` analyzerThe `french` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "french_elision": {        "type":         "elision",            "articles": [ "l", "m", "t", "qu", "n", "s",                          "j", "d", "c", "jusqu", "quoiqu",                          "lorsqu", "puisqu"                        ]        },        "french_stop": {          "type":       "stop",          "stopwords":  "_french_" <1>        },        "french_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "french_stemmer": {          "type":       "stemmer",          "language":   "light_french"        }      },      "analyzer": {        "french": {          "tokenizer":  "standard",          "filter": [            "french_elision",            "lowercase",            "french_stop",            "french_keywords",            "french_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[galician-analyzer]]===== `galician` analyzerThe `galician` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "galician_stop": {          "type":       "stop",          "stopwords":  "_galician_" <1>        },        "galician_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "galician_stemmer": {          "type":       "stemmer",          "language":   "galician"        }      },      "analyzer": {        "galician": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "galician_stop",            "galician_keywords",            "galician_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[german-analyzer]]===== `german` analyzerThe `german` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "german_stop": {          "type":       "stop",          "stopwords":  "_german_" <1>        },        "german_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "german_stemmer": {          "type":       "stemmer",          "language":   "light_german"        }      },      "analyzer": {        "german": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "german_stop",            "german_keywords",            "german_normalization",            "german_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[greek-analyzer]]===== `greek` analyzerThe `greek` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "greek_stop": {          "type":       "stop",          "stopwords":  "_greek_" <1>        },        "greek_lowercase": {          "type":       "lowercase",          "language":   "greek"        },        "greek_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "greek_stemmer": {          "type":       "stemmer",          "language":   "greek"        }      },      "analyzer": {        "greek": {          "tokenizer":  "standard",          "filter": [            "greek_lowercase",            "greek_stop",            "greek_keywords",            "greek_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[hindi-analyzer]]===== `hindi` analyzerThe `hindi` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "hindi_stop": {          "type":       "stop",          "stopwords":  "_hindi_" <1>        },        "hindi_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "hindi_stemmer": {          "type":       "stemmer",          "language":   "hindi"        }      },      "analyzer": {        "hindi": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "indic_normalization",            "hindi_normalization",            "hindi_stop",            "hindi_keywords",            "hindi_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[hungarian-analyzer]]===== `hungarian` analyzerThe `hungarian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "hungarian_stop": {          "type":       "stop",          "stopwords":  "_hungarian_" <1>        },        "hungarian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "hungarian_stemmer": {          "type":       "stemmer",          "language":   "hungarian"        }      },      "analyzer": {        "hungarian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "hungarian_stop",            "hungarian_keywords",            "hungarian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[indonesian-analyzer]]===== `indonesian` analyzerThe `indonesian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "indonesian_stop": {          "type":       "stop",          "stopwords":  "_indonesian_" <1>        },        "indonesian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "indonesian_stemmer": {          "type":       "stemmer",          "language":   "indonesian"        }      },      "analyzer": {        "indonesian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "indonesian_stop",            "indonesian_keywords",            "indonesian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[irish-analyzer]]===== `irish` analyzerThe `irish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "irish_elision": {          "type":       "elision",          "articles": [ "h", "n", "t" ]        },        "irish_stop": {          "type":       "stop",          "stopwords":  "_irish_" <1>        },        "irish_lowercase": {          "type":       "lowercase",          "language":   "irish"        },        "irish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "irish_stemmer": {          "type":       "stemmer",          "language":   "irish"        }      },      "analyzer": {        "irish": {          "tokenizer":  "standard",          "filter": [            "irish_stop",            "irish_elision",            "irish_lowercase",            "irish_keywords",            "irish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[italian-analyzer]]===== `italian` analyzerThe `italian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "italian_elision": {        "type":         "elision",            "articles": [                "c", "l", "all", "dall", "dell",                "nell", "sull", "coll", "pell",                "gl", "agl", "dagl", "degl", "negl",                "sugl", "un", "m", "t", "s", "v", "d"            ]        },        "italian_stop": {          "type":       "stop",          "stopwords":  "_italian_" <1>        },        "italian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "italian_stemmer": {          "type":       "stemmer",          "language":   "light_italian"        }      },      "analyzer": {        "italian": {          "tokenizer":  "standard",          "filter": [            "italian_elision",            "lowercase",            "italian_stop",            "italian_keywords",            "italian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[latvian-analyzer]]===== `latvian` analyzerThe `latvian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "latvian_stop": {          "type":       "stop",          "stopwords":  "_latvian_" <1>        },        "latvian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "latvian_stemmer": {          "type":       "stemmer",          "language":   "latvian"        }      },      "analyzer": {        "latvian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "latvian_stop",            "latvian_keywords",            "latvian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[norwegian-analyzer]]===== `norwegian` analyzerThe `norwegian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "norwegian_stop": {          "type":       "stop",          "stopwords":  "_norwegian_" <1>        },        "norwegian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "norwegian_stemmer": {          "type":       "stemmer",          "language":   "norwegian"        }      },      "analyzer": {        "norwegian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "norwegian_stop",            "norwegian_keywords",            "norwegian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[persian-analyzer]]===== `persian` analyzerThe `persian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "char_filter": {        "zero_width_spaces": {            "type":       "mapping",            "mappings": [ "\\u200C=> "] <1>        }      },      "filter": {        "persian_stop": {          "type":       "stop",          "stopwords":  "_persian_" <2>        }      },      "analyzer": {        "persian": {          "tokenizer":     "standard",          "char_filter": [ "zero_width_spaces" ],          "filter": [            "lowercase",            "arabic_normalization",            "persian_normalization",            "persian_stop"          ]        }      }    }  }}----------------------------------------------------<1> Replaces zero-width non-joiners with an ASCII space.<2> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.[[portuguese-analyzer]]===== `portuguese` analyzerThe `portuguese` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "portuguese_stop": {          "type":       "stop",          "stopwords":  "_portuguese_" <1>        },        "portuguese_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "portuguese_stemmer": {          "type":       "stemmer",          "language":   "light_portuguese"        }      },      "analyzer": {        "portuguese": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "portuguese_stop",            "portuguese_keywords",            "portuguese_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[romanian-analyzer]]===== `romanian` analyzerThe `romanian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "romanian_stop": {          "type":       "stop",          "stopwords":  "_romanian_" <1>        },        "romanian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "romanian_stemmer": {          "type":       "stemmer",          "language":   "romanian"        }      },      "analyzer": {        "romanian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "romanian_stop",            "romanian_keywords",            "romanian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[russian-analyzer]]===== `russian` analyzerThe `russian` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "russian_stop": {          "type":       "stop",          "stopwords":  "_russian_" <1>        },        "russian_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "russian_stemmer": {          "type":       "stemmer",          "language":   "russian"        }      },      "analyzer": {        "russian": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "russian_stop",            "russian_keywords",            "russian_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[sorani-analyzer]]===== `sorani` analyzerThe `sorani` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "sorani_stop": {          "type":       "stop",          "stopwords":  "_sorani_" <1>        },        "sorani_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "sorani_stemmer": {          "type":       "stemmer",          "language":   "sorani"        }      },      "analyzer": {        "sorani": {          "tokenizer":  "standard",          "filter": [            "sorani_normalization",            "lowercase",            "sorani_stop",            "sorani_keywords",            "sorani_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[spanish-analyzer]]===== `spanish` analyzerThe `spanish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "spanish_stop": {          "type":       "stop",          "stopwords":  "_spanish_" <1>        },        "spanish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "spanish_stemmer": {          "type":       "stemmer",          "language":   "light_spanish"        }      },      "analyzer": {        "spanish": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "spanish_stop",            "spanish_keywords",            "spanish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[swedish-analyzer]]===== `swedish` analyzerThe `swedish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "swedish_stop": {          "type":       "stop",          "stopwords":  "_swedish_" <1>        },        "swedish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "swedish_stemmer": {          "type":       "stemmer",          "language":   "swedish"        }      },      "analyzer": {        "swedish": {          "tokenizer":  "standard",          "filter": [            "lowercase",            "swedish_stop",            "swedish_keywords",            "swedish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[turkish-analyzer]]===== `turkish` analyzerThe `turkish` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "turkish_stop": {          "type":       "stop",          "stopwords":  "_turkish_" <1>        },        "turkish_lowercase": {          "type":       "lowercase",          "language":   "turkish"        },        "turkish_keywords": {          "type":       "keyword_marker",          "keywords":   [] <2>        },        "turkish_stemmer": {          "type":       "stemmer",          "language":   "turkish"        }      },      "analyzer": {        "turkish": {          "tokenizer":  "standard",          "filter": [            "apostrophe",            "turkish_lowercase",            "turkish_stop",            "turkish_keywords",            "turkish_stemmer"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.<2> This filter should be removed unless there are words which should    be excluded from stemming.[[thai-analyzer]]===== `thai` analyzerThe `thai` analyzer could be reimplemented as a `custom` analyzer as follows:[source,js]----------------------------------------------------{  "settings": {    "analysis": {      "filter": {        "thai_stop": {          "type":       "stop",          "stopwords":  "_thai_" <1>        }      },      "analyzer": {        "thai": {          "tokenizer":  "thai",          "filter": [            "lowercase",            "thai_stop"          ]        }      }    }  }}----------------------------------------------------<1> The default stopwords can be overridden with the `stopwords`    or `stopwords_path` parameters.
 |