---
navigation_title: "Stemmer"
---

# Stemmer token filter
Provides algorithmic stemming for several languages, some with additional variants. For a list of supported languages, see the [`language`](#analysis-stemmer-tokenfilter-language-parm) parameter.
When not customized, the filter uses the `porter` stemming algorithm for English.
The following analyze API request uses the `stemmer` filter's default `porter` stemming algorithm to stem `the foxes jumping quickly` to `the fox jump quickli`:
```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [ "stemmer" ],
  "text": "the foxes jumping quickly"
}
```
The filter produces the following tokens:
```text
[ the, fox, jump, quickli ]
```
The following create index API request uses the `stemmer` filter to configure a new custom analyzer.
```console
PUT /my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "whitespace",
          "filter": [ "stemmer" ]
        }
      }
    }
  }
}
```
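Once the index exists, a quick way to verify the analyzer is to run the analyze API against it. This is a minimal sketch using the index and sample text from the request above; the resulting tokens come from the default `porter` stemmer.

```console
GET /my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "the foxes jumping quickly"
}
```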
$$$analysis-stemmer-tokenfilter-language-parm$$$

`language`
:   (Optional, string) Language-dependent stemming algorithm used to stem tokens. If both this and the `name` parameter are specified, the `language` parameter argument is used.
:::{dropdown} Valid values for `language`
Valid values are sorted by language. Defaults to **`english`**. Recommended algorithms are **bolded**.

- **`arabic`**
- **`armenian`**
- **`basque`**
- **`bengali`**
- **`brazilian`**
- **`bulgarian`**
- **`catalan`**
- **`czech`**
- **`danish`**
- **`english`**
- `light_english`
- `lovins` (deprecated in 8.16.0)
- `minimal_english`
- `porter2`
- `possessive_english`
- **`estonian`**
- **`galician`**
- `minimal_galician` (plural step only)
- **`greek`**
- **`hindi`**
- **`indonesian`**
- **`irish`**
- **`sorani`**
- **`latvian`**
- **`lithuanian`**
- **`persian`**
- **`romanian`**
- **`serbian`**
- **`turkish`**
:::

`name`
:   An alias for the `language` parameter. If both this and the `language` parameter are specified, the `language` parameter argument is used.
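As a quick illustration of the `language` parameter (a sketch, not part of the original examples), the analyze API also accepts an inline filter definition, so you can try a specific algorithm without creating an index. The `light_english` value here is just one of the values listed above.

```console
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "stemmer", "language": "light_english" }
  ],
  "text": "the foxes jumping quickly"
}
```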
To customize the `stemmer` filter, duplicate it to create the basis for a new custom token filter. You can modify the filter using its configurable parameters.
For example, the following request creates a custom `stemmer` filter that stems words using the `light_german` algorithm:
```console
PUT /my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "my_stemmer"
          ]
        }
      },
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "light_german"
        }
      }
    }
  }
}
```
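To check the custom filter, you can run the analyze API against the index with some German text. This is a sketch; the sample word is arbitrary, and the exact tokens returned depend on the `light_german` algorithm.

```console
GET /my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "Häuser"
}
```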