[[analysis-tokenizers]]
== Tokenizers
Tokenizers are used to break a string down into a stream of terms or
tokens. A simple tokenizer might split the string into terms wherever
it encounters whitespace or punctuation.

Elasticsearch has a number of built-in tokenizers which can be used to
build <<analysis-custom-analyzer,custom analyzers>>.
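The easiest way to see what a tokenizer does is to run some text through
the `_analyze` API. The minimal sketch below (the sample sentence is
purely illustrative) asks the `standard` tokenizer to split a string:

[source,js]
--------------------------------------------------
POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--------------------------------------------------

The response lists each token along with its position and character
offsets; for the sentence above, the `standard` tokenizer should produce
terms along the lines of `[ The, 2, QUICK, Brown, Foxes, jumped, over,
the, lazy, dog's, bone ]`.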
include::tokenizers/standard-tokenizer.asciidoc[]

include::tokenizers/edgengram-tokenizer.asciidoc[]

include::tokenizers/keyword-tokenizer.asciidoc[]

include::tokenizers/letter-tokenizer.asciidoc[]

include::tokenizers/lowercase-tokenizer.asciidoc[]

include::tokenizers/ngram-tokenizer.asciidoc[]

include::tokenizers/whitespace-tokenizer.asciidoc[]

include::tokenizers/pattern-tokenizer.asciidoc[]

include::tokenizers/uaxurlemail-tokenizer.asciidoc[]

include::tokenizers/pathhierarchy-tokenizer.asciidoc[]

include::tokenizers/classic-tokenizer.asciidoc[]

include::tokenizers/thai-tokenizer.asciidoc[]