[[analysis-tokenizers]]
== Tokenizers

Tokenizers are used to break a string down into a stream of terms
or tokens. A simple tokenizer might split the string up into terms
wherever it encounters whitespace or punctuation.
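
For example, the `_analyze` API can show how the `whitespace`
tokenizer (documented below) splits a piece of text. This is a minimal
sketch, assuming a version of Elasticsearch in which `_analyze` accepts
a JSON request body:

[source,js]
--------------------------------------------------
POST _analyze
{
  "tokenizer": "whitespace",
  "text":      "The quick brown fox."
}
--------------------------------------------------

Because the `whitespace` tokenizer splits only on whitespace, the
trailing period stays attached to the last term, producing the tokens
`[The, quick, brown, fox.]`.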

Elasticsearch has a number of built-in tokenizers which can be
used to build <<analysis-custom-analyzer,custom analyzers>>.
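
To illustrate, a custom analyzer might combine the built-in `standard`
tokenizer with the `lowercase` token filter. The index name `my_index`
and analyzer name `my_analyzer` below are placeholders:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type":      "custom",
          "tokenizer": "standard",
          "filter":    [ "lowercase" ]
        }
      }
    }
  }
}
--------------------------------------------------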

include::tokenizers/standard-tokenizer.asciidoc[]

include::tokenizers/edgengram-tokenizer.asciidoc[]

include::tokenizers/keyword-tokenizer.asciidoc[]

include::tokenizers/letter-tokenizer.asciidoc[]

include::tokenizers/lowercase-tokenizer.asciidoc[]

include::tokenizers/ngram-tokenizer.asciidoc[]

include::tokenizers/whitespace-tokenizer.asciidoc[]

include::tokenizers/pattern-tokenizer.asciidoc[]

include::tokenizers/uaxurlemail-tokenizer.asciidoc[]

include::tokenizers/pathhierarchy-tokenizer.asciidoc[]

include::tokenizers/classic-tokenizer.asciidoc[]

include::tokenizers/thai-tokenizer.asciidoc[]