
Fixed a small typo in tokenizer-reference.md (#135138)

Co-authored-by: Szymon Bialkowski <szymon.bialkowski@elastic.co>
Imad Saddik 2 weeks ago
parent
commit
a3e9a2ade0
1 changed file with 1 addition and 1 deletion

docs/reference/text-analysis/tokenizer-reference.md  +1 −1

@@ -8,7 +8,7 @@ mapped_pages:
 ::::{admonition} Difference between {{es}} tokenization and neural tokenization
 :class: note
 
-{{es}}'s tokenization process produces linguistic tokens, optimized for search and retrieval. This differs from neural tokenization in the context of machine learning and natural language processing. Neural tokenizers translate strings into smaller, subword tokens, which are encoded into vectors for consumptions by neural networks. {{es}} does not have built-in neural tokenizers.
+{{es}}'s tokenization process produces linguistic tokens, optimized for search and retrieval. This differs from neural tokenization in the context of machine learning and natural language processing. Neural tokenizers translate strings into smaller, subword tokens, which are encoded into vectors for consumption by neural networks. {{es}} does not have built-in neural tokenizers.
 
 ::::
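
For context on the corrected sentence: the linguistic tokens that {{es}} produces can be inspected directly with the `_analyze` API. The sketch below is illustrative only and is not part of this change; it assumes a locally reachable cluster at `http://localhost:9200` and the official `elasticsearch` Python client, and the sample text and printed output are hypothetical.

```python
# Illustrative sketch (not part of this commit): inspecting the linguistic
# tokens produced by Elasticsearch's standard tokenizer via the _analyze API.
# Assumes a local cluster at http://localhost:9200 and the official
# `elasticsearch` Python client; the sample text is arbitrary.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The standard tokenizer splits text into whole, linguistic tokens,
# optimized for search and retrieval (no subword pieces, no vectors).
resp = es.indices.analyze(tokenizer="standard", text="Neural networks consume vectors")
print([t["token"] for t in resp["tokens"]])
# Expected output (roughly): ['Neural', 'networks', 'consume', 'vectors']
```

A neural (subword) tokenizer would instead split the same string into smaller pieces and encode them as vectors for consumption by a neural network, which is the distinction the note draws; {{es}} does not ship such a tokenizer.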