
[[breaking_70_analysis_changes]]
=== Analysis changes

==== Limiting the number of tokens produced by _analyze

To safeguard against out-of-memory errors, the number of tokens that can be produced
using the `_analyze` endpoint has been limited to 10000. This default limit can be changed
for a particular index with the index setting `index.analyze.max_token_count`.
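
For example, a minimal sketch of raising the limit at index-creation time (the index
name `my_index` and the value `20000` are placeholders):

[source,console]
----
PUT my_index
{
  "settings": {
    "index.analyze.max_token_count": 20000
  }
}
----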

==== Limiting the length of an analyzed text during highlighting

Highlighting a text that was indexed without offsets or term vectors
requires analyzing the text in memory, in real time, during the search request.
For large texts this analysis may take a substantial amount of time and memory.
To protect against this, the maximum number of characters that will be analyzed has been
limited to 1000000. This default limit can be changed
for a particular index with the index setting `index.highlight.max_analyzed_offset`.
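
As a sketch, the limit could be raised for a particular index like this (the index name
and the value `2000000` are placeholders):

[source,console]
----
PUT my_index
{
  "settings": {
    "index.highlight.max_analyzed_offset": 2000000
  }
}
----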

==== `delimited_payload_filter` renaming

The `delimited_payload_filter` was deprecated and renamed to `delimited_payload` in 6.2.
Using it in indices created before 7.0 will issue deprecation warnings. Using the old
name in new indices created in 7.0 will throw an error. Use the new name `delimited_payload`
instead.
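
For instance, a minimal sketch of a custom analyzer that references the filter by its
new name (the index and analyzer names are placeholders):

[source,console]
----
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "whitespace_payloads": {
          "tokenizer": "whitespace",
          "filter": [ "delimited_payload" ]
        }
      }
    }
  }
}
----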