
Removes the outdated Lucene experimental flag from analyzer documentation (#53217)

This change removes the Lucene experimental flag from the documentation of the following
tokenizers/filters:
  * Simple Pattern Split Tokenizer
  * Simple Pattern Tokenizer
  * Flatten Graph Token Filter
  * Word Delimiter Graph Token Filter

The flag is still present in the Lucene codebase, but these tokenizers/filters have been fully
supported in ES for a long time now, so the docs flag is misleading.

Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Jim Ferenczi · 5 years ago · commit 9ad0597617

+ 0 - 2
docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc

@@ -4,8 +4,6 @@
 <titleabbrev>Flatten graph</titleabbrev>
 ++++
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `flatten_graph` token filter accepts an arbitrary graph token
 stream, such as that produced by
 <<analysis-synonym-graph-tokenfilter>>, and flattens it into a single

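For context, the `flatten_graph` filter is typically applied at index time, chained after a graph-producing filter such as `synonym_graph`. A minimal sketch (index name, analyzer name, and synonym list are illustrative):

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_index_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "my_synonyms", "flatten_graph" ]
        }
      },
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms": [ "dns, domain name system" ]
        }
      }
    }
  }
}

Without `flatten_graph`, the multi-token synonym produces a token graph that the index writer cannot consume; the filter collapses it into a linear stream at the cost of some positional accuracy.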
+ 0 - 2
docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc

@@ -1,8 +1,6 @@
 [[analysis-simplepattern-tokenizer]]
 === Simple Pattern Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern` tokenizer uses a regular expression to capture matching
 text as terms. The set of regular expression features it supports is more
 limited than the <<analysis-pattern-tokenizer,`pattern`>> tokenizer, but the

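To illustrate, a minimal sketch of a `simple_pattern` tokenizer that captures runs of three digits as terms (index and analyzer names are illustrative):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern",
          "pattern": "[0123456789]{3}"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "fd-786-335-514-x"
}

This should produce the terms [ 786, 335, 514 ]; text that does not match the pattern is discarded.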
+ 0 - 2
docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc

@@ -1,8 +1,6 @@
 [[analysis-simplepatternsplit-tokenizer]]
 === Simple Pattern Split Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern_split` tokenizer uses a regular expression to split the
 input into terms at pattern matches. The set of regular expression features it
 supports is more limited than the <<analysis-pattern-tokenizer,`pattern`>>
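For comparison, a minimal sketch of a `simple_pattern_split` tokenizer that splits on underscores, so the pattern marks the separators rather than the terms (index and analyzer names are illustrative):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "simple_pattern_split",
          "pattern": "_"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "an_underscored_phrase"
}

This should produce the terms [ an, underscored, phrase ].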