[[analysis-stop-analyzer]]
=== Stop Analyzer

The `stop` analyzer is the same as the <<analysis-simple-analyzer,`simple` analyzer>>
but adds support for removing stop words. It defaults to using the
`_english_` stop words.

[float]
=== Definition

It consists of:

Tokenizer::
* <<analysis-lowercase-tokenizer,Lower Case Tokenizer>>

Token filters::
* <<analysis-stop-tokenfilter,Stop Token Filter>>

[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "analyzer": "stop",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 5
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above sentence would produce the following terms:

[source,text]
---------------------------
[ quick, brown, foxes, jumped, over, lazy, dog, s, bone ]
---------------------------
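
The pipeline above can be approximated in a few lines of Python. This is a sketch, not the Lucene implementation: the Lower Case Tokenizer is modelled as "split on runs of letters and lowercase", and the stop set below is the default `_english_` list.

```python
import re

# Default `_english_` stop word set (Lucene's English stop words).
ENGLISH_STOP_WORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}

def stop_analyze(text, stopwords=frozenset(ENGLISH_STOP_WORDS)):
    """Approximate the `stop` analyzer: tokenize on letters,
    lowercase each token, then drop tokens in the stop set."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", text)]
    return [t for t in tokens if t not in stopwords]

print(stop_analyze("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."))
# ['quick', 'brown', 'foxes', 'jumped', 'over', 'lazy', 'dog', 's', 'bone']
```

Note that `over` survives because it is not in the `_english_` stop list, while both occurrences of `the` are removed.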
[float]
=== Configuration

The `stop` analyzer accepts the following parameters:

[horizontal]
`stopwords`::

    A pre-defined stop words list like `_english_` or an array containing a
    list of stop words. Defaults to `_english_`.

`stopwords_path`::

    The path to a file containing stop words.

See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
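
For `stopwords_path`, a hypothetical file-based configuration might look like the following (the file name here is illustrative; the path is resolved relative to the Elasticsearch config directory, with one stop word per line):

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_file_stop_analyzer": {
          "type": "stop",
          "stopwords_path": "stopwords/english_custom.txt"
        }
      }
    }
  }
}
----------------------------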
[float]
=== Example configuration

In this example, we configure the `stop` analyzer to use a specified list of
words as stop words:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stop_analyzer": {
          "type": "stop",
          "stopwords": ["the", "over"]
        }
      }
    }
  }
}

GET _cluster/health?wait_for_status=yellow

POST my_index/_analyze
{
  "analyzer": "my_stop_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
----------------------------
// CONSOLE
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 1
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 2
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 3
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 4
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 7
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 8
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 9
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 10
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above example produces the following terms:

[source,text]
---------------------------
[ quick, brown, foxes, jumped, lazy, dog, s, bone ]
---------------------------
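
The effect of the custom stop list can be checked with the same Python sketch used earlier (again an approximation of the Lower Case Tokenizer, not the Lucene code): with `stopwords` set to `["the", "over"]`, the token `over` is now removed as well.

```python
import re

def stop_analyze(text, stopwords):
    """Approximate the `stop` analyzer with a custom stop word set:
    tokenize on letters, lowercase, then drop tokens in the set."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", text)]
    return [t for t in tokens if t not in stopwords]

print(stop_analyze(
    "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    stopwords={"the", "over"},
))
# ['quick', 'brown', 'foxes', 'jumped', 'lazy', 'dog', 's', 'bone']
```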