[[docs-termvectors]]
== Term Vectors

Returns information and statistics on terms in the fields of a particular
document. The document could be stored in the index or artificially provided
by the user. Term vectors are <<realtime,realtime>> by default, not near
realtime. This can be changed by setting the `realtime` parameter to `false`.

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true'
--------------------------------------------------
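
For instance, to skip the realtime behaviour and read only from the last
refreshed state, the request could look like the following (a sketch; it
assumes `realtime` is accepted as a URL parameter, which the text above does
not spell out):

[source,js]
--------------------------------------------------
# sketch: assumes realtime can be passed as a URL parameter
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?realtime=false&pretty=true'
--------------------------------------------------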

Optionally, you can specify the fields for which the information is
retrieved either with a parameter in the URL

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?fields=text,...'
--------------------------------------------------

or by adding the requested fields in the request body (see
example below). Fields can also be specified with wildcards,
in a similar way to the <<query-dsl-multi-match-query,multi match query>>.
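
For instance, the request body below asks for term vectors of the `text`
field plus any field starting with `full` (a sketch; the pattern `full*` is
only an illustration and happens to match the `fullname` field of the example
index further down):

[source,js]
--------------------------------------------------
# sketch: the wildcard pattern full* is purely illustrative
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text", "full*"]
}'
--------------------------------------------------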

[float]
=== Return values

Three types of values can be requested: _term information_, _term statistics_
and _field statistics_. By default, all term information and field
statistics are returned for all fields, but no term statistics.

[float]
==== Term information

* term frequency in the field (always returned)
* term positions (`positions` : true)
* start and end offsets (`offsets` : true)
* term payloads (`payloads` : true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be
computed on the fly if possible. Additionally, term vectors can be computed
for documents that do not exist in the index, but are instead provided by the user.

[WARNING]
======
Start and end offsets assume UTF-16 encoding is being used. If you want to use
these offsets in order to get the original text that produced this token, you
should make sure that the string you are taking a sub-string of is also encoded
using UTF-16.
======
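
A minimal illustration of this point, using JavaScript only because its
strings are indexed by UTF-16 code units; the offsets are taken from the
example response further down and refer to the text
`"twitter test test test "` of document `1`:

[source,js]
--------------------------------------------------
// start_offset/end_offset count UTF-16 code units, so they can be used
// directly with substring() in a UTF-16 based language such as JavaScript.
var text = "twitter test test test ";
var token = text.substring(8, 12); // start_offset: 8, end_offset: 12
console.log(token);                // prints "test"
--------------------------------------------------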

[float]
==== Term statistics

Setting `term_statistics` to `true` (default is `false`) will
return

* total term frequency (how often a term occurs in all documents)
* document frequency (the number of documents containing the current
  term)

By default these values are not returned since term statistics can
have a serious performance impact.
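
For instance, a request asking only for the term statistics of the `text`
field could look like this (a sketch; it reuses the `twitter` index and
document `1` created in the examples further down):

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "term_statistics" : true
}'
--------------------------------------------------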

[float]
==== Field statistics

Setting `field_statistics` to `false` (default is `true`) will
omit:

* document count (how many documents contain this field)
* sum of document frequencies (the sum of document frequencies for all
  terms in this field)
* sum of total term frequencies (the sum of total term frequencies of
  each term in this field)
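
Conversely, field statistics can be switched off in the request body, for
example as follows (a sketch based on the same `twitter` example index used
below):

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "field_statistics" : false
}'
--------------------------------------------------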

[float]
==== Distributed frequencies coming[2.0]

Setting `dfs` to `true` (default is `false`) will return the term statistics
or the field statistics of the entire index, and not just those of the shard
the requested document resides in. Use it with caution as distributed
frequencies can have a serious performance impact.
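
A request using this option might then look like the following (a sketch
only, since `dfs` is marked as coming in 2.0; it assumes the flag is passed
in the request body alongside the other options):

[source,js]
--------------------------------------------------
# sketch only: dfs is coming[2.0] and is assumed to be a body option here
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "term_statistics" : true,
  "dfs" : true
}'
--------------------------------------------------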

[float]
=== Behaviour

The term and field statistics are not accurate. Deleted documents
are not taken into account. The information is only retrieved for the
shard the requested document resides in, unless `dfs` is set to `true`.
The term and field statistics are therefore only useful as relative measures
whereas the absolute numbers have no meaning in this context. By default,
when requesting term vectors of artificial documents, a shard to get the statistics
from is randomly selected. Use `routing` only to hit a particular shard.
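
For example, the statistics for an artificial document could be pinned to a
particular shard with the `routing` parameter (a sketch; the routing value
`user1` is purely illustrative, and the body uses the artificial-document
syntax shown in <<docs-termvectors-artificial-doc,example 3>> below):

[source,js]
--------------------------------------------------
# sketch: the routing value user1 is purely illustrative
curl -XGET 'http://localhost:9200/twitter/tweet/_termvector?routing=user1' -d '{
  "doc" : {
    "text" : "twitter test test test"
  }
}'
--------------------------------------------------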

[float]
=== Example 1

First, we create an index that stores term vectors, payloads, etc.:

[source,js]
--------------------------------------------------
curl -s -XPUT 'http://localhost:9200/twitter/' -d '{
  "mappings": {
    "tweet": {
      "properties": {
        "text": {
          "type": "string",
          "term_vector": "with_positions_offsets_payloads",
          "store" : true,
          "index_analyzer" : "fulltext_analyzer"
        },
        "fullname": {
          "type": "string",
          "term_vector": "with_positions_offsets_payloads",
          "index_analyzer" : "fulltext_analyzer"
        }
      }
    }
  },
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 0
    },
    "analysis": {
      "analyzer": {
        "fulltext_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase",
            "type_as_payload"
          ]
        }
      }
    }
  }
}'
--------------------------------------------------

Second, we add some documents:

[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty=true' -d '{
  "fullname" : "John Doe",
  "text" : "twitter test test test "
}'

curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty=true' -d '{
  "fullname" : "Jane Doe",
  "text" : "Another twitter test ..."
}'
--------------------------------------------------

The following request returns all information and statistics for field
`text` in document `1` (John Doe):

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}'
--------------------------------------------------

Response:

[source,js]
--------------------------------------------------
{
    "_id": "1",
    "_index": "twitter",
    "_type": "tweet",
    "_version": 1,
    "found": true,
    "term_vectors": {
        "text": {
            "field_statistics": {
                "doc_count": 2,
                "sum_doc_freq": 6,
                "sum_ttf": 8
            },
            "terms": {
                "test": {
                    "doc_freq": 2,
                    "term_freq": 3,
                    "tokens": [
                        {
                            "end_offset": 12,
                            "payload": "d29yZA==",
                            "position": 1,
                            "start_offset": 8
                        },
                        {
                            "end_offset": 17,
                            "payload": "d29yZA==",
                            "position": 2,
                            "start_offset": 13
                        },
                        {
                            "end_offset": 22,
                            "payload": "d29yZA==",
                            "position": 3,
                            "start_offset": 18
                        }
                    ],
                    "ttf": 4
                },
                "twitter": {
                    "doc_freq": 2,
                    "term_freq": 1,
                    "tokens": [
                        {
                            "end_offset": 7,
                            "payload": "d29yZA==",
                            "position": 0,
                            "start_offset": 0
                        }
                    ],
                    "ttf": 2
                }
            }
        }
    }
}
--------------------------------------------------

[float]
=== Example 2

Term vectors which are not explicitly stored in the index are automatically
computed on the fly. The following request returns all information and statistics for the
fields in document `1`, even though the terms haven't been explicitly stored in the index.
Note that for the field `text`, the terms are not re-generated.

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}'
--------------------------------------------------

[float]
[[docs-termvectors-artificial-doc]]
=== Example 3

Term vectors can also be generated for artificial documents,
that is, for documents not present in the index. The syntax is similar to the
<<search-percolate,percolator>> API. For example, the following request would
return the same results as in example 1. The mapping used is determined by the
`index` and `type`.

[WARNING]
======
If dynamic mapping is turned on (default), document fields that are not in the
original mapping will be dynamically created.
======

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/_termvector' -d '{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "twitter test test test"
  }
}'
--------------------------------------------------

[float]
[[docs-termvectors-per-field-analyzer]]
=== Example 4

Additionally, a different analyzer than the one configured for the field may be
provided by using the `per_field_analyzer` parameter. This is useful in order to
generate term vectors in any fashion, especially when using artificial
documents. When providing an analyzer for a field that already stores term
vectors, the term vectors will be re-generated.

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/_termvector' -d '{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "twitter test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}'
--------------------------------------------------

Response:

[source,js]
--------------------------------------------------
{
  "_index": "twitter",
  "_type": "tweet",
  "_version": 0,
  "found": true,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
        "sum_doc_freq": 1,
        "doc_count": 1,
        "sum_ttf": 1
      },
      "terms": {
        "John Doe": {
          "term_freq": 1,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 8
            }
          ]
        }
      }
    }
  }
}
--------------------------------------------------