[[docs-termvectors]]
== Term Vectors

added[1.0.0.Beta1]

Returns information and statistics on terms in the fields of a
particular document as stored in the index.

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true'
--------------------------------------------------

Optionally, you can specify the fields for which the information is
retrieved either with a parameter in the url

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?fields=text,...'
--------------------------------------------------

or by adding the requested fields in the request body (see
example below).
[float]
=== Return values

Three types of values can be requested: _term information_, _term statistics_
and _field statistics_. By default, all term information and field
statistics are returned for all fields but no term statistics.

[float]
==== Term information

* term frequency in the field (always returned)
* term positions (`positions` : true)
* start and end offsets (`offsets` : true)
* term payloads (`payloads` : true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be
omitted without further warning. See <<mapping-types,type mapping>>
for how to configure your index to store term vectors.
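For instance, a request body that retrieves only positions and payloads
for the `text` field, while switching off the offsets that would
otherwise be returned by default, might look like this:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "positions" : true,
  "payloads" : true,
  "offsets" : false
}'
--------------------------------------------------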
[WARNING]
======
Start and end offsets assume UTF-16 encoding is being used. If you want to use
these offsets in order to get the original text that produced this token, you
should make sure that the string you are taking a sub-string of is also encoded
using UTF-16.
======
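As a minimal sketch, in JavaScript, whose strings are sequences of
UTF-16 code units, the offsets can be applied directly. Here `text` is
assumed to hold the original field value and `token` one entry of a
`tokens` array as returned by this API (see the example response below):

[source,js]
--------------------------------------------------
// String.prototype.substring indexes UTF-16 code units,
// matching the encoding the offsets assume.
var original = text.substring(token.start_offset, token.end_offset);
--------------------------------------------------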
[float]
==== Term statistics

Setting `term_statistics` to `true` (default is `false`) will
return:

* total term frequency (how often a term occurs in all documents)
* document frequency (the number of documents containing the current
  term)

By default these values are not returned since term statistics can
have a serious performance impact.
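For example, the following request asks for term statistics on top of
the default term information for field `text` of document `1`:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "term_statistics" : true
}'
--------------------------------------------------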
[float]
==== Field statistics

Setting `field_statistics` to `false` (default is `true`) will
omit:

* document count (how many documents contain this field)
* sum of document frequencies (the sum of document frequencies for all
  terms in this field)
* sum of total term frequencies (the sum of total term frequencies of
  each term in this field)
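Conversely, field statistics can be skipped by setting the flag to
`false`, for example:

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "field_statistics" : false
}'
--------------------------------------------------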
[float]
=== Behaviour

The term and field statistics are not accurate. Deleted documents
are not taken into account. The information is only retrieved for the
shard the requested document resides in. The term and field statistics
are therefore only useful as relative measures whereas the absolute
numbers have no meaning in this context.
[float]
=== Example

First, we create an index that stores term vectors, payloads etc.:

[source,js]
--------------------------------------------------
curl -s -XPUT 'http://localhost:9200/twitter/' -d '{
  "mappings": {
    "tweet": {
      "properties": {
        "text": {
          "type": "string",
          "term_vector": "with_positions_offsets_payloads",
          "store" : true,
          "index_analyzer" : "fulltext_analyzer"
        },
        "fullname": {
          "type": "string",
          "term_vector": "with_positions_offsets_payloads",
          "index_analyzer" : "fulltext_analyzer"
        }
      }
    }
  },
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 0
    },
    "analysis": {
      "analyzer": {
        "fulltext_analyzer": {
          "type": "custom",
          "tokenizer": "whitespace",
          "filter": [
            "lowercase",
            "type_as_payload"
          ]
        }
      }
    }
  }
}'
--------------------------------------------------
Second, we add some documents:

[source,js]
--------------------------------------------------
curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty=true' -d '{
  "fullname" : "John Doe",
  "text" : "twitter test test test "
}'

curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty=true' -d '{
  "fullname" : "Jane Doe",
  "text" : "Another twitter test ..."
}'
--------------------------------------------------
The following request returns all information and statistics for field
`text` in document `1` (John Doe):

[source,js]
--------------------------------------------------
curl -XGET 'http://localhost:9200/twitter/tweet/1/_termvector?pretty=true' -d '{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}'
--------------------------------------------------
Response:

[source,js]
--------------------------------------------------
{
    "_id": "1",
    "_index": "twitter",
    "_type": "tweet",
    "_version": 1,
    "found": true,
    "term_vectors": {
        "text": {
            "field_statistics": {
                "doc_count": 2,
                "sum_doc_freq": 6,
                "sum_ttf": 8
            },
            "terms": {
                "test": {
                    "doc_freq": 2,
                    "term_freq": 3,
                    "tokens": [
                        {
                            "end_offset": 12,
                            "payload": "d29yZA==",
                            "position": 1,
                            "start_offset": 8
                        },
                        {
                            "end_offset": 17,
                            "payload": "d29yZA==",
                            "position": 2,
                            "start_offset": 13
                        },
                        {
                            "end_offset": 22,
                            "payload": "d29yZA==",
                            "position": 3,
                            "start_offset": 18
                        }
                    ],
                    "ttf": 4
                },
                "twitter": {
                    "doc_freq": 2,
                    "term_freq": 1,
                    "tokens": [
                        {
                            "end_offset": 7,
                            "payload": "d29yZA==",
                            "position": 0,
                            "start_offset": 0
                        }
                    ],
                    "ttf": 2
                }
            }
        }
    }
}
--------------------------------------------------