[[query-dsl-mlt-query]]
=== More Like This Query

The More Like This Query (MLT Query) finds documents that are "like" a given
set of documents. In order to do so, MLT selects a set of representative terms
of these input documents, forms a query using these terms, executes the query,
and returns the results. The user controls the input documents, how the terms
should be selected, and how the query is formed. `more_like_this` can be
shortened to `mlt` deprecated[5.0.0,Use `more_like_this` instead].

The simplest use case consists of asking for documents that are similar to a
provided piece of text. Here, we are asking for all movies that have some text
similar to "Once upon a time" in their "title" and in their "description"
fields, limiting the number of selected terms to 12.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : "Once upon a time",
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

A more complicated use case consists of mixing texts with documents already
existing in the index. In this case, the syntax to specify a document is
similar to the one used in the <<docs-multi-get,Multi GET API>>.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : [
            {
                "_index" : "imdb",
                "_type" : "movies",
                "_id" : "1"
            },
            {
                "_index" : "imdb",
                "_type" : "movies",
                "_id" : "2"
            },
            "and potentially some more text here as well"
        ],
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

Finally, users can mix some texts, a chosen set of documents, and also provide
documents not necessarily present in the index. To provide documents not
present in the index, the syntax is similar to
<<docs-termvectors-artificial-doc,artificial documents>>.

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["name.first", "name.last"],
        "like" : [
            {
                "_index" : "marvel",
                "_type" : "quotes",
                "doc" : {
                    "name": {
                        "first": "Ben",
                        "last": "Grimm"
                    },
                    "tweet": "You got no idea what I'd... what I'd give to be invisible."
                }
            },
            {
                "_index" : "marvel",
                "_type" : "quotes",
                "_id" : "2"
            }
        ],
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------

==== How it Works

Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be its best match for that type of
query. And the reason would be mostly, according to the
link:https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[Lucene scoring formula],
due to the terms with the highest tf-idf. Therefore, the terms of the input
document that have the highest tf-idf are good representatives of that
document, and could be used within a disjunctive query (or `OR`) to retrieve
similar documents. The MLT query simply extracts the text from the input
document, analyzes it, usually using the same analyzer as the field, then
selects the top K terms with the highest tf-idf to form a disjunctive query of
these terms.
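
As a rough illustration only (not the exact query MLT builds internally), the
selected terms can be thought of as feeding a hand-written disjunctive query
like the following, where the terms shown are purely illustrative:

[source,js]
--------------------------------------------------
{
    "bool" : {
        "should" : [
            { "term" : { "title" : "once" } },
            { "term" : { "title" : "upon" } },
            { "term" : { "description" : "time" } }
        ],
        "minimum_should_match" : "30%"
    }
}
--------------------------------------------------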

IMPORTANT: The fields on which to perform MLT must be indexed and of type
`text`. Additionally, when using `like` with documents, either `_source`
must be enabled or the fields must be `stored` or store `term_vector`. In
order to speed up analysis, it could help to store term vectors at index time.

For example, if we wish to perform MLT on the "title" and "tags.raw" fields,
we can explicitly store their `term_vector` at index time. We can still
perform MLT on the "description" and "tags" fields, as `_source` is enabled by
default, but there will be no speed up on analysis for these fields.

[source,js]
--------------------------------------------------
curl -s -XPUT 'http://localhost:9200/imdb/' -d '{
  "mappings": {
    "movies": {
      "properties": {
        "title": {
          "type": "text",
          "term_vector": "yes"
        },
        "description": {
          "type": "text"
        },
        "tags": {
          "type": "text",
          "fields" : {
            "raw": {
              "type" : "text",
              "analyzer": "keyword",
              "term_vector" : "yes"
            }
          }
        }
      }
    }
  }
}'
--------------------------------------------------

==== Parameters

The only required parameter is `like`; all other parameters have sensible
defaults. There are three types of parameters: one to specify the document
input, one for term selection, and one for query formation.

[float]
==== Document Input Parameters

[horizontal]
`like`::
The only *required* parameter of the MLT query is `like` and follows a
versatile syntax, in which the user can specify free form text and/or a single
or multiple documents (see examples above). The syntax to specify documents is
similar to the one used by the <<docs-multi-get,Multi GET API>>. When
specifying documents, the text is fetched from `fields` unless overridden in
each document request. The text is analyzed by the analyzer at the field, but
could also be overridden. The syntax to override the analyzer at the field
follows a similar syntax to the `per_field_analyzer` parameter of the
<<docs-termvectors-per-field-analyzer,Term Vectors API>>.
Additionally, to provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.

`unlike`::
The `unlike` parameter is used in conjunction with `like` in order not to
select terms found in a chosen set of documents. In other words, we could ask
for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax
is the same as `like` (see the sketch after this list).

`fields`::
A list of fields to fetch and analyze the text from. Defaults to the `_all`
field for free text and to all possible fields for document inputs.

`like_text`::
The text to find documents like it.

`ids` or `docs`::
A list of documents following the same syntax as the <<docs-multi-get,Multi GET API>>.
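
Building on the `unlike` description above, a minimal sketch of a query that
favors documents about "Apple" while steering away from terms found in
"cake crumble tree" (the field names are illustrative):

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : "Apple",
        "unlike" : "cake crumble tree",
        "min_term_freq" : 1,
        "max_query_terms" : 12
    }
}
--------------------------------------------------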

[float]
[[mlt-query-term-selection]]
==== Term Selection Parameters

[horizontal]
`max_query_terms`::
The maximum number of query terms that will be selected. Increasing this value
gives greater accuracy at the expense of query execution speed. Defaults to
`25`.

`min_term_freq`::
The minimum term frequency below which the terms will be ignored from the
input document. Defaults to `2`.

`min_doc_freq`::
The minimum document frequency below which the terms will be ignored from the
input document. Defaults to `5`.

`max_doc_freq`::
The maximum document frequency above which the terms will be ignored from the
input document. This could be useful in order to ignore highly frequent words
such as stop words. Defaults to unbounded (`0`).

`min_word_length`::
The minimum word length below which the terms will be ignored. The old name
`min_word_len` is deprecated. Defaults to `0`.

`max_word_length`::
The maximum word length above which the terms will be ignored. The old name
`max_word_len` is deprecated. Defaults to unbounded (`0`).

`stop_words`::
An array of stop words. Any word in this set is considered "uninteresting" and
ignored. If the analyzer allows for stop words, you might want to tell MLT to
explicitly ignore them, as for the purposes of document similarity it seems
reasonable to assume that "a stop word is never interesting".

`analyzer`::
The analyzer that is used to analyze the free form text. Defaults to the
analyzer associated with the first field in `fields`. A sketch combining
several of these parameters follows this list.
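
For instance, a minimal sketch that loosens the frequency thresholds for short
documents and filters out a few common English words (the field names, values,
and stop word list are purely illustrative, not recommendations):

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : "Once upon a time",
        "min_term_freq" : 1,
        "min_doc_freq" : 1,
        "max_query_terms" : 25,
        "min_word_length" : 3,
        "stop_words" : ["a", "an", "the", "of"]
    }
}
--------------------------------------------------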

[float]
==== Query Formation Parameters

[horizontal]
`minimum_should_match`::
After the disjunctive query has been formed, this parameter controls the
number of terms that must match.
The syntax is the same as the <<query-dsl-minimum-should-match,minimum should match>> syntax.
Defaults to `"30%"`.

`boost_terms`::
Each term in the formed query could be further boosted by its tf-idf score.
This sets the boost factor to use when using this feature. Defaults to
deactivated (`0`). Any other positive value activates terms boosting with the
given boost factor.

`include`::
Specifies whether the input documents should also be included in the search
results returned. Defaults to `false`.

`boost`::
Sets the boost value of the whole query. Defaults to `1.0`. A short sketch
using these parameters follows below.
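
As a rough illustration, a sketch that requires more of the selected terms to
match, boosts them by their tf-idf score, and keeps the input document in the
results (all values here are illustrative, not recommendations):

[source,js]
--------------------------------------------------
{
    "more_like_this" : {
        "fields" : ["title", "description"],
        "like" : [
            {
                "_index" : "imdb",
                "_type" : "movies",
                "_id" : "1"
            }
        ],
        "min_term_freq" : 1,
        "max_query_terms" : 12,
        "minimum_should_match" : "50%",
        "boost_terms" : 1,
        "include" : true,
        "boost" : 1.0
    }
}
--------------------------------------------------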