[[query-dsl-mlt-query]]
=== More like this query
++++
<titleabbrev>More like this</titleabbrev>
++++

The More Like This Query finds documents that are "like" a given
set of documents. In order to do so, MLT selects a set of representative terms
of these input documents, forms a query using these terms, executes the query
and returns the results. The user controls the input documents, how the terms
should be selected and how the query is formed.

The simplest use case consists of asking for documents that are similar to a
provided piece of text. Here, we are asking for all movies that have some text
similar to "Once upon a time" in their "title" and in their "description"
fields, limiting the number of selected terms to 12.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "description"],
            "like" : "Once upon a time",
            "min_term_freq" : 1,
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE

A more complicated use case consists of mixing texts with documents already
existing in the index. In this case, the syntax to specify a document is
similar to the one used in the <<docs-multi-get,Multi GET API>>.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "description"],
            "like" : [
            {
                "_index" : "imdb",
                "_id" : "1"
            },
            {
                "_index" : "imdb",
                "_id" : "2"
            },
            "and potentially some more text here as well"
            ],
            "min_term_freq" : 1,
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE

Finally, users can mix some texts, a chosen set of documents but also provide
documents not necessarily present in the index. To provide documents not
present in the index, the syntax is similar to <<docs-termvectors-artificial-doc,artificial documents>>.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["name.first", "name.last"],
            "like" : [
            {
                "_index" : "marvel",
                "doc" : {
                    "name": {
                        "first": "Ben",
                        "last": "Grimm"
                    },
                    "_doc": "You got no idea what I'd... what I'd give to be invisible."
                }
            },
            {
                "_index" : "marvel",
                "_id" : "2"
            }
            ],
            "min_term_freq" : 1,
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE

==== How it Works

Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be its best match for that type of
query. And the reason would be mostly, according to the
link:https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[Lucene scoring formula],
due to the terms with the highest tf-idf. Therefore, the terms of the input
document that have the highest tf-idf are good representatives of that
document, and could be used within a disjunctive query (or `OR`) to retrieve similar
documents. The MLT query simply extracts the text from the input document,
analyzes it, usually using the same analyzer as the field, then selects the
top K terms with the highest tf-idf to form a disjunctive query of these terms.

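Conceptually, the formed query behaves much like a hand-written disjunction over
the selected terms, with `minimum_should_match` applied. The sketch below is
illustrative only: the terms are made up, whereas in reality they come from the
tf-idf selection described above.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "bool": {
            "should": [
                { "term": { "title": "once" } },
                { "term": { "title": "upon" } },
                { "term": { "description": "time" } }
            ],
            "minimum_should_match": "30%"
        }
    }
}
--------------------------------------------------
// CONSOLE
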
IMPORTANT: The fields on which to perform MLT must be indexed and of type
`text` or `keyword`. Additionally, when using `like` with documents, either
`_source` must be enabled or the fields must be `stored` or store
`term_vector`. In order to speed up analysis, it could help to store term
vectors at index time.

For example, if we wish to perform MLT on the "title" and "tags.raw" fields,
we can explicitly store their `term_vector` at index time. We can still
perform MLT on the "description" and "tags" fields, as `_source` is enabled by
default, but there will be no speed up on analysis for these fields.

[source,js]
--------------------------------------------------
PUT /imdb
{
    "mappings": {
        "properties": {
            "title": {
                "type": "text",
                "term_vector": "yes"
            },
            "description": {
                "type": "text"
            },
            "tags": {
                "type": "text",
                "fields" : {
                    "raw": {
                        "type" : "text",
                        "analyzer": "keyword",
                        "term_vector" : "yes"
                    }
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE

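With that mapping in place, a sketch of an MLT query that references an indexed
document, so that the "title" and "tags.raw" terms can be read back from the
stored term vectors rather than re-analyzed. The document id is reused from the
examples above and is illustrative only.

[source,js]
--------------------------------------------------
GET /imdb/_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "tags.raw"],
            "like" : [
            {
                "_index" : "imdb",
                "_id" : "1"
            }
            ],
            "min_term_freq" : 1,
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE
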
==== Parameters

The only required parameter is `like`, all other parameters have sensible
defaults. There are three types of parameters: one to specify the document
input, one for term selection, and one for query formation.

[float]
==== Document Input Parameters

[horizontal]
`like`::
The only *required* parameter of the MLT query is `like` and follows a
versatile syntax, in which the user can specify free form text and/or a single
or multiple documents (see examples above). The syntax to specify documents is
similar to the one used by the <<docs-multi-get,Multi GET API>>. When
specifying documents, the text is fetched from `fields` unless overridden in
each document request. The text is analyzed by the analyzer of the field, but
could also be overridden. The syntax to override the analyzer at the field
follows a similar syntax to the `per_field_analyzer` parameter of the
<<docs-termvectors-per-field-analyzer,Term Vectors API>>.
Additionally, to provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.

`unlike`::
The `unlike` parameter is used in conjunction with `like` in order not to
select terms found in a chosen set of documents. In other words, we could ask
for documents `like: "Apple"`, but `unlike: "cake crumble tree"`, as shown in
the example after this list. The syntax is the same as `like`.

`fields`::
A list of fields to fetch and analyze the text from.

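For example, a sketch of a query that combines `like` and `unlike` over the
fields from the earlier examples; the texts are illustrative only:

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "description"],
            "like" : "Apple",
            "unlike" : "cake crumble tree",
            "min_term_freq" : 1,
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE
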
[float]
[[mlt-query-term-selection]]
==== Term Selection Parameters

[horizontal]
`max_query_terms`::
The maximum number of query terms that will be selected. Increasing this value
gives greater accuracy at the expense of query execution speed. Defaults to
`25`.

`min_term_freq`::
The minimum term frequency below which the terms will be ignored from the
input document. Defaults to `2`.

`min_doc_freq`::
The minimum document frequency below which the terms will be ignored from the
input document. Defaults to `5`.

`max_doc_freq`::
The maximum document frequency above which the terms will be ignored from the
input document. This could be useful in order to ignore highly frequent words
such as stop words. Defaults to unbounded (`0`).

`min_word_length`::
The minimum word length below which the terms will be ignored. The old name
`min_word_len` is deprecated. Defaults to `0`.

`max_word_length`::
The maximum word length above which the terms will be ignored. The old name
`max_word_len` is deprecated. Defaults to unbounded (`0`).

`stop_words`::
An array of stop words. Any word in this set is considered "uninteresting" and
ignored. If the analyzer allows for stop words, you might want to tell MLT to
explicitly ignore them, as for the purposes of document similarity it seems
reasonable to assume that "a stop word is never interesting".

`analyzer`::
The analyzer that is used to analyze the free form text. Defaults to the
analyzer associated with the first field in `fields`.

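These parameters can be combined to keep only reasonably rare and reasonably
long terms. The sketch below is illustrative only: the thresholds and stop
words are made-up values, not recommendations.

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "description"],
            "like" : "Once upon a time",
            "min_term_freq" : 1,
            "min_doc_freq" : 2,
            "max_doc_freq" : 500,
            "min_word_length" : 3,
            "max_word_length" : 20,
            "stop_words" : ["a", "the", "upon"],
            "max_query_terms" : 12
        }
    }
}
--------------------------------------------------
// CONSOLE
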
[float]
==== Query Formation Parameters

[horizontal]
`minimum_should_match`::
After the disjunctive query has been formed, this parameter controls the
number of terms that must match.
The syntax is the same as the <<query-dsl-minimum-should-match,minimum should match>>.
(Defaults to `"30%"`).

`fail_on_unsupported_field`::
Controls whether the query should fail (throw an exception) if any of the
specified fields are not of the supported types
(`text` or `keyword`). Set this to `false` to ignore the field and continue
processing. Defaults to `true`.

`boost_terms`::
Each term in the formed query could be further boosted by its tf-idf score.
This sets the boost factor to use when using this feature. Defaults to
deactivated (`0`). Any other positive value activates terms boosting with the
given boost factor.

`include`::
Specifies whether the input documents should also be included in the search
results returned. Defaults to `false`.

`boost`::
Sets the boost value of the whole query. Defaults to `1.0`.

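Putting the query formation parameters together, a sketch that reuses the
`imdb` document reference from the earlier examples; all values here are
illustrative only:

[source,js]
--------------------------------------------------
GET /_search
{
    "query": {
        "more_like_this" : {
            "fields" : ["title", "description"],
            "like" : [
            {
                "_index" : "imdb",
                "_id" : "1"
            }
            ],
            "min_term_freq" : 1,
            "max_query_terms" : 12,
            "minimum_should_match" : "50%",
            "boost_terms" : 1,
            "include" : true,
            "boost" : 1.2
        }
    }
}
--------------------------------------------------
// CONSOLE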