[[query-dsl-mlt-query]]
=== More like this query
++++
<titleabbrev>More like this</titleabbrev>
++++

The More Like This Query finds documents that are "like" a given
set of documents. In order to do so, MLT selects a set of representative terms
of these input documents, forms a query using these terms, executes the query
and returns the results. The user controls the input documents, how the terms
should be selected and how the query is formed.

The simplest use case consists of asking for documents that are similar to a
provided piece of text. Here, we are asking for all movies that have some text
similar to "Once upon a time" in their "title" and in their "description"
fields, limiting the number of selected terms to 12.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this" : {
      "fields" : ["title", "description"],
      "like" : "Once upon a time",
      "min_term_freq" : 1,
      "max_query_terms" : 12
    }
  }
}
--------------------------------------------------

A more complicated use case consists of mixing texts with documents already
existing in the index. In this case, the syntax to specify a document is
similar to the one used in the <<docs-multi-get,Multi GET API>>.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this" : {
      "fields" : ["title", "description"],
      "like" : [
        {
          "_index" : "imdb",
          "_id" : "1"
        },
        {
          "_index" : "imdb",
          "_id" : "2"
        },
        "and potentially some more text here as well"
      ],
      "min_term_freq" : 1,
      "max_query_terms" : 12
    }
  }
}
--------------------------------------------------

Finally, users can mix some texts and a chosen set of documents, and can also
provide documents not necessarily present in the index. To provide documents not
present in the index, the syntax is similar to <<docs-termvectors-artificial-doc,artificial documents>>.

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this" : {
      "fields" : ["name.first", "name.last"],
      "like" : [
        {
          "_index" : "marvel",
          "doc" : {
            "name": {
              "first": "Ben",
              "last": "Grimm"
            },
            "_doc": "You got no idea what I'd... what I'd give to be invisible."
          }
        },
        {
          "_index" : "marvel",
          "_id" : "2"
        }
      ],
      "min_term_freq" : 1,
      "max_query_terms" : 12
    }
  }
}
--------------------------------------------------

==== How it Works

Suppose we wanted to find all documents similar to a given input document.
Obviously, the input document itself should be its best match for that type of
query. The reason for this is, according to the
link:https://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html[Lucene scoring formula],
mostly due to the terms with the highest tf-idf. Therefore, the terms of the input
document that have the highest tf-idf are good representatives of that
document, and could be used within a disjunctive query (or `OR`) to retrieve similar
documents. The MLT query simply extracts the text from the input document,
analyzes it, usually using the same analyzer at the field, then selects the
top K terms with the highest tf-idf to form a disjunctive query of these terms.
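
For intuition, the query that MLT builds internally is roughly a `bool` query
whose `should` clauses are the selected terms. A minimal sketch of what the
first example above might reduce to, assuming the hypothetical top terms were
`once`, `upon` and `time` (the real terms and their weights depend on the
analyzer and on index statistics):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "bool": {
      "should": [
        { "term": { "title": "once" } },
        { "term": { "title": "upon" } },
        { "term": { "title": "time" } }
      ],
      "minimum_should_match": "30%"
    }
  }
}
--------------------------------------------------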

IMPORTANT: The fields on which to perform MLT must be indexed and of type
`text` or `keyword`. Additionally, when using `like` with documents, either
`_source` must be enabled or the fields must be `stored` or store
`term_vector`. In order to speed up analysis, it could help to store term
vectors at index time.

For example, if we wish to perform MLT on the "title" and "tags.raw" fields,
we can explicitly store their `term_vector` at index time. We can still
perform MLT on the "description" and "tags" fields, as `_source` is enabled by
default, but there will be no speed up on analysis for these fields.

[source,console]
--------------------------------------------------
PUT /imdb
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "term_vector": "yes"
      },
      "description": {
        "type": "text"
      },
      "tags": {
        "type": "text",
        "fields" : {
          "raw": {
            "type" : "text",
            "analyzer": "keyword",
            "term_vector" : "yes"
          }
        }
      }
    }
  }
}
--------------------------------------------------
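
With this mapping in place, an MLT query can target the term-vector-enabled
fields directly. A minimal sketch, assuming a document with the hypothetical
ID `1` has already been indexed into `imdb`:

[source,console]
--------------------------------------------------
GET /imdb/_search
{
  "query": {
    "more_like_this": {
      "fields": ["title", "tags.raw"],
      "like": [
        {
          "_index": "imdb",
          "_id": "1"
        }
      ],
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}
--------------------------------------------------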

==== Parameters

The only required parameter is `like`; all other parameters have sensible
defaults. There are three types of parameters: one to specify the document
input, one for term selection, and one for query formation.

[float]
==== Document Input Parameters

[horizontal]
`like`::
The only *required* parameter of the MLT query is `like` and follows a
versatile syntax, in which the user can specify free form text and/or a single
or multiple documents (see examples above). The syntax to specify documents is
similar to the one used by the <<docs-multi-get,Multi GET API>>. When
specifying documents, the text is fetched from `fields` unless overridden in
each document request. The text is analyzed by the analyzer at the field, but
could also be overridden. The syntax to override the analyzer at the field
follows a similar syntax to the `per_field_analyzer` parameter of the
<<docs-termvectors-per-field-analyzer,Term Vectors API>>.
Additionally, to provide documents not necessarily present in the index,
<<docs-termvectors-artificial-doc,artificial documents>> are also supported.

`unlike`::
The `unlike` parameter is used in conjunction with `like` in order not to
select terms found in a chosen set of documents. In other words, we could ask
for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax
is the same as `like` (see the sketch after this list).

`fields`::
A list of fields to fetch and analyze the text from.
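
For example, a minimal sketch combining free text, an indexed document and
`unlike` text (the `imdb` index and document ID reuse the hypothetical
examples above):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like": [
        "Once upon a time",
        {
          "_index": "imdb",
          "_id": "1"
        }
      ],
      "unlike": ["cake crumble tree"],
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}
--------------------------------------------------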

[float]
[[mlt-query-term-selection]]
==== Term Selection Parameters

[horizontal]
`max_query_terms`::
The maximum number of query terms that will be selected. Increasing this value
gives greater accuracy at the expense of query execution speed. Defaults to
`25`.

`min_term_freq`::
The minimum term frequency below which the terms will be ignored from the
input document. Defaults to `2`.

`min_doc_freq`::
The minimum document frequency below which the terms will be ignored from the
input document. Defaults to `5`.

`max_doc_freq`::
The maximum document frequency above which the terms will be ignored from the
input document. This could be useful in order to ignore highly frequent words
such as stop words. Defaults to unbounded (`0`).

`min_word_length`::
The minimum word length below which the terms will be ignored. The old name
`min_word_len` is deprecated. Defaults to `0`.

`max_word_length`::
The maximum word length above which the terms will be ignored. The old name
`max_word_len` is deprecated. Defaults to unbounded (`0`).

`stop_words`::
An array of stop words. Any word in this set is considered "uninteresting" and
ignored. If the analyzer allows for stop words, you might want to tell MLT to
explicitly ignore them, as for the purposes of document similarity it seems
reasonable to assume that "a stop word is never interesting".

`analyzer`::
The analyzer that is used to analyze the free form text. Defaults to the
analyzer associated with the first field in `fields`.
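
A sketch of how the term selection parameters can be tuned together, for
example to drop rare terms, very common terms, short words and an explicit
list of stop words (the values below are purely illustrative):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this": {
      "fields": ["description"],
      "like": "Once upon a time",
      "max_query_terms": 25,
      "min_term_freq": 1,
      "min_doc_freq": 5,
      "max_doc_freq": 10000,
      "min_word_length": 3,
      "stop_words": ["a", "the", "of"]
    }
  }
}
--------------------------------------------------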

[float]
==== Query Formation Parameters

[horizontal]
`minimum_should_match`::
After the disjunctive query has been formed, this parameter controls the
number of terms that must match.
The syntax is the same as the <<query-dsl-minimum-should-match,minimum should match>>.
(Defaults to `"30%"`).

`fail_on_unsupported_field`::
Controls whether the query should fail (throw an exception) if any of the
specified fields are not of the supported types
(`text` or `keyword`). Set this to `false` to ignore the field and continue
processing. Defaults to `true`.

`boost_terms`::
Each term in the formed query could be further boosted by its tf-idf score.
This sets the boost factor to use when using this feature. Defaults to
deactivated (`0`). Any other positive value activates terms boosting with the
given boost factor.

`include`::
Specifies whether the input documents should also be included in the search
results returned. Defaults to `false`.

`boost`::
Sets the boost value of the whole query. Defaults to `1.0`.
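
Putting the query formation parameters together, a minimal sketch that
requires half of the selected terms to match, boosts terms by their tf-idf
score, keeps the input document in the results and boosts the whole query
(all values are illustrative only):

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like": [
        {
          "_index": "imdb",
          "_id": "1"
        }
      ],
      "minimum_should_match": "50%",
      "boost_terms": 1,
      "include": true,
      "boost": 2.0
    }
  }
}
--------------------------------------------------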