
Reindex: Clean up docs around multi-index

We have an example in `reindex`'s docs about copying from many indices
at once. It doesn't work at the moment because we only allow a single
type per index. We didn't notice it in the docs tests because those
tests didn't copy any documents. This change:
1. Adds documents to the docs tests to fully exercise the snippet.
2. Fixes the example by moving all copied documents to the same type.
3. Moves the note about ID collisions and expands on it because collisions
are even more likely than before.

Closes #35150
Nik Everett, 7 years ago
commit 59a43180a5
1 changed file with 10 additions and 9 deletions:
  docs/reference/docs/reindex.asciidoc (+10 −9)

--- a/docs/reference/docs/reindex.asciidoc
+++ b/docs/reference/docs/reindex.asciidoc

@@ -164,13 +164,7 @@ POST _reindex
 
 `index` and `type` in `source` can both be lists, allowing you to copy from
 lots of sources in one request. This will copy documents from the `_doc` and
-`post` types in the `twitter` and `blog` index. The copied documents would include the
-`post` type in the `twitter` index and the `_doc` type in the `blog` index. For more
-specific parameters, you can use `query`.
-
-The Reindex API makes no effort to handle ID collisions. For such issues, the target index
-will remain valid, but it's not easy to predict which document will survive because
-the iteration order isn't well defined.
+`post` types in the `twitter` and `blog` indices.
 
 [source,js]
 --------------------------------------------------
@@ -181,12 +175,19 @@ POST _reindex
     "type": ["_doc", "post"]
   },
   "dest": {
-    "index": "all_together"
+    "index": "all_together",
+    "type": "_doc"
   }
 }
 --------------------------------------------------
 // CONSOLE
-// TEST[s/^/PUT twitter\nPUT blog\n/]
+// TEST[setup:twitter]
+// TEST[s/^/PUT blog\/post\/post1?refresh\n{"test": "foo"}\n/]
+
+NOTE: The Reindex API makes no effort to handle ID collisions: the last
+document written will "win", but the order isn't usually predictable, so it
+is not a good idea to rely on this behavior. Instead, make sure that IDs are
+unique using a script.
 
 It's also possible to limit the number of processed documents by setting
 `size`. This will only copy a single document from `twitter` to
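As a sketch of the script approach the new note recommends (illustrative only, not part of the commit above): one way to guarantee unique IDs is to prefix each copied document's ID with the name of its source index, so documents that share an ID in `twitter` and `blog` can no longer collide in `all_together`. This assumes the same indices and types as the snippet in the diff:

[source,js]
--------------------------------------------------
POST _reindex
{
  "source": {
    "index": ["twitter", "blog"],
    "type": ["_doc", "post"]
  },
  "dest": {
    "index": "all_together",
    "type": "_doc"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._id = ctx._index + '-' + ctx._id"
  }
}
--------------------------------------------------

Reindex scripts are allowed to change metadata fields such as `ctx._id`, so this rewrites each document's ID to something like `twitter-1` or `blog-post1` as it is copied, namespacing the IDs by source index.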