[role="xpack"]
[testenv="basic"]
[[ilm-with-existing-indices]]
== Using {ilm-init} with existing indices

While it is recommended to use {ilm-init} to manage the index lifecycle from
start to finish, it may be useful to use {ilm-init} with existing indices,
particularly when transitioning from an alternative method of managing the index
lifecycle such as Curator, or when migrating from daily indices to
rollover-based indices. Such use cases are fully supported, but there are some
configuration differences from when {ilm-init} can manage the complete index
lifecycle.

This section describes strategies to leverage {ilm-init} for existing periodic
indices when migrating to fully {ilm-init}-managed indices, which can be done in
a few different ways, each providing different tradeoffs. As an example, we'll
walk through a use case of a very simple logging index with just a field for the
log message and a timestamp.

First, we need to create a template for these indices:

[source,console]
-----------------------
PUT _template/mylogs_template
{
  "index_patterns": [
    "mylogs-*"
  ],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      },
      "@timestamp": {
        "type": "date"
      }
    }
  }
}
-----------------------

And we'll ingest a few documents to create a few daily indices:

[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.24/_doc
{
  "@timestamp": "2019-06-24T10:34:00",
  "message": "this is one log message"
}
-----------------------
// TEST[continued]

[source,console]
-----------------------
POST mylogs-pre-ilm-2019.06.25/_doc
{
  "@timestamp": "2019-06-25T17:42:00",
  "message": "this is another log message"
}
-----------------------
// TEST[continued]

Now that we have these indices, we'll look at a few different ways of migrating
them to {ilm-init}.

[[ilm-with-existing-periodic-indices]]
=== Managing existing periodic indices with {ilm-init}

NOTE: The examples in this section assume daily indices as set up in
<<ilm-with-existing-indices,the previous section>>.

The simplest way to manage existing indices while transitioning to fully
{ilm-init}-managed indices is to allow all new indices to be fully managed by
{ilm-init} before attaching {ilm-init} policies to existing indices. To do this,
all new documents should be directed to {ilm-init}-managed indices - if you are
using Beats or Logstash data shippers, upgrading all of those shippers to
version 7.0.0 or higher will take care of that part for you. If you are not
using Beats or Logstash, you may need to set up ILM for new indices yourself as
demonstrated in the <<getting-started-index-lifecycle-management,getting started
guide>>.

NOTE: If you are using Beats through Logstash, you may need to change your
Logstash output configuration and invoke the Beats setup to use ILM for new
data.

Once all new documents are being written to fully {ilm-init}-managed indices, it
is easy to add an {ilm-init} policy to existing indices. However, there are two
things to keep in mind when doing this, and a trick that makes those two things
much easier to handle.

The two biggest things to keep in mind are:

1. Existing periodic indices shouldn't use policies with rollover, because
rollover is used to manage where new data goes. Since existing indices should no
longer be receiving new documents, there is no point to using rollover for them.

2. {ilm-init} policies attached to existing indices will compare the `min_age`
for each phase to the original creation date of the index, and so might proceed
through multiple phases immediately.

The first one is the most important, because it makes it difficult to use the
same policy for new and existing periodic indices. But that's easy to solve
with one simple trick: Create a second policy for existing indices, in addition
to the one for new indices. {ilm-init} policies are cheap to create, so don't be
afraid to have more than one. Modifying a policy designed for new indices to be
used on existing indices is generally very simple: just remove the `rollover`
action.

For example, if you created a policy for your new indices with each phase
like so:

[source,console]
-----------------------
PUT _ilm/policy/mylogs_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "25GB"
          }
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
-----------------------
// TEST[continued]

You can create a policy for pre-existing indices by removing the `rollover`
action, and in this case, the `hot` phase is now empty so we can remove that
too:

[source,console]
-----------------------
PUT _ilm/policy/mylogs_policy_existing
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "1d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
-----------------------
// TEST[continued]

Creating a separate policy for existing indices will also allow using different
`min_age` values. You may want to use higher values to prevent many indices from
running through the policy at once, which may be important if your policy
includes potentially resource-intensive operations like force merge.
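
For instance, a staggered variant of the policy above might use later `warm` and
`cold` transitions for existing indices. The policy name and the specific
`min_age` values here are purely illustrative:

[source,console]
-----------------------
PUT _ilm/policy/mylogs_policy_existing_staggered
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "3d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "cold": {
        "min_age": "14d",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
-----------------------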

You can configure the lifecycle for many indices at once by using wildcards in
the index name when calling the <<indices-update-settings,Update Settings API>>
to set the policy name, but be careful that you don't include any indices that
you don't want to change the policy for:

[source,console]
-----------------------
PUT mylogs-pre-ilm*/_settings <1>
{
  "index": {
    "lifecycle": {
      "name": "mylogs_policy_existing"
    }
  }
}
-----------------------
// TEST[continued]
<1> This pattern will match all indices with names that start with
`mylogs-pre-ilm`
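
If you want to double-check which indices a pattern matches before applying the
setting, one option is to list them first with the same pattern, for example
using the cat indices API:

[source,console]
-----------------------
GET _cat/indices/mylogs-pre-ilm*?v
-----------------------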

Once all pre-{ilm-init} indices have aged out and been deleted, the policy for
older periodic indices can be deleted.
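
At that point, removing the now-unused policy is a single call to the delete
lifecycle policy API; for example, for the policy created above:

[source,console]
-----------------------
DELETE _ilm/policy/mylogs_policy_existing
-----------------------

Note that a policy can only be deleted once it is no longer in use by any index,
so this call will fail while any of the old indices still reference it.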

[[ilm-reindexing-into-rollover]]
=== Reindexing via {ilm-init}

NOTE: The examples in this section assume daily indices as set up in
<<ilm-with-existing-indices,the previous section>>.

In some cases, it may be useful to reindex data into {ilm-init}-managed indices.
This is more complex than simply attaching policies to existing indices as
described in <<ilm-with-existing-periodic-indices,the previous section>>, and
requires pausing indexing during the reindexing process. However, this technique
may be useful in cases where periodic indices were created with very small
amounts of data, leading to excessive shard counts, or for indices which grow
steadily over time but have never been broken up into time-series indices,
leading to shards which are much too large. Both situations can cause
significant performance problems.

Before getting started with reindexing data, the new index structure should be
set up. For this section, we'll be using the same setup described in
<<ilm-with-existing-indices,{ilm-init} with existing indices>>.

First, we'll set up a policy with rollover. Any additional phases required can
be included as well, but for simplicity, we'll just use rollover:

[source,console]
-----------------------
PUT _ilm/policy/sample_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_size": "50G"
          }
        }
      }
    }
  }
}
-----------------------
// TEST[continued]

And now we'll update the index template for our indices to include the relevant
{ilm-init} settings:

[source,console]
-----------------------
PUT _template/mylogs_template
{
  "index_patterns": [
    "ilm-mylogs-*" <1>
  ],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index": {
      "lifecycle": {
        "name": "sample_policy", <2>
        "rollover_alias": "mylogs" <3>
      }
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      },
      "@timestamp": {
        "type": "date"
      }
    }
  }
}
-----------------------
// TEST[continued]
<1> The new index pattern has a prefix compared to the old one, which will
make it easier to reindex later
<2> The name of the policy we defined above
<3> The name of the alias we'll use to write to and query

And create the first index with the alias specified in the `rollover_alias`
setting in the index template:

[source,console]
-----------------------
PUT ilm-mylogs-000001
{
  "aliases": {
    "mylogs": {
      "is_write_index": true
    }
  }
}
-----------------------
// TEST[continued]

All new documents should be indexed via the `mylogs` alias at this point. If new
data is added to the old indices during the reindexing process, it may end up
only in the old indices and never be reindexed into the new ones.

NOTE: If you do not want to mix new data and old data in the new
{ilm-init}-managed indices, indexing of new data should be paused entirely while
the reindex completes. Mixing old and new data within one index is safe, but
keep in mind that the indices with mixed data should be retained in their
entirety until you are ready to delete both the old and new data.

By default, {ilm-init} only checks rollover conditions every 10 minutes. Under
normal indexing load, this usually works well, but during reindexing, indices
can grow very, very quickly. We'll need to set the poll interval to something
shorter to ensure that the new indices don't grow too large while waiting for
the rollover check:

[source,console]
-----------------------
PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "1m" <1>
  }
}
-----------------------
// TEST[skip:don't want to overwrite this setting for other tests]
<1> This tells ILM to check for rollover conditions every minute

We're now ready to reindex our data using the <<docs-reindex,reindex API>>. If
you have a timestamp or date field in your documents, as in this example, it may
be useful to specify that the documents should be sorted by that field - this
will mean that all documents in `ilm-mylogs-000001` come before all documents in
`ilm-mylogs-000002`, and so on. However, if this is not a requirement, omitting
the sort will allow the data to be reindexed more quickly.

NOTE: Sorting in reindex is deprecated, see
<<docs-reindex-api-request-body,reindex request body>>. Instead use timestamp
ranges to partition data in separate reindex runs.
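
As a sketch of that approach, each run could restrict the source with a range
query on the timestamp field and be repeated with successive ranges, instead of
the single sorted request shown below. The date boundaries here are only
illustrative:

[source,console]
-----------------------
POST _reindex
{
  "source": {
    "index": "mylogs-*",
    "query": {
      "range": {
        "@timestamp": {
          "gte": "2019-06-24",
          "lt": "2019-06-25" <1>
        }
      }
    }
  },
  "dest": {
    "index": "mylogs",
    "op_type": "create"
  }
}
-----------------------
<1> One day of data per run in this sketch; adjust the boundaries to whatever
partition size suits your data volume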

IMPORTANT: If your data uses document IDs generated by means other than
Elasticsearch's automatic ID generation, you may need to do additional
processing to ensure that the document IDs don't conflict during the reindex, as
documents will retain their original IDs. One way to do this is to use a
<<reindex-scripts,script>> in the reindex call to append the original index name
to the document ID.

[source,console]
-----------------------
POST _reindex
{
  "source": {
    "index": "mylogs-*", <1>
    "sort": { "@timestamp": "desc" }
  },
  "dest": {
    "index": "mylogs", <2>
    "op_type": "create" <3>
  }
}
-----------------------
// TEST[continued]
<1> This index pattern matches our existing indices. Using the prefix for
the new indices makes using this index pattern much easier.
<2> The alias set up above
<3> This option will cause the reindex to abort if it encounters multiple
documents with the same ID. This is optional, but recommended to prevent
accidentally overwriting documents if two documents from different indices
have the same ID.
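
If your documents do use client-generated IDs, a variant of the request above
can rewrite each ID with a small Painless script, as described in the important
note above. The exact ID format here is only an illustration:

[source,console]
-----------------------
POST _reindex
{
  "source": {
    "index": "mylogs-*"
  },
  "dest": {
    "index": "mylogs",
    "op_type": "create"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._id = ctx._id + '-' + ctx._index" <1>
  }
}
-----------------------
<1> Appends the original index name to each document ID, so that documents that
share an ID across the old indices no longer collide in the new ones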

Once this completes, indexing new data can be resumed, as long as all new
documents are indexed into the alias used above. All data, existing and new, can
be queried using that alias as well. We should also be sure to set the
{ilm-init} poll interval back to its default value, because keeping it set too
low can cause unnecessary load on the current master node:

[source,console]
-----------------------
PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": null
  }
}
-----------------------
// TEST[skip:don't want to overwrite this setting for other tests]

All of the reindexed data should now be accessible via the alias set up above,
in this case `mylogs`. Once you have verified that all the data has been
reindexed and is available in the new indices, the existing indices can be
safely removed.
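
For the example indices in this section, that could be done with the delete
index API; double-check that the pattern only matches the old, pre-{ilm-init}
indices before running it:

[source,console]
-----------------------
DELETE mylogs-pre-ilm-*
-----------------------

Depending on your cluster settings, wildcard deletion may be disallowed
(`action.destructive_requires_name`), in which case the old indices need to be
listed by name instead.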