
[role="xpack"]
[[repo-analysis-api]]
=== Repository analysis API
++++
<titleabbrev>Repository analysis</titleabbrev>
++++

Analyzes a repository, reporting its performance characteristics and any
incorrect behaviour found.

////
[source,console]
----
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
----
// TESTSETUP
////

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s
----

[[repo-analysis-api-request]]
==== {api-request-title}

`POST /_snapshot/<repository>/_analyze`

[[repo-analysis-api-prereqs]]
==== {api-prereq-title}

* If the {es} {security-features} are enabled, you must have the `manage`
<<privileges-list-cluster,cluster privilege>> to use this API. For more
information, see <<security-privileges>>.

* If the <<operator-privileges,{operator-feature}>> is enabled, only operator
users can use this API.

[[repo-analysis-api-desc]]
==== {api-description-title}

There are a large number of third-party storage systems available, not all of
which are suitable for use as a snapshot repository by {es}. Some storage
systems behave incorrectly, or perform poorly, especially when accessed
concurrently by multiple clients as the nodes of an {es} cluster do.

The Repository analysis API performs a collection of read and write operations
on your repository which are designed to detect incorrect behaviour and to
measure the performance characteristics of your storage system.

The default values for the parameters to this API are deliberately low to
reduce the impact of running an analysis inadvertently and to provide a
sensible starting point for your investigations. Run your first analysis with
the default parameter values to check for simple problems. If successful, run a
sequence of increasingly large analyses until you encounter a failure or you
reach a `blob_count` of at least `2000`, a `max_blob_size` of at least `2gb`, a
`max_total_data_size` of at least `1tb`, and a `register_operation_count` of at
least `100`. Always specify a generous timeout, possibly `1h` or longer, to
allow time for each analysis to run to completion. Perform the analyses using a
multi-node cluster of a similar size to your production cluster so that it can
detect any problems that only arise when the repository is accessed by many
nodes at once.

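For example, a larger analysis towards the end of such a sequence might look
like the following request. The parameter values shown here are only
illustrative; scale them to suit your own investigation.

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=2000&max_blob_size=2gb&max_total_data_size=1tb&register_operation_count=100&timeout=2h
----
// TEST[skip:this example analysis is too large and slow to run in the docs tests]
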
If the analysis fails then {es} detected that your repository behaved
unexpectedly. This usually means you are using a third-party storage system
with an incorrect or incompatible implementation of the API it claims to
support. If so, this storage system is not suitable for use as a snapshot
repository. You will need to work with the supplier of your storage system to
address the incompatibilities that {es} detects. See
<<self-managed-repo-types>> for more information.

If the analysis is successful this API returns details of the testing process,
optionally including how long each operation took. You can use this information
to determine the performance of your storage system. If any operation fails or
returns an incorrect result, this API returns an error. If the API returns an
error then it may not have removed all the data it wrote to the repository. The
error will indicate the location of any leftover data, and this path is also
recorded in the {es} logs. You should verify yourself that this location has
been cleaned up correctly. If there is still leftover data at the specified
location then you should manually remove it.

If the connection from your client to {es} is closed while the client is
waiting for the result of the analysis then the test is cancelled. Some clients
are configured to close their connection if no response is received within a
certain timeout. An analysis takes a long time to complete so you may need to
relax any such client-side timeouts. On cancellation the analysis attempts to
clean up the data it was writing, but it may not be able to remove it all. The
path to the leftover data is recorded in the {es} logs. You should verify
yourself that this location has been cleaned up correctly. If there is still
leftover data at the specified location then you should manually remove it.

If the analysis is successful then it detected no incorrect behaviour, but this
does not mean that correct behaviour is guaranteed. The analysis attempts to
detect common bugs but it certainly does not offer 100% coverage. Additionally,
it does not test the following:

- Your repository must perform durable writes. Once a blob has been written it
must remain in place until it is deleted, even after a power loss or similar
disaster.

- Your repository must not suffer from silent data corruption. Once a blob has
been written its contents must remain unchanged until it is deliberately
modified or deleted.

- Your repository must behave correctly even if connectivity from the cluster
is disrupted. Reads and writes may fail in this case, but they must not return
incorrect results.

IMPORTANT: An analysis writes a substantial amount of data to your repository
and then reads it back again. This consumes bandwidth on the network between
the cluster and the repository, and storage space and IO bandwidth on the
repository itself. You must ensure this load does not affect other users of
these systems. Analyses respect the repository settings
`max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available, and
the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to
limit the bandwidth they consume.

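For instance, if the repository shares infrastructure with other workloads, you
might register it with explicit throttle settings before running any analyses.
The limits shown here are purely illustrative; choose values that suit your own
environment.

[source,console]
----
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "max_snapshot_bytes_per_sec": "100mb",
    "max_restore_bytes_per_sec": "100mb"
  }
}
----
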
NOTE: This API is intended for exploratory use by humans. You should expect the
request parameters and the response format to vary in future versions.

NOTE: Different versions of {es} may perform different checks for repository
compatibility, with newer versions typically being stricter than older ones. A
storage system that passes repository analysis with one version of {es} may
fail with a different version. This indicates it behaves incorrectly in ways
that the former version did not detect. You must work with the supplier of your
storage system to address the incompatibilities detected by the repository
analysis API in any version of {es}.

NOTE: This API may not work correctly in a mixed-version cluster.

==== Implementation details

NOTE: This section of documentation describes how the Repository analysis API
works in this version of {es}, but you should expect the implementation to vary
between versions. The request parameters and response format depend on details
of the implementation so may also be different in newer versions.

The analysis comprises a number of blob-level tasks, as set by the `blob_count`
parameter, and a number of compare-and-exchange operations on linearizable
registers, as set by the `register_operation_count` parameter. These tasks are
distributed over the data and master-eligible nodes in the cluster for
execution.

For most blob-level tasks, the executing node first writes a blob to the
repository, and then instructs some of the other nodes in the cluster to
attempt to read the data it just wrote. The size of the blob is chosen
randomly, according to the `max_blob_size` and `max_total_data_size`
parameters. If any of these reads fails then the repository does not implement
the necessary read-after-write semantics that {es} requires.

For some blob-level tasks, the executing node will instruct some of its peers
to attempt to read the data before the writing process completes. These reads
are permitted to fail, but must not return partial data. If any read returns
partial data then the repository does not implement the necessary atomicity
semantics that {es} requires.

For some blob-level tasks, the executing node will overwrite the blob while its
peers are reading it. In this case the data read may come from either the
original or the overwritten blob, but the read operation must not return
partial data or a mix of data from the two blobs. If any of these reads returns
partial data or a mix of the two blobs then the repository does not implement
the necessary atomicity semantics that {es} requires for overwrites.

The executing node will use a variety of different methods to write the blob.
For instance, where applicable, it will use both single-part and multi-part
uploads. Similarly, the reading nodes will use a variety of different methods
to read the data back again. For instance they may read the entire blob from
start to end, or may read only a subset of the data.

For some blob-level tasks, the executing node will abort the write before it is
complete. In this case it still instructs some of the other nodes in the
cluster to attempt to read the blob, but all of these reads must fail to find
the blob.

[[repo-analysis-api-path-params]]
==== {api-path-parms-title}

`<repository>`::
(Required, string)
Name of the snapshot repository to test.

[[repo-analysis-api-query-params]]
==== {api-query-parms-title}

`blob_count`::
(Optional, integer) The total number of blobs to write to the repository during
the test. Defaults to `100`. For realistic experiments you should set this to
at least `2000`.

`max_blob_size`::
(Optional, <<size-units, size units>>) The maximum size of a blob to be written
during the test. Defaults to `10mb`. For realistic experiments you should set
this to at least `2gb`.

`max_total_data_size`::
(Optional, <<size-units, size units>>) An upper limit on the total size of all
the blobs written during the test. Defaults to `1gb`. For realistic experiments
you should set this to at least `1tb`.

`register_operation_count`::
(Optional, integer) The minimum number of linearizable register operations to
perform in total. Defaults to `10`. For realistic experiments you should set
this to at least `100`.

`timeout`::
(Optional, <<time-units, time units>>) Specifies the period of time to wait for
the test to complete. If no response is received before the timeout expires,
the test is cancelled and returns an error. Defaults to `30s`.

===== Advanced query parameters

The following parameters allow additional control over the analysis, but you
will usually not need to adjust them.

`concurrency`::
(Optional, integer) The number of write operations to perform concurrently.
Defaults to `10`.

`read_node_count`::
(Optional, integer) The number of nodes on which to perform a read operation
after writing each blob. Defaults to `10`.

`early_read_node_count`::
(Optional, integer) The number of nodes on which to perform an early read
operation while writing each blob. Defaults to `2`. Early read operations are
only rarely performed.

`rare_action_probability`::
(Optional, double) The probability of performing a rare action (an early read,
an overwrite, or an aborted write) on each blob. Defaults to `0.02`.

`seed`::
(Optional, integer) The seed for the pseudo-random number generator used to
generate the list of operations performed during the test. To repeat the same
set of operations in multiple experiments, use the same seed in each
experiment, as shown in the example after this list. Note that the operations
are performed concurrently so may not always happen in the same order on each
run.

`detailed`::
(Optional, boolean) Whether to return detailed results, including timing
information for every operation performed during the analysis. Defaults to
`false`, meaning to return only a summary of the analysis.

`rarely_abort_writes`::
(Optional, boolean) Whether to rarely abort some write requests. Defaults to
`true`.

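For example, to repeat a particular sequence of operations and capture timing
information for each one, you might fix the seed and request detailed results.
The seed value here is arbitrary.

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&seed=23&detailed=true
----
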
[role="child_attributes"]
[[repo-analysis-api-response-body]]
==== {api-response-body-title}

The response exposes implementation details of the analysis which may change
from version to version. The response body format is therefore not considered
stable and may be different in newer versions.

`coordinating_node`::
(object)
Identifies the node which coordinated the analysis and performed the final
cleanup.
+
.Properties of `coordinating_node`
[%collapsible%open]
====
`id`::
(string)
The id of the coordinating node.

`name`::
(string)
The name of the coordinating node.
====

`repository`::
(string)
The name of the repository that was the subject of the analysis.

`blob_count`::
(integer)
The number of blobs written to the repository during the test, equal to the
`?blob_count` request parameter.

`concurrency`::
(integer)
The number of write operations performed concurrently during the test, equal to
the `?concurrency` request parameter.

`read_node_count`::
(integer)
The limit on the number of nodes on which read operations were performed after
writing each blob, equal to the `?read_node_count` request parameter.

`early_read_node_count`::
(integer)
The limit on the number of nodes on which early read operations were performed
after writing each blob, equal to the `?early_read_node_count` request
parameter.

`max_blob_size`::
(string)
The limit on the size of a blob written during the test, equal to the
`?max_blob_size` parameter.

`max_blob_size_bytes`::
(long)
The limit, in bytes, on the size of a blob written during the test, equal to
the `?max_blob_size` parameter.

`max_total_data_size`::
(string)
The limit on the total size of all blobs written during the test, equal to the
`?max_total_data_size` parameter.

`max_total_data_size_bytes`::
(long)
The limit, in bytes, on the total size of all blobs written during the test,
equal to the `?max_total_data_size` parameter.

`seed`::
(long)
The seed for the pseudo-random number generator used to generate the operations
used during the test. Equal to the `?seed` request parameter if set.

`rare_action_probability`::
(double)
The probability of performing rare actions during the test. Equal to the
`?rare_action_probability` request parameter.

`blob_path`::
(string)
The path in the repository under which all the blobs were written during the
test.

`issues_detected`::
(list)
A list of correctness issues detected, which will be empty if the API
succeeded. Included to emphasize that a successful response does not guarantee
correct behaviour in future.

`summary`::
(object)
A collection of statistics that summarise the results of the test.
+
.Properties of `summary`
[%collapsible%open]
====
`write`::
(object)
A collection of statistics that summarise the results of the write operations
in the test.
+
.Properties of `write`
[%collapsible%open]
=====
`count`::
(integer)
The number of write operations performed in the test.

`total_size`::
(string)
The total size of all the blobs written in the test.

`total_size_bytes`::
(long)
The total size of all the blobs written in the test, in bytes.

`total_throttled`::
(string)
The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle.

`total_throttled_nanos`::
(long)
The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle,
in nanoseconds.

`total_elapsed`::
(string)
The total elapsed time spent on writing blobs in the test.

`total_elapsed_nanos`::
(long)
The total elapsed time spent on writing blobs in the test, in nanoseconds.
=====

`read`::
(object)
A collection of statistics that summarise the results of the read operations in
the test.
+
.Properties of `read`
[%collapsible%open]
=====
`count`::
(integer)
The number of read operations performed in the test.

`total_size`::
(string)
The total size of all the blobs or partial blobs read in the test.

`total_size_bytes`::
(long)
The total size of all the blobs or partial blobs read in the test, in bytes.

`total_wait`::
(string)
The total time spent waiting for the first byte of each read request to be
received.

`total_wait_nanos`::
(long)
The total time spent waiting for the first byte of each read request to be
received, in nanoseconds.

`max_wait`::
(string)
The maximum time spent waiting for the first byte of any read request to be
received.

`max_wait_nanos`::
(long)
The maximum time spent waiting for the first byte of any read request to be
received, in nanoseconds.

`total_throttled`::
(string)
The total time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles.

`total_throttled_nanos`::
(long)
The total time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles, in nanoseconds.

`total_elapsed`::
(string)
The total elapsed time spent on reading blobs in the test.

`total_elapsed_nanos`::
(long)
The total elapsed time spent on reading blobs in the test, in nanoseconds.
=====
====

`details`::
(array)
A description of every read and write operation performed during the test. This
is only returned if the `?detailed` request parameter is set to `true`.
+
.Properties of items within `details`
[%collapsible]
====
`blob`::
(object)
A description of the blob that was written and read.
+
.Properties of `blob`
[%collapsible%open]
=====
`name`::
(string)
The name of the blob.

`size`::
(string)
The size of the blob.

`size_bytes`::
(long)
The size of the blob in bytes.

`read_start`::
(long)
The position, in bytes, at which read operations started.

`read_end`::
(long)
The position, in bytes, at which read operations completed.

`read_early`::
(boolean)
Whether any read operations were started before the write operation completed.

`overwritten`::
(boolean)
Whether the blob was overwritten while the read operations were ongoing.
=====

`writer_node`::
(object)
Identifies the node which wrote this blob and coordinated the read operations.
+
.Properties of `writer_node`
[%collapsible%open]
=====
`id`::
(string)
The id of the writer node.

`name`::
(string)
The name of the writer node.
=====

`write_elapsed`::
(string)
The elapsed time spent writing this blob.

`write_elapsed_nanos`::
(long)
The elapsed time spent writing this blob, in nanoseconds.

`overwrite_elapsed`::
(string)
The elapsed time spent overwriting this blob. Omitted if the blob was not
overwritten.

`overwrite_elapsed_nanos`::
(long)
The elapsed time spent overwriting this blob, in nanoseconds. Omitted if the
blob was not overwritten.

`write_throttled`::
(string)
The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or
`indices.recovery.max_bytes_per_sec` if the
<<recovery-settings-for-managed-services,recovery settings for managed services>>
are set) throttle while writing this blob.

`write_throttled_nanos`::
(long)
The length of time spent waiting for the `max_snapshot_bytes_per_sec` (or
`indices.recovery.max_bytes_per_sec` if the
<<recovery-settings-for-managed-services,recovery settings for managed services>>
are set) throttle while writing this blob, in nanoseconds.

`reads`::
(array)
A description of every read operation performed on this blob.
+
.Properties of items within `reads`
[%collapsible%open]
=====
`node`::
(object)
Identifies the node which performed the read operation.
+
.Properties of `node`
[%collapsible%open]
======
`id`::
(string)
The id of the reader node.

`name`::
(string)
The name of the reader node.
======

`before_write_complete`::
(boolean)
Whether the read operation may have started before the write operation was
complete. Omitted if `false`.

`found`::
(boolean)
Whether the blob was found by this read operation or not. May be `false` if the
read was started before the write completed, or the write was aborted before
completion.

`first_byte_time`::
(string)
The length of time waiting for the first byte of the read operation to be
received. Omitted if the blob was not found.

`first_byte_time_nanos`::
(long)
The length of time waiting for the first byte of the read operation to be
received, in nanoseconds. Omitted if the blob was not found.

`elapsed`::
(string)
The length of time spent reading this blob. Omitted if the blob was not found.

`elapsed_nanos`::
(long)
The length of time spent reading this blob, in nanoseconds. Omitted if the blob
was not found.

`throttled`::
(string)
The length of time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles during the read of this blob.
Omitted if the blob was not found.

`throttled_nanos`::
(long)
The length of time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles during the read of this blob, in
nanoseconds. Omitted if the blob was not found.
=====
====

`listing_elapsed`::
(string)
The time it took to retrieve a list of all the blobs in the container.

`listing_elapsed_nanos`::
(long)
The time it took to retrieve a list of all the blobs in the container, in
nanoseconds.

`delete_elapsed`::
(string)
The time it took to delete all the blobs in the container.

`delete_elapsed_nanos`::
(long)
The time it took to delete all the blobs in the container, in nanoseconds.