[role="xpack"]
[[repo-analysis-api]]
=== Repository analysis API
++++
<titleabbrev>Repository analysis</titleabbrev>
++++

Analyzes a repository, reporting its performance characteristics and any
incorrect behaviour found.

////
[source,console]
----
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
----
// TESTSETUP
////

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s
----

[[repo-analysis-api-request]]
==== {api-request-title}

`POST /_snapshot/<repository>/_analyze`

[[repo-analysis-api-desc]]
==== {api-description-title}

There are a large number of third-party storage systems
available, not all of which are suitable for use as a snapshot repository by
{es}. Some storage systems behave incorrectly, or perform poorly, especially
when accessed concurrently by multiple clients as the nodes of an {es} cluster
do.

The Repository analysis API performs a collection of read and write operations
on your repository which are designed to detect incorrect behaviour and to
measure the performance characteristics of your storage system.

The default values for the parameters to this API are deliberately low to
reduce the impact of running an analysis. A realistic experiment should set
`blob_count` to at least `2000`, `max_blob_size` to at least `2gb`, and
`max_total_data_size` to at least `1tb`, and will almost certainly need to
increase the `timeout` to allow time for the process to complete successfully.
You should run the analysis on a multi-node cluster of a similar size to your
production cluster so that it can detect any problems that only arise when the
repository is accessed by many nodes at once.

If the analysis is successful this API returns details of the testing process,
optionally including how long each operation took. You can use this information
to determine the performance of your storage system. If any operation fails or
returns an incorrect result, this API returns an error. If the API returns an
error then it may not have removed all the data it wrote to the repository. The
error will indicate the location of any leftover data, and this path is also
recorded in the {es} logs. You should verify yourself that this location has
been cleaned up correctly. If there is still leftover data at the specified
location then you should manually remove it.

If the connection from your client to {es} is closed while the client is
waiting for the result of the analysis then the test is cancelled. Some clients
are configured to close their connection if no response is received within a
certain timeout.
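A realistic analysis based on the recommended minimums above might therefore
look like the following request. The two-hour `timeout` is only an illustrative
value; choose one that is long enough for your repository to complete the test.

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=2000&max_blob_size=2gb&max_total_data_size=1tb&timeout=7200s
----
// TEST[skip:this analysis is too slow and expensive to run in CI]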
An analysis takes a long time to complete so you may need to
relax any such client-side timeouts. On cancellation the analysis attempts to
clean up the data it was writing, but it may not be able to remove it all. The
path to the leftover data is recorded in the {es} logs. You should verify
yourself that this location has been cleaned up correctly. If there is still
leftover data at the specified location then you should manually remove it.

If the analysis is successful then it detected no incorrect behaviour, but this
does not mean that correct behaviour is guaranteed. The analysis attempts to
detect common bugs but it certainly does not offer 100% coverage. Additionally,
it does not test the following:

- Your repository must perform durable writes. Once a blob has been written it
  must remain in place until it is deleted, even after a power loss or similar
  disaster.

- Your repository must not suffer from silent data corruption. Once a blob has
  been written its contents must remain unchanged until it is deliberately
  modified or deleted.

- Your repository must behave correctly even if connectivity from the cluster
  is disrupted. Reads and writes may fail in this case, but they must not
  return incorrect results.

IMPORTANT: An analysis writes a substantial amount of data to your repository
and then reads it back again. This consumes bandwidth on the network between
the cluster and the repository, and storage space and IO bandwidth on the
repository itself. You must ensure this load does not affect other users of
these systems. Analyses respect the repository settings
`max_snapshot_bytes_per_sec` and `max_restore_bytes_per_sec` if available, and
the cluster setting `indices.recovery.max_bytes_per_sec` which you can use to
limit the bandwidth they consume.

NOTE: This API is intended for exploratory use by humans.
You should expect the
request parameters and the response format to vary in future versions.

NOTE: This API may not work correctly in a mixed-version cluster.

==== Implementation details

NOTE: This section of documentation describes how the Repository analysis API
works in this version of {es}, but you should expect the implementation to vary
between versions. The request parameters and response format depend on details
of the implementation so may also be different in newer versions.

The analysis comprises a number of blob-level tasks, as set by the `blob_count`
parameter. The blob-level tasks are distributed over the data and
master-eligible nodes in the cluster for execution.

For most blob-level tasks, the executing node first writes a blob to the
repository, and then instructs some of the other nodes in the cluster to
attempt to read the data it just wrote. The size of the blob is chosen
randomly, according to the `max_blob_size` and `max_total_data_size`
parameters. If any of these reads fails then the repository does not implement
the necessary read-after-write semantics that {es} requires.

For some blob-level tasks, the executing node will instruct some of its peers
to attempt to read the data before the writing process completes. These reads
are permitted to fail, but must not return partial data. If any read returns
partial data then the repository does not implement the necessary atomicity
semantics that {es} requires.

For some blob-level tasks, the executing node will overwrite the blob while its
peers are reading it. In this case the data read may come from either the
original or the overwritten blob, but must not return partial data or a mix of
data from the two blobs.
If any of these reads returns partial data or a mix of
the two blobs then the repository does not implement the necessary atomicity
semantics that {es} requires for overwrites.

The executing node will use a variety of different methods to write the blob.
For instance, where applicable, it will use both single-part and multi-part
uploads. Similarly, the reading nodes will use a variety of different methods
to read the data back again. For instance they may read the entire blob from
start to end, or may read only a subset of the data.

[[repo-analysis-api-path-params]]
==== {api-path-parms-title}

`<repository>`::
(Required, string)
Name of the snapshot repository to test.

[[repo-analysis-api-query-params]]
==== {api-query-parms-title}

`blob_count`::
(Optional, integer) The total number of blobs to write to the repository during
the test. Defaults to `100`. For realistic experiments you should set this to
at least `2000`.

`max_blob_size`::
(Optional, <<size-units, size units>>) The maximum size of a blob to be written
during the test. Defaults to `10mb`. For realistic experiments you should set
this to at least `2gb`.

`max_total_data_size`::
(Optional, <<size-units, size units>>) An upper limit on the total size of all
the blobs written during the test. Defaults to `1gb`. For realistic experiments
you should set this to at least `1tb`.

`timeout`::
(Optional, <<time-units, time units>>) Specifies the period of time to wait for
the test to complete. If no response is received before the timeout expires,
the test is cancelled and returns an error. Defaults to `30s`.

===== Advanced query parameters

The following parameters allow additional control over the analysis, but you
will usually not need to adjust them.

`concurrency`::
(Optional, integer) The number of write operations to perform concurrently.
Defaults to `10`.

`read_node_count`::
(Optional, integer) The number of nodes on which to perform a read operation
after writing each blob.
Defaults to `10`.

`early_read_node_count`::
(Optional, integer) The number of nodes on which to perform an early read
operation while writing each blob. Defaults to `2`. Early read operations are
only rarely performed.

`rare_action_probability`::
(Optional, double) The probability of performing a rare action (an early read
or an overwrite) on each blob. Defaults to `0.02`.

`seed`::
(Optional, integer) The seed for the pseudo-random number generator used to
generate the list of operations performed during the test. To repeat the same
set of operations in multiple experiments, use the same seed in each
experiment. Note that the operations are performed concurrently so may not
always happen in the same order on each run.

`detailed`::
(Optional, boolean) Whether to return detailed results, including timing
information for every operation performed during the analysis. Defaults to
`false`, meaning to return only a summary of the analysis.

[role="child_attributes"]
[[repo-analysis-api-response-body]]
==== {api-response-body-title}

The response exposes implementation details of the analysis which may change
from version to version.
The response body format is therefore not considered
stable and may be different in newer versions.

`coordinating_node`::
(object)
Identifies the node which coordinated the analysis and performed the final
cleanup.
+
.Properties of `coordinating_node`
[%collapsible%open]
====
`id`::
(string)
The id of the coordinating node.

`name`::
(string)
The name of the coordinating node.
====

`repository`::
(string)
The name of the repository that was the subject of the analysis.

`blob_count`::
(integer)
The number of blobs written to the repository during the test, equal to the
`?blob_count` request parameter.

`concurrency`::
(integer)
The number of write operations performed concurrently during the test, equal to
the `?concurrency` request parameter.

`read_node_count`::
(integer)
The limit on the number of nodes on which read operations were performed after
writing each blob, equal to the `?read_node_count` request parameter.

`early_read_node_count`::
(integer)
The limit on the number of nodes on which early read operations were performed
after writing each blob, equal to the `?early_read_node_count` request
parameter.

`max_blob_size`::
(string)
The limit on the size of a blob written during the test, equal to the
`?max_blob_size` parameter.

`max_total_data_size`::
(string)
The limit on the total size of all blobs written during the test, equal to the
`?max_total_data_size` parameter.

`seed`::
(long)
The seed for the pseudo-random number generator used to generate the operations
used during the test. Equal to the `?seed` request parameter if set.

`rare_action_probability`::
(double)
The probability of performing rare actions during the test. Equal to the
`?rare_action_probability` request parameter.

`blob_path`::
(string)
The path in the repository under which all the blobs were written during the
test.

`issues_detected`::
(list)
A list of correctness issues detected, which will be empty if the API
succeeded.
Included to emphasize that a successful response does not guarantee
correct behaviour in future.

`summary`::
(object)
A collection of statistics that summarise the results of the test.
+
.Properties of `summary`
[%collapsible%open]
====
`write`::
(object)
A collection of statistics that summarise the results of the write operations
in the test.
+
.Properties of `write`
[%collapsible%open]
=====
`count`::
(integer)
The number of write operations performed in the test.

`total_bytes`::
(integer)
The total size of all the blobs written in the test, in bytes.

`total_throttled_nanos`::
(integer)
The total time spent waiting due to the `max_snapshot_bytes_per_sec` throttle,
in nanoseconds.

`total_elapsed_nanos`::
(integer)
The total elapsed time spent on writing blobs in the test, in nanoseconds.
=====

`read`::
(object)
A collection of statistics that summarise the results of the read operations in
the test.
+
.Properties of `read`
[%collapsible%open]
=====
`count`::
(integer)
The number of read operations performed in the test.

`total_bytes`::
(integer)
The total size of all the blobs or partial blobs read in the test, in bytes.

`total_throttled_nanos`::
(integer)
The total time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles, in nanoseconds.

`total_wait_nanos`::
(integer)
The total time spent waiting for the first byte of each read request to be
received, in nanoseconds.

`max_wait_nanos`::
(integer)
The maximum time spent waiting for the first byte of any read request to be
received, in nanoseconds.

`total_elapsed_nanos`::
(integer)
The total elapsed time spent on reading blobs in the test, in nanoseconds.
=====
====

`details`::
(array)
A description of every read and write operation performed during the test.
This is
only returned if the `?detailed` request parameter is set to `true`.
+
.Properties of items within `details`
[%collapsible]
====
`blob`::
(object)
A description of the blob that was written and read.
+
.Properties of `blob`
[%collapsible%open]
=====
`name`::
(string)
The name of the blob.

`size`::
(long)
The size of the blob in bytes.

`read_start`::
(long)
The position, in bytes, at which read operations started.

`read_end`::
(long)
The position, in bytes, at which read operations completed.

`read_early`::
(boolean)
Whether any read operations were started before the write operation completed.

`overwritten`::
(boolean)
Whether the blob was overwritten while the read operations were ongoing.
=====

`writer_node`::
(object)
Identifies the node which wrote this blob and coordinated the read operations.
+
.Properties of `writer_node`
[%collapsible%open]
=====
`id`::
(string)
The id of the writer node.

`name`::
(string)
The name of the writer node.
=====

`write_elapsed_nanos`::
(long)
The elapsed time spent writing this blob, in nanoseconds.

`overwrite_elapsed_nanos`::
(long)
The elapsed time spent overwriting this blob, in nanoseconds. Omitted if the
blob was not overwritten.

`write_throttled_nanos`::
(long)
The length of time spent waiting for the `max_snapshot_bytes_per_sec` throttle
while writing this blob, in nanoseconds.

`reads`::
(array)
A description of every read operation performed on this blob.
+
.Properties of items within `reads`
[%collapsible%open]
=====
`node`::
(object)
Identifies the node which performed the read operation.
+
.Properties of `node`
[%collapsible%open]
======
`id`::
(string)
The id of the reader node.

`name`::
(string)
The name of the reader node.
======

`before_write_complete`::
(boolean)
Whether the read operation may have started before the write operation was
complete. Omitted if `false`.

`found`::
(boolean)
Whether the blob was found by this read operation or not.
May be `false` if the
read was started before the write completed.

`first_byte_nanos`::
(long)
The length of time waiting for the first byte of the read operation to be
received, in nanoseconds. Omitted if the blob was not found.

`elapsed_nanos`::
(long)
The length of time spent reading this blob, in nanoseconds. Omitted if the blob
was not found.

`throttled_nanos`::
(long)
The length of time spent waiting due to the `max_restore_bytes_per_sec` or
`indices.recovery.max_bytes_per_sec` throttles during the read of this blob, in
nanoseconds. Omitted if the blob was not found.
=====
====
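The `details` array described above is only included on request. A sketch of
such a request, reusing the illustrative parameter values from the opening
example and adding a fixed `seed` so that repeated experiments perform the same
set of operations:

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s&detailed=true&seed=23
----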