// tag::cloud[]
Fixing the corrupted repository will entail making changes in multiple deployments
that write to the same snapshot repository.
Only one deployment must be writing to a repository. The deployment
that will keep writing to the repository will be called the "primary" deployment (the current cluster),
and the other one(s) where we'll mark the repository as read-only will be called the "secondary"
deployments.

First, mark the repository as read-only on the secondary deployments:

**Use {kib}**

//tag::kibana-api-ex[]
. Log in to the {ess-console}[{ecloud} console].

. On the **Elasticsearch Service** panel, click the name of your deployment.
+
NOTE: If the name of your deployment is disabled, your {kib} instances might be
unhealthy, in which case please contact https://support.elastic.co[Elastic Support].
If your deployment doesn't include {kib}, all you need to do is
{cloud}/ec-access-kibana.html[enable it first].

. Open your deployment's side navigation menu (located under the Elastic logo in the upper left corner)
and go to **Stack Management > Snapshot and Restore > Repositories**.
+
[role="screenshot"]
image::images/repositories.png[{kib} Console,align="center"]

. The repositories table should now be visible. Click the pencil icon at the
right side of the repository that you want to mark as read-only. On the Edit page
that opens, scroll down, check "Read-only repository", and click "Save".
Alternatively, if you'd rather delete the repository altogether, select the checkbox
to the left of the repository name in the repositories table and click the red
"Remove repository" button at the top left of the table. The same change can also
be made through the {es} API; see the sketch below.

At this point, only the primary (current) deployment has the repository marked
as writable.
{es} still sees the repository as corrupt, so it needs to be removed and added back
in order for {es} to resume using it.
Note that we're now configuring the primary (current) deployment:

. Open the primary deployment's side navigation menu (located under the Elastic logo in the upper left corner)
and go to **Stack Management > Snapshot and Restore > Repositories**.
+
[role="screenshot"]
image::images/repositories.png[{kib} Console,align="center"]

. Click the pencil icon at the right side of the repository. On the Edit page that
opens, scroll down and click "Save" without making any changes to the existing
settings. Re-saving the repository registers it again so that {es} can resume
using it.
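
Optionally, you can confirm that {es} can use the repository again by verifying it
from the Dev Tools console. A minimal sketch, assuming the repository is named
`my-repo`; a successful response lists the nodes that were able to access the
repository:

[source,console]
----
POST _snapshot/my-repo/_verify
----
// TEST[skip:we're not setting up repos in these tests]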

//end::kibana-api-ex[]
// end::cloud[]

// tag::self-managed[]
Fixing the corrupted repository will entail making changes in multiple clusters
that write to the same snapshot repository.
Only one cluster must be writing to a repository. Let's call the cluster
we want to keep writing to the repository the "primary" cluster (the current cluster),
and the other one(s) where we'll mark the repository as read-only the "secondary"
clusters.

Let's first work on the secondary clusters:

. Get the configuration of the repository:
+
[source,console]
----
GET _snapshot/my-repo
----
// TEST[skip:we're not setting up repos in these tests]
+
The response will look like this:
+
[source,console-result]
----
{
  "my-repo": { <1>
    "type": "s3",
    "settings": {
      "bucket": "repo-bucket",
      "client": "elastic-internal-71bcd3",
      "base_path": "myrepo"
    }
  }
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
+
<1> Represents the current configuration for the repository.

. Using the settings retrieved above, register the repository again, adding the
`readonly: true` setting to mark it as read-only (a quick way to confirm the change
is shown after this list):
+
[source,console]
----
PUT _snapshot/my-repo
{
  "type": "s3",
  "settings": {
    "bucket": "repo-bucket",
    "client": "elastic-internal-71bcd3",
    "base_path": "myrepo",
    "readonly": true <1>
  }
}
----
// TEST[skip:we're not setting up repos in these tests]
+
<1> Marks the repository as read-only.

. Alternatively, you can delete the repository instead:
+
[source,console]
----
DELETE _snapshot/my-repo
----
// TEST[skip:we're not setting up repos in these tests]
+
The response will look like this:
+
[source,console-result]
----
{
  "acknowledged": true
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
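
To confirm that the read-only change took effect on a secondary cluster, you can
fetch the repository configuration again; the `readonly` setting should now appear
under the repository's `settings`:

[source,console]
----
GET _snapshot/my-repo
----
// TEST[skip:we're not setting up repos in these tests]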

At this point, only the primary (current) cluster has the repository marked
as writable.
{es} still sees the repository as corrupt, though, so let's recreate it so that
{es} can resume using it.
Note that we're now configuring the primary (current) cluster:

. Get the configuration of the repository and save it, as we'll use it to recreate
the repository:
+
[source,console]
----
GET _snapshot/my-repo
----
// TEST[skip:we're not setting up repos in these tests]

. Using the configuration we obtained above, recreate the repository:
+
[source,console]
----
PUT _snapshot/my-repo
{
  "type": "s3",
  "settings": {
    "bucket": "repo-bucket",
    "client": "elastic-internal-71bcd3",
    "base_path": "myrepo"
  }
}
----
// TEST[skip:we're not setting up repos in these tests]
+
The response will look like this:
+
[source,console-result]
----
{
  "acknowledged": true
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
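
Finally, you can ask {es} to verify the repository to confirm that it's usable
again. A minimal sketch, assuming the repository is named `my-repo`; a successful
response lists the nodes that were able to access the repository:

[source,console]
----
POST _snapshot/my-repo/_verify
----
// TEST[skip:we're not setting up repos in these tests]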

// end::self-managed[]