[[repository-gcs]]
=== Google Cloud Storage Repository Plugin

The GCS repository plugin adds support for using the https://cloud.google.com/storage/[Google Cloud Storage]
service as a repository for {ref}/modules-snapshots.html[Snapshot/Restore].

[[repository-gcs-install]]
[float]
==== Installation

This plugin can be installed using the plugin manager:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin install repository-gcs
----------------------------------------------------------------

NOTE: The plugin requires additional permissions in order to work; you will be
asked to confirm them during installation.

The plugin must be installed on every node in the cluster, and each node must
be restarted after installation.
[[repository-gcs-remove]]
[float]
==== Removal

The plugin can be removed with the following command:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin remove repository-gcs
----------------------------------------------------------------

The node must be stopped before removing the plugin.
[[repository-gcs-usage]]
==== Getting started

The plugin uses the https://cloud.google.com/storage/docs/json_api/[Google Cloud Storage JSON API] (v1)
to connect to the Storage service. If this is your first time using Google Cloud Storage, first
connect to the https://console.cloud.google.com/[Google Cloud Platform Console] and create a new
project. Once your project is created, you must enable the Cloud Storage Service for your project.
[[repository-gcs-creating-bucket]]
===== Creating a Bucket

The Google Cloud Storage service uses the concept of a https://cloud.google.com/storage/docs/key-terms[Bucket]
as a container for all the data. Buckets are usually created using the
https://console.cloud.google.com/[Google Cloud Platform Console]. The plugin will not automatically
create buckets.

To create a new bucket:

1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]
4. Click the "Create Bucket" button
5. Enter the name of the new bucket
6. Select a storage class
7. Select a location
8. Click the "Create" button

The bucket should now be created.
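
The same bucket can also be created from the command line. This is a sketch
assuming the Cloud SDK's `gsutil` tool is installed and authenticated; the
bucket name, storage class, and location below are placeholders:

[source,sh]
----
# Create a bucket with a regional storage class in us-central1
gsutil mb -c regional -l us-central1 gs://my_bucket
----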
[[repository-gcs-service-authentication]]
===== Service Authentication

The plugin supports two authentication modes:

* the built-in <<repository-gcs-using-compute-engine, Compute Engine authentication>>. This mode is
recommended if your Elasticsearch node is running on a Compute Engine virtual machine.
* the <<repository-gcs-using-service-account, Service Account>> authentication mode.

[[repository-gcs-using-compute-engine]]
===== Using Compute Engine

When running on Compute Engine, the plugin uses Google's built-in authentication mechanism to
authenticate with the Storage service. Compute Engine virtual machines are usually associated with a
default service account. This service account can be found in the VM instance details in the
https://console.cloud.google.com/compute/[Compute Engine console].

To indicate that a repository should use the built-in authentication,
the repository `service_account` setting must be set to `_default_`:
[source,js]
----
PUT _snapshot/my_gcs_repository_on_compute_engine
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "service_account": "_default_"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]

NOTE: The Compute Engine VM must be allowed to use the Storage service. This can be done only at VM
creation time, when "Storage" access can be configured with "Read/Write" permission. Check your
instance details in the "Cloud API access scopes" section.
[[repository-gcs-using-service-account]]
===== Using a Service Account

If your Elasticsearch node is not running on Compute Engine, or if you don't want to use Google's
built-in authentication mechanism, you can authenticate with the Storage service using a
https://cloud.google.com/iam/docs/overview#service_account[Service Account] file.

To create a service account file:

1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/permissions[Permissions] tab
4. Select the https://console.cloud.google.com/permissions/serviceaccounts[Service Accounts] tab
5. Click "Create service account"
6. Once created, select the new service account and download a JSON key file
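
As an alternative to the console steps above, the service account and its JSON
key can be created from the command line. This is a sketch assuming the
`gcloud` tool is installed and authenticated; the account name and project id
are placeholders:

[source,sh]
----
# Create a service account dedicated to the repository
gcloud iam service-accounts create repository-gcs \
    --display-name "Elasticsearch GCS repository"

# Download a JSON key file for the new account
gcloud iam service-accounts keys create service_account.json \
    --iam-account repository-gcs@your-project-id.iam.gserviceaccount.com
----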

A service account file looks like this:

[source,js]
----
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "..."
}
----
// NOTCONSOLE

This file must be copied into the `config` directory of the Elasticsearch installation on
every node of the cluster.
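
For example, assuming a package-based installation where the configuration
directory is `/etc/elasticsearch` (the path varies by installation type), the
key file could be copied on each node like this:

[source,sh]
----
# Copy the key file to the Elasticsearch config directory on this node
sudo cp service_account.json /etc/elasticsearch/service_account.json
----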

To indicate that a repository should use a service account file:

[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "service_account": "service_account.json"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]
[[repository-gcs-bucket-permission]]
===== Set Bucket Permission

The service account used to access the bucket must have "Writer" access to the bucket:

1. Connect to the https://console.cloud.google.com/[Google Cloud Platform Console]
2. Select your project
3. Go to the https://console.cloud.google.com/storage/browser[Storage Browser]
4. Select the bucket and "Edit bucket permission"
5. The service account must be configured as a "User" with "Writer" access
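
The same access can also be granted from the command line. This is a sketch
assuming `gsutil` is available; the service account address is the
`client_email` value from your JSON key file:

[source,sh]
----
# Grant the service account write access to the bucket
gsutil acl ch -u service-account-for-your-repository@your-project-id.iam.gserviceaccount.com:WRITE gs://my_bucket
----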

[[repository-gcs-repository]]
==== Create a Repository

Once everything is installed and every node is started, you can create a new repository that
uses Google Cloud Storage to store snapshots:

[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "service_account": "service_account.json"
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]
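
Once the repository is registered, snapshots are taken with the standard
{ref}/modules-snapshots.html[Snapshot/Restore] API, for example:

[source,js]
----
PUT _snapshot/my_gcs_repository/snapshot_1?wait_for_completion=true
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]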

The following settings are supported:

`bucket`::

    The name of the bucket to be used for snapshots. (Mandatory)

`service_account`::

    The service account to use. It can be a relative path to a service account JSON file
    or the value `_default_`, which indicates that the built-in Compute Engine service
    account should be used.

`base_path`::

    Specifies the path within the bucket to the repository data. Defaults to
    the root of the bucket.

`chunk_size`::

    Big files can be broken down into chunks during snapshotting if needed.
    The chunk size can be specified in bytes or by using size value notation,
    i.e. `1g`, `10m`, `5k`. Defaults to `100m`.

`compress`::

    When set to `true`, metadata files are stored in compressed format. This
    setting doesn't affect index files that are already compressed by default.
    Defaults to `false`.

`application_name`::

    Name used by the plugin when it uses the Google Cloud JSON API. Setting
    a custom name can be useful to identify your cluster when request
    statistics are logged in the Google Cloud Platform. Defaults to `repository-gcs`.
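
Putting the settings above together, a repository that stores compressed
snapshots under a dedicated path might be registered like this (the bucket and
path names are examples):

[source,js]
----
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "service_account": "service_account.json",
    "base_path": "snapshots/production",
    "chunk_size": "100m",
    "compress": true
  }
}
----
// CONSOLE
// TEST[skip:we don't have gcs setup while testing this]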