[[snapshots-filesystem-repository]]
=== Shared file system repository

include::{es-repo-dir}/snapshot-restore/on-prem-repo-type.asciidoc[]

Use a shared file system repository to store snapshots on a shared file
system.

To register a shared file system repository, first mount the file system to
the same location on all master and data nodes. Then add the file system's
path or parent directory to the `path.repo` setting in `elasticsearch.yml` for
each master and data node. For running clusters, this requires a
<<restart-cluster-rolling,rolling restart>> of each node.

Supported `path.repo` values vary by platform:

include::{es-repo-dir}/tab-widgets/register-fs-repo-widget.asciidoc[]
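For reference, a registration request in its simplest form looks like the
following sketch. The repository name `my_fs_backup` and the
`/mount/backups/my_fs_backup_location` path are placeholders; substitute your
own values.

[source,console]
----
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location" <1>
  }
}
----
<1> The location must resolve to a path registered in the `path.repo` setting
on all master and data nodes.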
[[filesystem-repository-settings]]
==== Repository settings

`chunk_size`::
(Optional, <<byte-units,byte value>>)
Maximum size of files in snapshots. In snapshots, files larger than this are
broken down into chunks of this size or smaller. Defaults to `null` (unlimited
file size).

`compress`::
(Optional, Boolean)
If `true`, metadata files, such as index mappings and settings, are compressed
in snapshots. Data files are not compressed. Defaults to `true`.

`location`::
(Required, string)
Location of the shared file system used to store and retrieve snapshots. This
location must be registered in the `path.repo` setting on all master and data
nodes in the cluster.

`max_number_of_snapshots`::
(Optional, integer)
Maximum number of snapshots the repository can contain.
Defaults to `Integer.MAX_VALUE`, which is `2^31-1` or `2147483647`.

include::repository-shared-settings.asciidoc[]
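As an illustration, the request below registers the same hypothetical
`my_fs_backup` repository with several of these settings specified explicitly.
The values shown are examples, not recommendations.

[source,console]
----
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location",
    "chunk_size": "1gb",
    "compress": true,
    "max_number_of_snapshots": 500
  }
}
----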
==== Troubleshooting a shared file system repository

{es} interacts with a shared file system repository using the file system
abstraction in your operating system. This means that every {es} node must be
able to perform operations within the repository path, such as creating,
opening, and renaming files, and creating and listing directories. Operations
performed by one node must be visible to other nodes as soon as they complete.
Check for common misconfigurations using the <<verify-snapshot-repo-api>> API
and the <<repo-analysis-api>> API. When the repository is properly configured,
these APIs will complete successfully. If either API reports a problem, you
can reproduce the problem outside {es} by performing similar operations on the
file system directly.
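For example, the following requests run repository verification and a small
repository analysis against the hypothetical `my_fs_backup` repository. The
analysis parameters are modest example values; larger values give a more
thorough analysis at the cost of more load on the repository.

[source,console]
----
POST _snapshot/my_fs_backup/_verify

POST _snapshot/my_fs_backup/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s
----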
If the verify repository or repository analysis APIs fail with an error
indicating insufficient permissions then adjust the configuration of the
repository within your operating system to give {es} an appropriate level of
access. To reproduce such problems directly, perform the same operations as
{es} in the same security context as the one in which {es} is running. For
example, on Linux, use a command such as `su` to switch to the user that {es}
runs as.
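A minimal sketch of this check on Linux, assuming {es} runs as a user named
`elasticsearch` and the repository lives under an illustrative
`/mount/backups` path:

[source,sh]
----
# Switch to the account that {es} runs as (the account name is an assumption).
sudo su -s /bin/bash elasticsearch

# Try the same kinds of operations {es} performs in the repository path.
# The path below is illustrative; use your registered repository location.
touch /mount/backups/my_fs_backup_location/permission-test
rm /mount/backups/my_fs_backup_location/permission-test
----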
If the verify repository or repository analysis APIs fail with an error
indicating that operations on one node are not immediately visible on another
node then adjust the configuration of the repository within your operating
system to address this problem. If your repository cannot be configured with
strong enough visibility guarantees then it is not suitable for use as an {es}
snapshot repository.
The verify repository and repository analysis APIs will also fail if the
operating system returns any other kind of I/O error when accessing the
repository. If this happens, address the cause of the I/O error reported by
the operating system.
TIP: Many NFS implementations match accounts across nodes using their _numeric_
user IDs (UIDs) and group IDs (GIDs) rather than their names. It is possible
for {es} to run under an account with the same name (often `elasticsearch`) on
each node, but for these accounts to have different numeric user or group IDs.
If your shared file system uses NFS then ensure that every node is running with
the same numeric UID and GID, or else update your NFS configuration to account
for the variance in numeric IDs across nodes.
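To check for such a mismatch, compare the numeric IDs reported on each node. A
quick sketch, again assuming the account is named `elasticsearch`:

[source,sh]
----
# Run on every master and data node; the uid and gid values must match
# across nodes. The numbers shown in the comment below are only an example.
id elasticsearch
# uid=112(elasticsearch) gid=117(elasticsearch) groups=117(elasticsearch)
----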