
[[repository-hdfs]]
=== Hadoop HDFS repository plugin

The HDFS repository plugin adds support for using HDFS File System as a repository for
{ref}/snapshot-restore.html[Snapshot/Restore].

:plugin_name: repository-hdfs
include::install_remove.asciidoc[]

[[repository-hdfs-usage]]
==== Getting started with HDFS

The HDFS snapshot/restore plugin is built against the latest Apache Hadoop 2.x (currently 2.7.1). If the distro you are using is not
protocol-compatible with Apache Hadoop, consider replacing the Hadoop libraries inside the plugin folder with your own (you might have to adjust the security permissions required).

Even if Hadoop is already installed on the Elasticsearch nodes, for security reasons, the required libraries need to be placed under the plugin folder. Note that in most cases, if the distro is compatible, one simply needs to configure the repository with the appropriate Hadoop configuration files (see below).

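If you do replace the bundled libraries, it is a plain file swap under the plugin folder. A minimal sketch, assuming a default install layout; the jar names and distro paths are illustrative only:

[source,bash]
----
# Sketch only: jar names and paths depend on your Elasticsearch install
# location and on your Hadoop distro; adjust both before running.
cd /usr/share/elasticsearch/plugins/repository-hdfs
rm hadoop-common-2.7.1.jar hadoop-hdfs-2.7.1.jar   # bundled Apache Hadoop client jars
cp /opt/my-hadoop-distro/share/hadoop/common/*.jar .
cp /opt/my-hadoop-distro/share/hadoop/hdfs/*.jar .
----
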
Windows Users::
Using Apache Hadoop on Windows is problematic and thus it is not recommended. For those _really_ wanting to use it, make sure you place the elusive `winutils.exe` under the
plugin folder and point the `HADOOP_HOME` variable to it; this should minimize the amount of permissions Hadoop requires (though one would still have to add some more).

[[repository-hdfs-config]]
==== Configuration properties

Once installed, define the configuration for the `hdfs` repository through the
{ref}/snapshot-restore.html[REST API]:

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}
----
// TEST[skip:we don't have hdfs set up while testing this]

The following settings are supported:

[horizontal]
`uri`::
The URI address for HDFS. ex: "hdfs://<host>:<port>/". (Required)

`path`::
The file path within the filesystem where data is stored/loaded. ex: "path/to/file". (Required)

`load_defaults`::
Whether to load the default Hadoop configuration or not. (Enabled by default)

`conf.<key>`::
Inlined configuration parameter to be added to the Hadoop configuration. (Optional)
Only client-oriented properties from the Hadoop https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml[core] and https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml[hdfs] configuration files will be recognized by the plugin.

`compress`::
Whether to compress the metadata or not. (Enabled by default)

include::repository-shared-settings.asciidoc[]

`chunk_size`::
Override the chunk size. (Disabled by default)

`security.principal`::
Kerberos principal to use when connecting to a secured HDFS cluster.
If you are using a service principal for your Elasticsearch node, you may
use the `_HOST` pattern in the principal name and the plugin will replace
the pattern with the hostname of the node at runtime (see
<<repository-hdfs-security-runtime,Creating the Secure Repository>>).

`replication_factor`::
The replication factor for all new HDFS files created by this repository.
Must be greater than or equal to `dfs.replication.min` and less than or equal to the `dfs.replication.max` HDFS option.
Defaults to the HDFS cluster setting.

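As an illustration, a repository definition that exercises several of the optional settings above might look like the following; the values shown are placeholders, not recommendations:

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "load_defaults": "true",
    "compress": "true",
    "chunk_size": "10mb",
    "replication_factor": 3
  }
}
----
// TEST[skip:we don't have hdfs set up while testing this]
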
[[repository-hdfs-availability]]
[discrete]
===== A note on HDFS availability

When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will
attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then
all nodes in the cluster must be able to reach HDFS when starting. If not, then the node will fail to initialize the
repository at startup and the repository will be unusable. If this happens, you will need to remove and re-add the
repository or restart the offending node.

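Removing and re-adding the repository comes down to two ordinary repository API calls. A minimal sketch, reusing the example repository defined above:

[source,console]
----
DELETE _snapshot/my_hdfs_repository

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}
----
// TEST[skip:we don't have hdfs set up while testing this]
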
[[repository-hdfs-security]]
==== Hadoop security

The HDFS repository plugin integrates seamlessly with Hadoop's authentication model. The following authentication
methods are supported by the plugin:

[horizontal]
`simple`::
Also means "no security" and is enabled by default. Uses information from the underlying operating system account
running Elasticsearch to inform Hadoop of the name of the current user. Hadoop makes no attempts to verify this
information.

`kerberos`::
Authenticates to Hadoop through the usage of a Kerberos principal and keytab. Interfacing with HDFS clusters
secured with Kerberos requires a few additional steps to enable (see <<repository-hdfs-security-keytabs>> and
<<repository-hdfs-security-runtime>> for more info).

[[repository-hdfs-security-keytabs]]
[discrete]
===== Principals and keytabs

Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the
Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos
replay protection, you should create a service principal per node, following the pattern of
`elasticsearch/hostname@REALM`.

WARNING: In some cases, if the same principal is authenticating from multiple clients at once, services may reject
authentication for those principals under the assumption that they could be replay attacks. If you are running the
plugin in production with multiple nodes you should use a unique service principal for each node.

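Provisioning itself happens through your Kerberos admin tooling, not through Elasticsearch. A minimal sketch using MIT Kerberos `kadmin`; the admin principal, node hostname, and realm are placeholders for your environment:

[source,bash]
----
# Sketch only: substitute your own admin principal, node hostname, and realm.
# Create a per-node service principal with a random key...
kadmin -p admin/admin@REALM -q "addprinc -randkey elasticsearch/es-node-1.example.com@REALM"
# ...and export its key to a keytab file destined for that node.
kadmin -p admin/admin@REALM -q "ktadd -k krb5.keytab elasticsearch/es-node-1.example.com@REALM"
----
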
On each Elasticsearch node, place the appropriate keytab file in the node's configuration location under the
`repository-hdfs` directory using the name `krb5.keytab`:

[source,bash]
----
$> cd elasticsearch/config
$> ls
elasticsearch.yml  jvm.options  log4j2.properties  repository-hdfs/  scripts/
$> cd repository-hdfs
$> ls
krb5.keytab
----
// TEST[skip:this is for demonstration purposes only]

NOTE: Make sure you have the correct keytabs! If you are using a service principal per node (like
`elasticsearch/hostname@REALM`) then each node will need its own unique keytab file for the principal assigned to that
host!

// Setup at runtime (principal name)
[[repository-hdfs-security-runtime]]
[discrete]
===== Creating the secure repository

Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just
add the name of the principal that you will be authenticating as in the repository settings under the
`security.principal` option:

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "/user/elasticsearch/repositories/my_hdfs_repository",
    "security.principal": "elasticsearch@REALM"
  }
}
----
// TEST[skip:we don't have hdfs set up while testing this]

If you are using different service principals for each node, you can use the `_HOST` pattern in your principal
name. Elasticsearch will automatically replace the pattern with the hostname of the node at runtime:

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "/user/elasticsearch/repositories/my_hdfs_repository",
    "security.principal": "elasticsearch/_HOST@REALM"
  }
}
----
// TEST[skip:we don't have hdfs set up while testing this]

[[repository-hdfs-security-authorization]]
[discrete]
===== Authorization

Once Elasticsearch is connected and authenticated to HDFS, HDFS will infer a username to use for
authorizing file access for the client. By default, it picks this username from the primary part of
the Kerberos principal used to authenticate to the service. For example, in the case of a principal
like `elasticsearch@REALM` or `elasticsearch/hostname@REALM`, the username that HDFS
extracts for file access checks will be `elasticsearch`.

NOTE: The repository plugin makes no assumptions about what Elasticsearch's principal name is. The primary part of the
Kerberos principal is not required to be `elasticsearch`. If you have a principal or service name that works better
for you or your organization then feel free to use it instead!

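Whichever username HDFS extracts, it must be authorized to read and write the repository path. A minimal sketch, assuming the extracted username is `elasticsearch` and the illustrative repository path used in the examples above:

[source,bash]
----
# Sketch only: the path and username mirror the examples above; adjust
# both to the principal and repository path you actually use.
hdfs dfs -mkdir -p /user/elasticsearch/repositories/my_hdfs_repository
hdfs dfs -chown -R elasticsearch /user/elasticsearch/repositories/my_hdfs_repository
----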