
[[settings]]
== Configuring Elasticsearch

Elasticsearch ships with good defaults and requires very little configuration.
Most settings can be changed on a running cluster using the
<<cluster-update-settings>> API.

The configuration files should contain settings which are node-specific (such
as `node.name` and paths), or settings which a node requires in order to be
able to join a cluster, such as `cluster.name` and `network.host`.
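A minimal `elasticsearch.yml` covering such node-specific and cluster-join
settings might look like this (all values are illustrative):

[source,yaml]
--------------------------------------------------
cluster.name: my-cluster
node.name: node-1
network.host: 192.168.1.10
--------------------------------------------------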
[float]
=== Config file location

Elasticsearch has two configuration files:

* `elasticsearch.yml` for configuring Elasticsearch, and
* `log4j2.properties` for configuring Elasticsearch logging.

These files are located in the config directory, whose location defaults to
`$ES_HOME/config/`. The Debian and RPM packages set the config directory
location to `/etc/elasticsearch/`.

The location of the config directory can be changed with the `path.conf`
setting, as follows:

[source,sh]
-------------------------------
./bin/elasticsearch -Epath.conf=/path/to/my/config/
-------------------------------
[float]
=== Config file format

The configuration format is http://www.yaml.org/[YAML]. Here is an
example of changing the path of the data and logs directories:

[source,yaml]
--------------------------------------------------
path:
    data: /var/lib/elasticsearch
    logs: /var/log/elasticsearch
--------------------------------------------------

Settings can also be flattened as follows:

[source,yaml]
--------------------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------
[float]
=== Environment variable substitution

Environment variables referenced with the `${...}` notation within the
configuration file will be replaced with the value of the environment
variable, for instance:

[source,yaml]
--------------------------------------------------
node.name: ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}
--------------------------------------------------
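For the substitution to work, the variable must be set in the environment of
the Elasticsearch process before startup, for instance (the address is
illustrative):

[source,sh]
--------------------------------------------------
export ES_NETWORK_HOST=192.168.1.10
./bin/elasticsearch
--------------------------------------------------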
[float]
=== Prompting for settings

For settings that you do not wish to store in the configuration file, you can
use the value `${prompt.text}` or `${prompt.secret}` and start Elasticsearch
in the foreground. `${prompt.secret}` has echoing disabled so that the value
entered will not be shown in your terminal; `${prompt.text}` will allow you to
see the value as you type it in. For example:

[source,yaml]
--------------------------------------------------
node:
    name: ${prompt.text}
--------------------------------------------------

When starting Elasticsearch, you will be prompted to enter the actual value
like so:

[source,sh]
--------------------------------------------------
Enter value for [node.name]:
--------------------------------------------------

NOTE: Elasticsearch will not start if `${prompt.text}` or `${prompt.secret}`
is used in the settings and the process is run as a service or in the background.
[float]
=== Setting default settings

New default settings may be specified on the command line using the
`default.` prefix. This will specify a value that will be used by
default unless another value is specified in the config file.

For instance, if Elasticsearch is started as follows:

[source,sh]
---------------------------
./bin/elasticsearch -Edefault.node.name=My_Node
---------------------------

the value for `node.name` will be `My_Node`, unless it is overwritten on the
command line with `-Enode.name` or in the config file with `node.name`.
[float]
[[logging]]
== Logging configuration

Elasticsearch uses http://logging.apache.org/log4j/2.x/[Log4j 2] for
logging. Log4j 2 can be configured using the `log4j2.properties`
file. Elasticsearch exposes a single property `${sys:es.logs}` that can be
referenced in the configuration file to determine the location of the log files;
this will resolve to a prefix for the Elasticsearch log file at runtime.

For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and
your cluster is named `production` then `${sys:es.logs}` will resolve to
`/var/log/elasticsearch/production`.

[source,properties]
--------------------------------------------------
appender.rolling.type = RollingFile <1>
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs}.log <2>
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %.10000m%n
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log <3>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1 <5>
appender.rolling.policies.time.modulate = true <6>
--------------------------------------------------
<1> Configure the `RollingFile` appender
<2> Log to `/var/log/elasticsearch/production.log`
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd.log`
<4> Using a time-based roll policy
<5> Roll logs on a daily basis
<6> Align rolls on the day boundary (as opposed to rolling every twenty-four
hours)

If you append `.gz` or `.zip` to `appender.rolling.filePattern`, then the logs
will be compressed as they are rolled.
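For example, the daily pattern above could be changed as follows to gzip each
rolled log (a sketch based on the configuration shown earlier):

[source,properties]
--------------------------------------------------
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log.gz
--------------------------------------------------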
Multiple configuration files can be loaded (in which case they will get merged)
as long as they are named `log4j2.properties` and have the Elasticsearch config
directory as an ancestor; this is useful for plugins that expose additional
loggers. The logger section contains the java packages and their corresponding
log level, where it is possible to omit the `org.elasticsearch` prefix. The
appender section contains the destinations for the logs. Extensive information
on how to customize logging and all the supported appenders can be found in the
http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j
documentation].
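As a sketch, a logger entry that raises the log level for a single package
(the package and level here are illustrative) might look like:

[source,properties]
--------------------------------------------------
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
--------------------------------------------------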
[float]
[[deprecation-logging]]
=== Deprecation logging

In addition to regular logging, Elasticsearch allows you to enable logging
of deprecated actions. For example, this allows you to determine early whether
you will need to migrate certain functionality in the future. By default,
deprecation logging is enabled at the WARN level, the level at which all
deprecation log messages will be emitted.

[source,properties]
--------------------------------------------------
logger.deprecation.level = warn
--------------------------------------------------

This will create a daily rolling deprecation log file in your log directory.
Check this file regularly, especially when you intend to upgrade to a new
major version.

The default logging configuration has set the roll policy for the deprecation
logs to roll and compress after 1 GB, and to preserve a maximum of five log
files (four rolled logs, and the active log).

You can disable it in the `config/log4j2.properties` file by setting the
deprecation log level to `info`.
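Following the snippet above, the corresponding line in
`config/log4j2.properties` would read:

[source,properties]
--------------------------------------------------
logger.deprecation.level = info
--------------------------------------------------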