[[settings]]
== Configuring Elasticsearch

Elasticsearch ships with good defaults and requires very little configuration.
Most settings can be changed on a running cluster using the
<<cluster-update-settings>> API.
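
For example, a dynamic setting can be changed on a live cluster with a request
like the following; this is a minimal sketch (it assumes a node listening on
`localhost:9200`, and the setting and value shown are illustrative):

[source,sh]
--------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient": {
        "indices.recovery.max_bytes_per_sec": "50mb"
    }
}'
--------------------------------------------------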

The configuration files should contain settings which are node-specific (such
as `node.name` and paths), or settings which a node requires in order to be
able to join a cluster, such as `cluster.name` and `network.host`.
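
A minimal `elasticsearch.yml` covering these settings might look like the
following sketch (all values are illustrative):

[source,yaml]
--------------------------------------------------
cluster.name: my-cluster        # must match the cluster this node should join
node.name: node-1               # a human-readable name for this node
network.host: 192.168.1.10      # bind and publish address
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------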
[float]
=== Config file location

Elasticsearch has two configuration files:

* `elasticsearch.yml` for configuring Elasticsearch, and
* `log4j2.properties` for configuring Elasticsearch logging.

These files are located in the config directory, whose location defaults to
`$ES_HOME/config/`. The Debian and RPM packages set the config directory
location to `/etc/elasticsearch/`.

The location of the config directory can be changed with the `path.conf`
setting, as follows:

[source,sh]
-------------------------------
./bin/elasticsearch -Epath.conf=/path/to/my/config/
-------------------------------
[float]
=== Config file format

The configuration format is http://www.yaml.org/[YAML]. Here is an
example of changing the path of the data and logs directories:

[source,yaml]
--------------------------------------------------
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
--------------------------------------------------

Settings can also be flattened as follows:

[source,yaml]
--------------------------------------------------
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------
[float]
=== Environment variable substitution

Environment variables referenced with the `${...}` notation within the
configuration file will be replaced with the value of the environment
variable, for instance:

[source,yaml]
--------------------------------------------------
node.name: ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}
--------------------------------------------------
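
For the substitution above to work, the referenced variables must be set in
the environment from which Elasticsearch is started, for example (the address
is illustrative):

[source,sh]
--------------------------------------------------
export ES_NETWORK_HOST=192.168.1.10
./bin/elasticsearch
--------------------------------------------------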
[float]
=== Prompting for settings

For settings that you do not wish to store in the configuration file, you can
use the value `${prompt.text}` or `${prompt.secret}` and start Elasticsearch
in the foreground. `${prompt.secret}` has echoing disabled so that the value
entered will not be shown in your terminal; `${prompt.text}` will allow you to
see the value as you type it in. For example:

[source,yaml]
--------------------------------------------------
node:
  name: ${prompt.text}
--------------------------------------------------

When starting Elasticsearch, you will be prompted to enter the actual value
like so:

[source,sh]
--------------------------------------------------
Enter value for [node.name]:
--------------------------------------------------

NOTE: Elasticsearch will not start if `${prompt.text}` or `${prompt.secret}`
is used in the settings and the process is run as a service or in the background.
[float]
=== Setting default settings

New default settings may be specified on the command line using the
`default.` prefix. This will specify a value that will be used by
default unless another value is specified in the config file.

For instance, if Elasticsearch is started as follows:

[source,sh]
---------------------------
./bin/elasticsearch -Edefault.node.name=My_Node
---------------------------

the value for `node.name` will be `My_Node`, unless it is overwritten on the
command line with `es.node.name` or in the config file with `node.name`.
[float]
[[logging]]
== Logging configuration

Elasticsearch uses http://logging.apache.org/log4j/2.x/[Log4j 2] for
logging. Log4j 2 can be configured using the `log4j2.properties`
file. Elasticsearch exposes a single property, `${sys:es.logs}`, that can be
referenced in the configuration file to determine the location of the log
files; this will resolve to a prefix for the Elasticsearch log file at runtime.

For example, if your log directory (`path.logs`) is `/var/log/elasticsearch` and
your cluster is named `production` then `${sys:es.logs}` will resolve to
`/var/log/elasticsearch/production`.

[source,properties]
--------------------------------------------------
appender.rolling.type = RollingFile <1>
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs}.log <2>
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %.10000m%n
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log <3>
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy <4>
appender.rolling.policies.time.interval = 1 <5>
appender.rolling.policies.time.modulate = true <6>
--------------------------------------------------
<1> Configure the `RollingFile` appender
<2> Log to `/var/log/elasticsearch/production.log`
<3> Roll logs to `/var/log/elasticsearch/production-yyyy-MM-dd.log`
<4> Use a time-based roll policy
<5> Roll logs on a daily basis
<6> Align rolls on the day boundary (as opposed to rolling every twenty-four
hours)

If you append `.gz` or `.zip` to `appender.rolling.filePattern`, then the logs
will be compressed as they are rolled.
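
For example, changing the file pattern from the configuration above as follows
would produce gzip-compressed rolled logs:

[source,properties]
--------------------------------------------------
appender.rolling.filePattern = ${sys:es.logs}-%d{yyyy-MM-dd}.log.gz
--------------------------------------------------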
Multiple configuration files can be loaded (in which case they will be merged)
as long as they are named `log4j2.properties` and have the Elasticsearch config
directory as an ancestor; this is useful for plugins that expose additional
loggers.

The logger section contains the Java packages and their corresponding
log level, where it is possible to omit the `org.elasticsearch` prefix. The
appender section contains the destinations for the logs. Extensive information
on how to customize logging and all the supported appenders can be found in the
http://logging.apache.org/log4j/2.x/manual/configuration.html[Log4j
documentation].
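
As a sketch, a logger entry that enables `debug` logging for a single package
might look like this (the package and level are illustrative; per the note
above, the `org.elasticsearch` prefix could equally be omitted from the name):

[source,properties]
--------------------------------------------------
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
--------------------------------------------------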
[float]
[[deprecation-logging]]
=== Deprecation logging

In addition to regular logging, Elasticsearch allows you to enable logging
of deprecated actions. For example, this allows you to determine early whether
you will need to migrate certain functionality in the future. By default,
deprecation logging is enabled at the WARN level, the level at which all
deprecation log messages will be emitted.

[source,properties]
--------------------------------------------------
logger.deprecation.level = warn
--------------------------------------------------

This will create a daily rolling deprecation log file in your log directory.
Check this file regularly, especially when you intend to upgrade to a new
major version.

The default logging configuration has set the roll policy for the deprecation
logs to roll and compress after 1 GB, and to preserve a maximum of five log
files (four rolled logs, and the active log).

You can disable deprecation logging in the `config/log4j2.properties` file by
setting the deprecation log level to `error`, so that the WARN-level
deprecation messages are no longer emitted.