|
@@ -30,7 +30,7 @@ endif::[]
|
|
|
|
|
|
ifeval::["{release-state}"!="unreleased"]
|
|
|
|
|
|
-["source","sh",subs="attributes"]
|
|
|
+[source,sh,subs="attributes"]
|
|
|
--------------------------------------------
|
|
|
docker pull {docker-repo}:{version}
|
|
|
--------------------------------------------
|
|
@@ -41,11 +41,8 @@ https://www.docker.elastic.co[www.docker.elastic.co].
|
|
|
|
|
|
endif::[]
|
|
|
|
|
|
-[[docker-cli-run]]
|
|
|
-==== Running {es} from the command line
|
|
|
-
|
|
|
[[docker-cli-run-dev-mode]]
|
|
|
-===== Development mode
|
|
|
+==== Starting a single node cluster with Docker
|
|
|
|
|
|
ifeval::["{release-state}"=="unreleased"]
|
|
|
|
|
@@ -55,27 +52,94 @@ endif::[]
|
|
|
|
|
|
ifeval::["{release-state}"!="unreleased"]
|
|
|
|
|
|
-{es} can be quickly started for development or testing use with the following command:
|
|
|
+To start a single-node {es} cluster for development or testing, specify
|
|
|
+<<single-node-discovery,single-node discovery>> to bypass the <<bootstrap-checks,bootstrap checks>>:
|
|
|
|
|
|
-["source","sh",subs="attributes"]
|
|
|
+[source,sh,subs="attributes"]
|
|
|
--------------------------------------------
|
|
|
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" {docker-image}
|
|
|
--------------------------------------------
|
|
|
|
|
|
-Note the use of <<single-node-discovery,single-node discovery>> that allows bypassing
|
|
|
-the <<bootstrap-checks,bootstrap checks>> in a single-node development cluster.
|
|
|
+endif::[]
|
|
|
+
|
|
|
+[[docker-compose-file]]
|
|
|
+==== Starting a multi-node cluster with Docker Compose
|
|
|
+
|
|
|
+To get a three-node {es} cluster up and running in Docker,
|
|
|
+you can use Docker Compose:
|
|
|
+
|
|
|
+. Create a `docker-compose.yml` file:
|
|
|
+ifeval::["{release-state}"=="unreleased"]
|
|
|
++
|
|
|
+--
|
|
|
+WARNING: Version {version} of {es} has not yet been released, so a
|
|
|
+`docker-compose.yml` is not available for this version.
|
|
|
+
|
|
|
+endif::[]
|
|
|
|
|
|
+ifeval::["{release-state}"!="unreleased"]
|
|
|
+[source,yaml,subs="attributes"]
|
|
|
+--------------------------------------------
|
|
|
+include::docker-compose.yml[]
|
|
|
+--------------------------------------------
|
|
|
endif::[]
|
|
|
|
|
|
-[[docker-cli-run-prod-mode]]
|
|
|
-===== Production mode
|
|
|
+This sample Docker Compose file brings up a three-node {es} cluster.
|
|
|
+Node `es01` listens on `localhost:9200`, while `es02` and `es03` talk to `es01` over a Docker network.
|
|
|
+
|
|
|
+The https://docs.docker.com/storage/volumes[Docker named volumes]
|
|
|
+`data01`, `data02`, and `data03` store the node data directories so the data persists across restarts.
|
|
|
+If they don't already exist, `docker-compose` creates them when you bring up the cluster.
|
|
|
+--
|
|
|
+. Make sure Docker Engine is allotted at least 4GiB of memory.
|
|
|
+In Docker Desktop, you configure resource usage on the Advanced tab in Preferences (macOS)
|
|
|
+or Settings (Windows).
|
|
|
++
|
|
|
+NOTE: Docker Compose is not pre-installed with Docker on Linux.
|
|
|
+See docs.docker.com for installation instructions:
|
|
|
+https://docs.docker.com/compose/install[Install Compose on Linux]
|
|
|
+
|
|
|
+. Run `docker-compose` to bring up the cluster:
|
|
|
++
|
|
|
+[source,sh,subs="attributes"]
|
|
|
+--------------------------------------------
|
|
|
+docker-compose up
|
|
|
+--------------------------------------------
|
|
|
+
|
|
|
+. Submit a `_cat/nodes` request to see that the nodes are up and running:
|
|
|
++
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------------
|
|
|
+curl -X GET "localhost:9200/_cat/nodes?v&pretty"
|
|
|
+--------------------------------------------------
|
|
|
+// NOTCONSOLE
|
|
|
+
|
|
|
+Log messages go to the console and are handled by the configured Docker logging driver.
|
|
|
+By default you can access logs with `docker logs`.
|
|
|
+
|
|
|
+To stop the cluster, run `docker-compose down`.
|
|
|
+The data in the Docker volumes is preserved and loaded
|
|
|
+when you restart the cluster with `docker-compose up`.
|
|
|
+To **delete the data volumes** when you bring down the cluster,
|
|
|
+specify the `-v` option: `docker-compose down -v`.
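+
+For example:
+
+[source,sh]
+--------------------------------------------
+# Stop the cluster and keep the data volumes
+docker-compose down
+
+# Stop the cluster and delete the data volumes
+docker-compose down -v
+--------------------------------------------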
|
|
|
+
|
|
|
+
|
|
|
+[[next-getting-started-tls-docker]]
|
|
|
+===== Start a multi-node cluster with TLS enabled
|
|
|
+
|
|
|
+See <<configuring-tls-docker>> and
|
|
|
+{stack-gs}/get-started-docker.html#get-started-docker-tls[Run the {stack} in Docker with TLS enabled].
|
|
|
|
|
|
[[docker-prod-prerequisites]]
|
|
|
-[IMPORTANT]
|
|
|
-=========================
|
|
|
+==== Using the Docker images in production
|
|
|
+
|
|
|
+The following requirements and recommendations apply when running {es} in Docker in production.
|
|
|
+
|
|
|
+===== Set `vm.max_map_count` to at least `262144`
|
|
|
+
|
|
|
+The `vm.max_map_count` kernel setting must be set to at least `262144` for production use.
|
|
|
|
|
|
-The `vm.max_map_count` kernel setting needs to be set to at least `262144` for
|
|
|
-production use. Depending on your platform:
|
|
|
+How you set `vm.max_map_count` depends on your platform:
|
|
|
|
|
|
* Linux
|
|
|
+
|
|
@@ -83,330 +147,226 @@ production use. Depending on your platform:
|
|
|
The `vm.max_map_count` setting should be set permanently in `/etc/sysctl.conf`:
|
|
|
[source,sh]
|
|
|
--------------------------------------------
|
|
|
-$ grep vm.max_map_count /etc/sysctl.conf
|
|
|
+grep vm.max_map_count /etc/sysctl.conf
|
|
|
vm.max_map_count=262144
|
|
|
--------------------------------------------
|
|
|
|
|
|
-To apply the setting on a live system type: `sysctl -w vm.max_map_count=262144`
|
|
|
+To apply the setting on a live system, run:
|
|
|
+
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+sysctl -w vm.max_map_count=262144
|
|
|
+--------------------------------------------
|
|
|
--
|
|
|
|
|
|
-* macOS with https://docs.docker.com/engine/installation/mac/#/docker-for-mac[Docker for Mac]
|
|
|
+* macOS with https://docs.docker.com/docker-for-mac[Docker for Mac]
|
|
|
+
|
|
|
--
|
|
|
The `vm.max_map_count` setting must be set within the xhyve virtual machine:
|
|
|
|
|
|
-["source","sh"]
|
|
|
+. From the command line, run:
|
|
|
++
|
|
|
+[source,sh]
|
|
|
--------------------------------------------
|
|
|
-$ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
|
|
|
+screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
|
|
|
--------------------------------------------
|
|
|
|
|
|
-Just press enter and configure the `sysctl` setting as you would for Linux:
|
|
|
-
|
|
|
-["source","sh"]
|
|
|
+. Press enter and use `sysctl` to configure `vm.max_map_count`:
|
|
|
++
|
|
|
+[source,sh]
|
|
|
--------------------------------------------
|
|
|
sysctl -w vm.max_map_count=262144
|
|
|
--------------------------------------------
|
|
|
+
|
|
|
+. To exit the `screen` session, type `Ctrl a d`.
|
|
|
--
|
|
|
|
|
|
-* Windows and macOS with https://www.docker.com/products/docker-toolbox[Docker Toolbox]
|
|
|
+* Windows and macOS with https://www.docker.com/products/docker-desktop[Docker Desktop]
|
|
|
+
|
|
|
--
|
|
|
The `vm.max_map_count` setting must be set via docker-machine:
|
|
|
|
|
|
-["source","txt"]
|
|
|
+[source,sh]
|
|
|
--------------------------------------------
|
|
|
docker-machine ssh
|
|
|
sudo sysctl -w vm.max_map_count=262144
|
|
|
--------------------------------------------
|
|
|
--
|
|
|
-=========================
|
|
|
|
|
|
-The following example brings up a cluster comprising two {es} nodes.
|
|
|
-To bring up the cluster, use the
|
|
|
-<<docker-prod-cluster-composefile,`docker-compose.yml`>> and just type:
|
|
|
+===== Configuration files must be readable by the `elasticsearch` user
|
|
|
|
|
|
-ifeval::["{release-state}"=="unreleased"]
|
|
|
+By default, {es} runs inside the container as user `elasticsearch` using
|
|
|
+uid:gid `1000:1000`.
|
|
|
|
|
|
-WARNING: Version {version} of {es} has not yet been released, so a
|
|
|
-`docker-compose.yml` is not available for this version.
|
|
|
+IMPORTANT: One exception is https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[Openshift],
|
|
|
+which runs containers using an arbitrarily assigned user ID.
|
|
|
+Openshift presents persistent volumes with the gid set to `0`, which works without any adjustments.
|
|
|
|
|
|
-endif::[]
|
|
|
+If you are bind-mounting a local directory or file, it must be readable by the `elasticsearch` user.
|
|
|
+In addition, this user must have write access to the <<path-settings,data and log dirs>>.
|
|
|
+A good strategy is to grant group access to gid `1000` or `0` for the local directory.
|
|
|
|
|
|
-ifeval::["{release-state}"!="unreleased"]
|
|
|
+For example, to prepare a local directory for storing data through a bind-mount:
|
|
|
|
|
|
-["source","sh"]
|
|
|
+[source,sh]
|
|
|
--------------------------------------------
|
|
|
-docker-compose up
|
|
|
+mkdir esdatadir
|
|
|
+chmod g+rwx esdatadir
|
|
|
+chgrp 1000 esdatadir
|
|
|
--------------------------------------------
|
|
|
|
|
|
-endif::[]
|
|
|
+As a last resort, you can force the container to mutate the ownership of
|
|
|
+any bind-mounts used for the <<path-settings,data and log dirs>> through the
|
|
|
+environment variable `TAKE_FILE_OWNERSHIP`. When you do this, they will be owned by
|
|
|
+uid:gid `1000:0`, which provides the required read/write access to the {es} process.
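+
+For example, a minimal sketch with `docker run` (the bind-mount path and the variable value shown here are
+illustrative; the single-node discovery setting is taken from the development example above):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+docker run -e TAKE_FILE_OWNERSHIP=1 \
+  -v full_path_to/esdatadir:/usr/share/elasticsearch/data \
+  -e "discovery.type=single-node" {docker-image}
+--------------------------------------------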
|
|
|
|
|
|
-[NOTE]
|
|
|
-`docker-compose` is not pre-installed with Docker on Linux.
|
|
|
-Instructions for installing it can be found on the
|
|
|
-https://docs.docker.com/compose/install/#install-using-pip[Docker Compose webpage].
|
|
|
|
|
|
-The node `es01` listens on `localhost:9200` while `es02`
|
|
|
-talks to `es01` over a Docker network.
|
|
|
+===== Increase ulimits for nofile and nproc
|
|
|
|
|
|
-This example also uses
|
|
|
-https://docs.docker.com/engine/tutorials/dockervolumes[Docker named volumes],
|
|
|
-called `esdata01` and `esdata02` which will be created if not already present.
|
|
|
+Increased ulimits for <<setting-system-settings,nofile>> and <<max-number-threads-check,nproc>>
|
|
|
+must be available for the {es} containers.
|
|
|
+Verify that the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system]
|
|
|
+for the Docker daemon sets them to acceptable values.
|
|
|
|
|
|
-[[docker-prod-cluster-composefile]]
|
|
|
-`docker-compose.yml`:
|
|
|
-ifeval::["{release-state}"=="unreleased"]
|
|
|
+To check the Docker daemon defaults for ulimits, run:
|
|
|
|
|
|
-WARNING: Version {version} of {es} has not yet been released, so a
|
|
|
-`docker-compose.yml` is not available for this version.
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
|
|
|
+--------------------------------------------
|
|
|
|
|
|
-endif::[]
|
|
|
+If needed, adjust them in the Daemon or override them per container.
|
|
|
+For example, when using `docker run`, set:
|
|
|
|
|
|
-ifeval::["{release-state}"!="unreleased"]
|
|
|
-["source","yaml",subs="attributes"]
|
|
|
---------------------------------------------
|
|
|
-version: '2.2'
|
|
|
-services:
|
|
|
- es01:
|
|
|
- image: {docker-image}
|
|
|
- container_name: es01
|
|
|
- environment:
|
|
|
- - node.name=es01
|
|
|
- - discovery.seed_hosts=es02
|
|
|
- - cluster.initial_master_nodes=es01,es02
|
|
|
- - cluster.name=docker-cluster
|
|
|
- - bootstrap.memory_lock=true
|
|
|
- - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
|
|
|
- ulimits:
|
|
|
- memlock:
|
|
|
- soft: -1
|
|
|
- hard: -1
|
|
|
- nofile:
|
|
|
- soft: 65536
|
|
|
- hard: 65536
|
|
|
- volumes:
|
|
|
- - esdata01:/usr/share/elasticsearch/data
|
|
|
- ports:
|
|
|
- - 9200:9200
|
|
|
- networks:
|
|
|
- - esnet
|
|
|
- es02:
|
|
|
- image: {docker-image}
|
|
|
- container_name: es02
|
|
|
- environment:
|
|
|
- - node.name=es02
|
|
|
- - discovery.seed_hosts=es01
|
|
|
- - cluster.initial_master_nodes=es01,es02
|
|
|
- - cluster.name=docker-cluster
|
|
|
- - bootstrap.memory_lock=true
|
|
|
- - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
|
|
|
- ulimits:
|
|
|
- memlock:
|
|
|
- soft: -1
|
|
|
- hard: -1
|
|
|
- nofile:
|
|
|
- soft: 65536
|
|
|
- hard: 65536
|
|
|
- volumes:
|
|
|
- - esdata02:/usr/share/elasticsearch/data
|
|
|
- networks:
|
|
|
- - esnet
|
|
|
-
|
|
|
-volumes:
|
|
|
- esdata01:
|
|
|
- driver: local
|
|
|
- esdata02:
|
|
|
- driver: local
|
|
|
-
|
|
|
-networks:
|
|
|
- esnet:
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+--ulimit nofile=65535:65535
|
|
|
--------------------------------------------
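+
+To raise the daemon-wide default instead, one option is the `--default-ulimit` flag of `dockerd`
+(a sketch; in practice this is usually set in the daemon configuration or service unit rather than run by hand):
+
+[source,sh]
+--------------------------------------------
+dockerd --default-ulimit nofile=65535:65535
+--------------------------------------------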
|
|
|
-endif::[]
|
|
|
|
|
|
-To stop the cluster, type `docker-compose down`. Data volumes will persist,
|
|
|
-so it's possible to start the cluster again with the same data using
|
|
|
-`docker-compose up`.
|
|
|
-To destroy the cluster **and the data volumes**, just type
|
|
|
-`docker-compose down -v`.
|
|
|
+===== Disable swapping
|
|
|
|
|
|
-===== Inspect status of cluster:
|
|
|
+Swapping needs to be disabled for performance and node stability.
|
|
|
+For information about ways to do this, see <<setup-configuration-memory>>.
|
|
|
|
|
|
-["source","txt"]
|
|
|
---------------------------------------------
|
|
|
-curl http://127.0.0.1:9200/_cat/health
|
|
|
-1472225929 15:38:49 docker-cluster green 2 2 4 2 0 0 0 0 - 100.0%
|
|
|
---------------------------------------------
|
|
|
-// NOTCONSOLE
|
|
|
+If you opt for the `bootstrap.memory_lock: true` approach,
|
|
|
+you also need to define the `memlock: true` ulimit in the
|
|
|
+https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon],
|
|
|
+or set it explicitly for the container, as shown in the <<docker-compose-file, sample compose file>>.
|
|
|
+When using `docker run`, you can specify:
|
|
|
|
|
|
-Log messages go to the console and are handled by the configured Docker logging
|
|
|
-driver. By default you can access logs with `docker logs`.
|
|
|
+[source,sh]
+--------------------------------------------
+-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
+--------------------------------------------
|
|
|
|
|
|
-[[docker-configuration-methods]]
|
|
|
-==== Configuring {es} with Docker
|
|
|
+===== Randomize published ports
|
|
|
|
|
|
-{es} loads its configuration from files under `/usr/share/elasticsearch/config/`.
|
|
|
-These configuration files are documented in <<settings>> and <<jvm-options>>.
|
|
|
+The image https://docs.docker.com/engine/reference/builder/#/expose[exposes]
|
|
|
+TCP ports 9200 and 9300. For production clusters, randomizing the
|
|
|
+published ports with `--publish-all` is recommended,
|
|
|
+unless you are pinning one container per host.
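+
+For example, a sketch of publishing to random host ports with `docker run`
+(the container name and single-node setting are illustrative):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+docker run -d --name es-random --publish-all -e "discovery.type=single-node" {docker-image}
+
+# Show which host ports were assigned to 9200 and 9300
+docker port es-random
+--------------------------------------------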
|
|
|
|
|
|
-The image offers several methods for configuring {es} settings with the
|
|
|
-conventional approach being to provide customized files, that is to say
|
|
|
-`elasticsearch.yml`, but it's also possible to use environment variables to set
|
|
|
-options:
|
|
|
+===== Set the heap size
|
|
|
|
|
|
-===== A. Present the parameters via Docker environment variables
|
|
|
-For example, to define the cluster name with `docker run` you can pass
|
|
|
-`-e "cluster.name=mynewclustername"`. Double quotes are required.
|
|
|
+Use the `ES_JAVA_OPTS` environment variable to set the heap size.
|
|
|
+For example, to use 16GB, specify `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`.
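+
+Put together, a sketch of a single-node run with a 16GB heap (the values are just the example figures above):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+docker run -e ES_JAVA_OPTS="-Xms16g -Xmx16g" -e "discovery.type=single-node" {docker-image}
+--------------------------------------------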
|
|
|
|
|
|
-===== B. Bind-mounted configuration
|
|
|
-Create your custom config file and mount this over the image's corresponding file.
|
|
|
-For example, bind-mounting a `custom_elasticsearch.yml` with `docker run` can be
|
|
|
-accomplished with the parameter:
|
|
|
+IMPORTANT: You must <<heap-size,configure the heap size>> even if you are
|
|
|
+https://docs.docker.com/config/containers/resource_constraints/#limit-a-containers-access-to-memory[limiting
|
|
|
+memory access] to the container.
|
|
|
|
|
|
-["source","sh"]
|
|
|
---------------------------------------------
|
|
|
--v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
|
|
|
---------------------------------------------
|
|
|
-IMPORTANT: The container **runs {es} as user `elasticsearch` using
|
|
|
-uid:gid `1000:1000`**. Bind mounted host directories and files, such as
|
|
|
-`custom_elasticsearch.yml` above, **need to be accessible by this user**. For the <<path-settings, data and log dirs>>,
|
|
|
-such as `/usr/share/elasticsearch/data`, write access is required as well.
|
|
|
-Also see note 1 below.
|
|
|
+===== Pin deployments to a specific image version
|
|
|
|
|
|
-===== C. Customized image
|
|
|
-In some environments, it may make more sense to prepare a custom image containing
|
|
|
-your configuration. A `Dockerfile` to achieve this may be as simple as:
|
|
|
+Pin your deployments to a specific version of the {es} Docker image. For
|
|
|
+example +docker.elastic.co/elasticsearch/elasticsearch:{version}+.
|
|
|
|
|
|
-["source","sh",subs="attributes"]
|
|
|
---------------------------------------------
|
|
|
-FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
|
|
|
-COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
|
|
|
---------------------------------------------
|
|
|
+===== Always bind data volumes
|
|
|
|
|
|
-You could then build and try the image with something like:
|
|
|
+You should use a volume bound on `/usr/share/elasticsearch/data` for the following reasons:
|
|
|
|
|
|
-["source","sh"]
|
|
|
---------------------------------------------
|
|
|
-docker build --tag=elasticsearch-custom .
|
|
|
-docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
|
|
|
---------------------------------------------
|
|
|
+. The data of your {es} node won't be lost if the container is killed
|
|
|
|
|
|
-Some plugins require additional security permissions. You have to explicitly accept
|
|
|
-them either by attaching a `tty` when you run the Docker image and accepting yes at
|
|
|
-the prompts, or inspecting the security permissions separately and if you are
|
|
|
-comfortable with them adding the `--batch` flag to the plugin install command.
|
|
|
-See {plugins}/_other_command_line_parameters.html[Plugin Management documentation]
|
|
|
-for more details.
|
|
|
+. {es} is I/O sensitive and the Docker storage driver is not ideal for fast I/O
|
|
|
|
|
|
-[[override-image-default]]
|
|
|
-===== D. Override the image's default https://docs.docker.com/engine/reference/run/#cmd-default-command-or-options[CMD]
|
|
|
+. It allows the use of advanced
|
|
|
+https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]
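+
+For example, a sketch of binding a named volume on the data path with `docker run`
+(the volume name `esdata` and the single-node setting are illustrative):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+docker run -v esdata:/usr/share/elasticsearch/data -e "discovery.type=single-node" {docker-image}
+--------------------------------------------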
|
|
|
|
|
|
-Options can be passed as command-line options to the {es} process by
|
|
|
-overriding the default command for the image. For example:
|
|
|
+===== Avoid using `loop-lvm` mode
|
|
|
|
|
|
-["source","sh"]
|
|
|
---------------------------------------------
|
|
|
-docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
|
|
|
---------------------------------------------
|
|
|
+If you are using the devicemapper storage driver, do not use the default `loop-lvm` mode.
|
|
|
+Configure docker-engine to use
|
|
|
+https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm].
|
|
|
|
|
|
-[[next-getting-started-tls-docker]]
|
|
|
-==== Configuring SSL/TLS with the {es} Docker image
|
|
|
+===== Centralize your logs
|
|
|
|
|
|
-See <<configuring-tls-docker>>.
|
|
|
+Consider centralizing your logs by using a different
|
|
|
+https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also
|
|
|
+note that the default json-file logging driver is not ideally suited for
|
|
|
+production use.
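+
+For example, a sketch of switching the driver for a single container with `docker run`
+(`journald` is just one option; any supported logging driver works):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+docker run --log-driver journald -e "discovery.type=single-node" {docker-image}
+--------------------------------------------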
|
|
|
|
|
|
-==== Notes for production use and defaults
|
|
|
+[[docker-configuration-methods]]
|
|
|
+==== Configuring {es} with Docker
|
|
|
|
|
|
-We have collected a number of best practices for production use.
|
|
|
-Any Docker parameters mentioned below assume the use of `docker run`.
|
|
|
+When you run in Docker, the <<config-files-location,{es} configuration files>> are loaded from
|
|
|
+`/usr/share/elasticsearch/config/`.
|
|
|
|
|
|
-. By default, {es} runs inside the container as user `elasticsearch` using
|
|
|
-uid:gid `1000:1000`.
|
|
|
-+
|
|
|
---
|
|
|
-CAUTION: One exception is https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[Openshift],
|
|
|
-which runs containers using an arbitrarily assigned user ID. Openshift will
|
|
|
-present persistent volumes with the gid set to `0` which will work without any
|
|
|
-adjustments.
|
|
|
-
|
|
|
-If you are bind-mounting a local directory or file, ensure it is readable by
|
|
|
-this user, while the <<path-settings,data and log dirs>> additionally require
|
|
|
-write access. A good strategy is to grant group access to gid `1000` or `0` for
|
|
|
-the local directory. As an example, to prepare a local directory for storing
|
|
|
-data through a bind-mount:
|
|
|
-
|
|
|
- mkdir esdatadir
|
|
|
- chmod g+rwx esdatadir
|
|
|
- chgrp 1000 esdatadir
|
|
|
-
|
|
|
-As a last resort, you can also force the container to mutate the ownership of
|
|
|
-any bind-mounts used for the <<path-settings,data and log dirs>> through the
|
|
|
-environment variable `TAKE_FILE_OWNERSHIP`. In this case, they will be owned by
|
|
|
-uid:gid `1000:0` providing read/write access to the {es} process as required.
|
|
|
---
|
|
|
+To use custom configuration files, you <<docker-config-bind-mount, bind-mount the files>>
|
|
|
+over the configuration files in the image.
|
|
|
|
|
|
-. It is important to ensure increased ulimits for
|
|
|
-<<setting-system-settings,nofile>> and <<max-number-threads-check,nproc>> are
|
|
|
-available for the {es} containers. Verify the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system]
|
|
|
-for the Docker daemon is already setting those to acceptable values and, if
|
|
|
-needed, adjust them in the Daemon, or override them per container, for example
|
|
|
-using `docker run`:
|
|
|
-+
|
|
|
---
|
|
|
- --ulimit nofile=65535:65535
|
|
|
+You can set individual {es} configuration parameters using Docker environment variables.
|
|
|
+The <<docker-compose-file, sample compose file>> and the
|
|
|
+<<docker-cli-run-dev-mode, single-node example>> use this method.
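+
+For example, to set the cluster name with `docker run`, pass (the double quotes are required):
+
+[source,sh]
+--------------------------------------------
+-e "cluster.name=mynewclustername"
+--------------------------------------------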
|
|
|
|
|
|
-NOTE: One way of checking the Docker daemon defaults for the aforementioned
|
|
|
-ulimits is by running:
|
|
|
+You can also override the default command for the image to pass {es} configuration
|
|
|
+parameters as command line options. For example:
|
|
|
|
|
|
- docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
|
|
|
---
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
|
|
|
+--------------------------------------------
|
|
|
|
|
|
-. Swapping needs to be disabled for performance and node stability. This can be
|
|
|
-achieved through any of the methods mentioned in the
|
|
|
-<<setup-configuration-memory,{es} docs>>. If you opt for the
|
|
|
-`bootstrap.memory_lock: true` approach, apart from defining it through any of
|
|
|
-the <<docker-configuration-methods,configuration methods>>, you will
|
|
|
-additionally need the `memlock: true` ulimit, either defined in the
|
|
|
-https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon]
|
|
|
-or specifically set for the container. This is demonstrated above in the
|
|
|
-<<docker-prod-cluster-composefile,docker-compose.yml>>. If using `docker run`:
|
|
|
-+
|
|
|
---
|
|
|
- -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1
|
|
|
---
|
|
|
+While bind-mounting your configuration files is usually the preferred method in production,
|
|
|
+you can also <<docker-config-custom-image, create a custom Docker image>>
|
|
|
+that contains your configuration.
|
|
|
|
|
|
-. The image https://docs.docker.com/engine/reference/builder/#/expose[exposes]
|
|
|
-TCP ports 9200 and 9300. For clusters it is recommended to randomize the
|
|
|
-published ports with `--publish-all`, unless you are pinning one container per host.
|
|
|
+[[docker-config-bind-mount]]
|
|
|
+===== Mounting {es} configuration files
|
|
|
|
|
|
-. Use the `ES_JAVA_OPTS` environment variable to set heap size. For example, to
|
|
|
-use 16GB, use `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`.
|
|
|
-+
|
|
|
---
|
|
|
-NOTE: You still need to <<heap-size,configure the heap size>> even if you are
|
|
|
-https://docs.docker.com/config/containers/resource_constraints/#limit-a-containers-access-to-memory[limiting
|
|
|
-memory access] to the container.
|
|
|
---
|
|
|
+Create custom config files and bind-mount them over the corresponding files in the Docker image.
|
|
|
+For example, to bind-mount `custom_elasticsearch.yml` with `docker run`, specify:
|
|
|
|
|
|
-. Pin your deployments to a specific version of the {es} Docker image, for
|
|
|
-example +docker.elastic.co/elasticsearch/elasticsearch:{version}+.
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
|
|
|
+--------------------------------------------
|
|
|
|
|
|
-. Always use a volume bound on `/usr/share/elasticsearch/data`, as shown in the
|
|
|
-<<docker-cli-run-prod-mode,production example>>, for the following reasons:
|
|
|
+IMPORTANT: The container **runs {es} as user `elasticsearch` using
|
|
|
+uid:gid `1000:1000`**. Bind-mounted host directories and files must be accessible by this user,
|
|
|
+and the data and log directories must be writable by this user.
|
|
|
|
|
|
-.. The data of your {es} node won't be lost if the container is killed
|
|
|
+[[docker-config-custom-image]]
|
|
|
+===== Using custom Docker images
|
|
|
+In some environments, it might make more sense to prepare a custom image that contains
|
|
|
+your configuration. A `Dockerfile` to achieve this might be as simple as:
|
|
|
|
|
|
-.. {es} is I/O sensitive and the Docker storage driver is not ideal for fast I/O
|
|
|
+[source,sh,subs="attributes"]
|
|
|
+--------------------------------------------
|
|
|
+FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
|
|
|
+COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
|
|
|
+--------------------------------------------
|
|
|
|
|
|
-.. It allows the use of advanced
|
|
|
-https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]
|
|
|
+You could then build and run the image with:
|
|
|
|
|
|
-. If you are using the devicemapper storage driver, make sure you are not using
|
|
|
-the default `loop-lvm` mode. Configure docker-engine to use
|
|
|
-https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm]
|
|
|
-instead.
|
|
|
+[source,sh]
|
|
|
+--------------------------------------------
|
|
|
+docker build --tag=elasticsearch-custom .
|
|
|
+docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
|
|
|
+--------------------------------------------
|
|
|
|
|
|
-. Consider centralizing your logs by using a different
|
|
|
-https://docs.docker.com/engine/admin/logging/overview/[logging driver]. Also
|
|
|
-note that the default json-file logging driver is not ideally suited for
|
|
|
-production use.
|
|
|
+Some plugins require additional security permissions.
|
|
|
+You must explicitly accept them either by:
|
|
|
+
|
|
|
+* Attaching a `tty` when you run the Docker image and allowing the permissions when prompted.
|
|
|
+* Inspecting the security permissions and accepting them (if appropriate) by adding the `--batch` flag to the plugin install command.
|
|
|
|
|
|
+See {plugins}/_other_command_line_parameters.html[Plugin management]
|
|
|
+for more information.
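+
+For example, a sketch of a custom image that installs a plugin non-interactively with `--batch`
+(`repository-s3` is just an example plugin):
+
+[source,sh,subs="attributes"]
+--------------------------------------------
+FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
+RUN bin/elasticsearch-plugin install --batch repository-s3
+--------------------------------------------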
|
|
|
|
|
|
include::next-steps.asciidoc[]
|