[[docker]]
=== Install {es} with Docker

{es} is also available as Docker images.
The images use https://hub.docker.com/_/centos/[centos:7] as the base image.

A list of all published Docker images and tags is available at
https://www.docker.elastic.co[www.docker.elastic.co]. The source files
are in
https://github.com/elastic/elasticsearch/blob/{branch}/distribution/docker[GitHub].

These images are free to use under the Elastic license. They contain open source
and free commercial features and access to paid commercial features.
{stack-ov}/license-management.html[Start a 30-day trial] to try out all of the
paid commercial features.
See the
https://www.elastic.co/subscriptions[Subscriptions] page for information about
Elastic license levels.

==== Pulling the image

Obtaining {es} for Docker is as simple as issuing a +docker pull+ command
against the Elastic Docker registry.

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of {es} has not yet been released, so no
Docker image is currently available for this version.

endif::[]

ifeval::["{release-state}"!="unreleased"]

[source,sh,subs="attributes"]
--------------------------------------------
docker pull {docker-repo}:{version}
--------------------------------------------

Alternatively, you can download other Docker images that contain only features
available under the Apache 2.0 license. To download the images, go to
https://www.docker.elastic.co[www.docker.elastic.co].

endif::[]

[[docker-cli-run-dev-mode]]
==== Starting a single node cluster with Docker

ifeval::["{release-state}"=="unreleased"]

WARNING: Version {version} of the {es} Docker image has not yet been released.

endif::[]

ifeval::["{release-state}"!="unreleased"]

To start a single-node {es} cluster for development or testing, specify
<<single-node-discovery,single-node discovery>> to bypass the <<bootstrap-checks,bootstrap checks>>:

[source,sh,subs="attributes"]
--------------------------------------------
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" {docker-image}
--------------------------------------------

endif::[]

[[docker-compose-file]]
==== Starting a multi-node cluster with Docker Compose

To get a three-node {es} cluster up and running in Docker,
you can use Docker Compose:
. Create a `docker-compose.yml` file:
ifeval::["{release-state}"=="unreleased"]
+
--
WARNING: Version {version} of {es} has not yet been released, so a
`docker-compose.yml` is not available for this version.
endif::[]
ifeval::["{release-state}"!="unreleased"]
[source,yaml,subs="attributes"]
--------------------------------------------
include::docker-compose.yml[]
--------------------------------------------
endif::[]
This sample Docker Compose file brings up a three-node {es} cluster.
Node `es01` listens on `localhost:9200` and `es02` and `es03` talk to `es01` over a Docker network.

The https://docs.docker.com/storage/volumes[Docker named volumes]
`data01`, `data02`, and `data03` store the node data directories so the data persists across restarts.
If they don't already exist, `docker-compose` creates them when you bring up the cluster.
--

. Make sure Docker Engine is allotted at least 4GiB of memory.
In Docker Desktop, you configure resource usage on the Advanced tab in Preferences (macOS)
or Settings (Windows).
+
NOTE: Docker Compose is not pre-installed with Docker on Linux.
See docs.docker.com for installation instructions:
https://docs.docker.com/compose/install[Install Compose on Linux].

. Run `docker-compose` to bring up the cluster:
+
[source,sh,subs="attributes"]
--------------------------------------------
docker-compose up
--------------------------------------------
. Submit a `_cat/nodes` request to see that the nodes are up and running:
+
[source,sh]
--------------------------------------------
curl -X GET "localhost:9200/_cat/nodes?v&pretty"
--------------------------------------------
// NOTCONSOLE

Log messages go to the console and are handled by the configured Docker logging driver.
By default you can access logs with `docker logs`.

To stop the cluster, run `docker-compose down`.
The data in the Docker volumes is preserved and loaded
when you restart the cluster with `docker-compose up`.
To **delete the data volumes** when you bring down the cluster,
specify the `-v` option: `docker-compose down -v`.

[[next-getting-started-tls-docker]]
===== Start a multi-node cluster with TLS enabled

See <<configuring-tls-docker>> and
{stack-gs}/get-started-docker.html#get-started-docker-tls[Run the {stack} in Docker with TLS enabled].

[[docker-prod-prerequisites]]
==== Using the Docker images in production

The following requirements and recommendations apply when running {es} in Docker in production.

===== Set `vm.max_map_count` to at least `262144`

The `vm.max_map_count` kernel setting must be set to at least `262144` for production use.

How you set `vm.max_map_count` depends on your platform:

* Linux
+
--
The `vm.max_map_count` setting should be set permanently in `/etc/sysctl.conf`:

[source,sh]
--------------------------------------------
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
--------------------------------------------

To apply the setting on a live system, run:

[source,sh]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------
--

* macOS with https://docs.docker.com/docker-for-mac[Docker for Mac]
+
--
The `vm.max_map_count` setting must be set within the xhyve virtual machine:

. From the command line, run:
+
[source,sh]
--------------------------------------------
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
--------------------------------------------
. Press enter and use `sysctl` to configure `vm.max_map_count`:
+
[source,sh]
--------------------------------------------
sysctl -w vm.max_map_count=262144
--------------------------------------------

. To exit the `screen` session, type `Ctrl a d`.
--

* Windows and macOS with https://www.docker.com/products/docker-desktop[Docker Desktop]
+
--
The `vm.max_map_count` setting must be set via docker-machine:

[source,sh]
--------------------------------------------
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
--------------------------------------------
--

===== Configuration files must be readable by the `elasticsearch` user

By default, {es} runs inside the container as user `elasticsearch` using
uid:gid `1000:0`.

IMPORTANT: One exception is https://docs.openshift.com/container-platform/3.6/creating_images/guidelines.html#openshift-specific-guidelines[OpenShift],
which runs containers using an arbitrarily assigned user ID.
OpenShift presents persistent volumes with the gid set to `0`, which works without any adjustments.

If you are bind-mounting a local directory or file, it must be readable by the `elasticsearch` user.
In addition, this user must have write access to the <<path-settings,data and log dirs>>.
A good strategy is to grant group access to gid `0` for the local directory.

For example, to prepare a local directory for storing data through a bind-mount:

[source,sh]
--------------------------------------------
mkdir esdatadir
chmod g+rwx esdatadir
chgrp 0 esdatadir
--------------------------------------------

As a last resort, you can force the container to mutate the ownership of
any bind-mounts used for the <<path-settings,data and log dirs>> through the
environment variable `TAKE_FILE_OWNERSHIP`.
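For example, a sketch of such a run (the local directory name `esdatadir` follows the example above, and any non-empty value for the variable is assumed to be sufficient to trigger the ownership change):

[source,sh,subs="attributes"]
--------------------------------------------
docker run -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e TAKE_FILE_OWNERSHIP=true \
  -v full_path_to/esdatadir:/usr/share/elasticsearch/data \
  {docker-image}
--------------------------------------------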
When you do this, they will be owned by
uid:gid `1000:0`, which provides the required read/write access to the {es} process.

===== Increase ulimits for nofile and nproc

Increased ulimits for <<setting-system-settings,nofile>> and <<max-number-threads-check,nproc>>
must be available for the {es} containers.
Verify the https://github.com/moby/moby/tree/ea4d1243953e6b652082305a9c3cda8656edab26/contrib/init[init system]
for the Docker daemon sets them to acceptable values.

To check the Docker daemon defaults for ulimits, run:

[source,sh]
--------------------------------------------
docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
--------------------------------------------

If needed, adjust these limits in the Docker daemon configuration or override them per container.
For example, when using `docker run`, set:

[source,sh]
--------------------------------------------
--ulimit nofile=65535:65535
--------------------------------------------

===== Disable swapping

Swapping needs to be disabled for performance and node stability.
For information about ways to do this, see <<setup-configuration-memory>>.

If you opt for the `bootstrap.memory_lock: true` approach,
you also need to define the `memlock: true` ulimit in the
https://docs.docker.com/engine/reference/commandline/dockerd/#default-ulimits[Docker Daemon],
or set it explicitly for the container as shown in the <<docker-compose-file, sample compose file>>.
When using `docker run`, you can specify:

  -e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1

===== Randomize published ports

The image https://docs.docker.com/engine/reference/builder/#/expose[exposes]
TCP ports 9200 and 9300.
For production clusters, randomizing the
published ports with `--publish-all` is recommended,
unless you are pinning one container per host.

===== Set the heap size

Use the `ES_JAVA_OPTS` environment variable to set the heap size.
For example, to use 16GB, specify `-e ES_JAVA_OPTS="-Xms16g -Xmx16g"` with `docker run`.

IMPORTANT: You must <<heap-size,configure the heap size>> even if you are
https://docs.docker.com/config/containers/resource_constraints/#limit-a-containers-access-to-memory[limiting
memory access] to the container.

===== Pin deployments to a specific image version

Pin your deployments to a specific version of the {es} Docker image. For
example +docker.elastic.co/elasticsearch/elasticsearch:{version}+.

===== Always bind data volumes

You should use a volume bound on `/usr/share/elasticsearch/data` for the following reasons:

. The data of your {es} node won't be lost if the container is killed

. {es} is I/O sensitive and the Docker storage driver is not ideal for fast I/O

. It allows the use of advanced
https://docs.docker.com/engine/extend/plugins/#volume-plugins[Docker volume plugins]

===== Avoid using `loop-lvm` mode

If you are using the devicemapper storage driver, do not use the default `loop-lvm` mode.
Configure docker-engine to use
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-docker-with-devicemapper[direct-lvm].

===== Centralize your logs

Consider centralizing your logs by using a different
https://docs.docker.com/engine/admin/logging/overview/[logging driver].
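For instance, a sketch that forwards container logs to a GELF-compatible collector (the `udp://203.0.113.10:12201` endpoint is a placeholder for your own Graylog or Logstash input):

[source,sh,subs="attributes"]
--------------------------------------------
docker run -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  --log-driver gelf \
  --log-opt gelf-address=udp://203.0.113.10:12201 \
  {docker-image}
--------------------------------------------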
Also
note that the default json-file logging driver is not ideally suited for
production use.

[[docker-configuration-methods]]
==== Configuring {es} with Docker

When you run in Docker, the <<config-files-location,{es} configuration files>> are loaded from
`/usr/share/elasticsearch/config/`.

To use custom configuration files, you <<docker-config-bind-mount, bind-mount the files>>
over the configuration files in the image.

You can set individual {es} configuration parameters using Docker environment variables.
The <<docker-compose-file, sample compose file>> and the
<<docker-cli-run-dev-mode, single-node example>> use this method.

To use the contents of a file to set an environment variable, suffix the environment
variable name with `_FILE`. This is useful for passing secrets such as passwords to {es}
without specifying them directly.

For example, to set the {es} bootstrap password from a file, you can bind mount the
file and set the `ELASTIC_PASSWORD_FILE` environment variable to the mount location.
If you mount the password file to `/run/secrets/bootstrapPassword.txt`, specify:

[source,sh]
--------------------------------------------
-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
--------------------------------------------

You can also override the default command for the image to pass {es} configuration
parameters as command line options.
For example:

[source,sh]
--------------------------------------------
docker run <various parameters> bin/elasticsearch -Ecluster.name=mynewclustername
--------------------------------------------

While bind-mounting your configuration files is usually the preferred method in production,
you can also <<_c_customized_image, create a custom Docker image>>
that contains your configuration.

[[docker-config-bind-mount]]
===== Mounting {es} configuration files

Create custom config files and bind-mount them over the corresponding files in the Docker image.
For example, to bind-mount `custom_elasticsearch.yml` with `docker run`, specify:

[source,sh]
--------------------------------------------
-v full_path_to/custom_elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
--------------------------------------------

IMPORTANT: The container **runs {es} as user `elasticsearch` using
uid:gid `1000:0`**. Bind mounted host directories and files must be accessible by this user,
and the data and log directories must be writable by this user.

[[_c_customized_image]]
===== Using custom Docker images

In some environments, it might make more sense to prepare a custom image that contains
your configuration.
A `Dockerfile` to achieve this might be as simple as:

[source,sh,subs="attributes"]
--------------------------------------------
FROM docker.elastic.co/elasticsearch/elasticsearch:{version}
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
--------------------------------------------

You could then build and run the image with:

[source,sh]
--------------------------------------------
docker build --tag=elasticsearch-custom .
docker run -ti -v /usr/share/elasticsearch/data elasticsearch-custom
--------------------------------------------

Some plugins require additional security permissions.
You must explicitly accept them either by:

* Attaching a `tty` when you run the Docker image and allowing the permissions when prompted.
* Inspecting the security permissions and accepting them (if appropriate) by adding the `--batch` flag to the plugin install command.

See {plugins}/_other_command_line_parameters.html[Plugin management]
for more information.

include::next-steps.asciidoc[]