In the previous article, Elasticsearch 2.3 cluster with Docker, I wrote about how to deploy a cluster using Docker. In this article, I'll walk you through setting up a cluster with Docker's new swarm mode, which was introduced in v1.12.

This guide will walk you through using Docker Compose to deploy a stack. Docker Engine v1.13 offers some benefits over v1.12 when it comes to swarm mode and deploying stacks with the version 3 syntax of docker-compose.

But first a little background on swarm mode.

Swarm mode

Swarm mode is confusingly different from the original Docker Swarm product. Swarm mode is actually baked into Docker Engine and is a separate mode of operation for Docker running on your machine. This means that Docker has released a clustering component that is natively supported by the container engine! You can use the Docker CLI to manage your cluster, meaning you don't need Kubernetes, Mesos, Nomad, or some other orchestration tool to be installed and configured.

Personally, I like this because Docker has a vested interest in building a quality tool and adding features to it in a timely manner. You get clustering functionality for free since you'll be installing Docker Engine already. If that wasn't enough for you, configuration is incredibly simple.

That said, if you're unfamiliar with the features of swarm mode, I'll point you to the key concepts page instead of rehashing documentation that already exists.

At a bare minimum you need to understand stacks and services. Here are my quick and dirty definitions:

  • stack - a group of services deployed to a cluster
  • service - a definition for how a container should run inside the cluster

So in this example, we'll build a stack that consists of an Elasticsearch service that gets deployed to multiple hosts.

Getting started

To kick things off, let's convert our Docker Engine into swarm mode by running this command:

docker swarm init

The engine is now in swarm mode. Congratulations!
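If you want to follow along with more than one machine (which makes the clustering sections later much more interesting), you can join additional hosts to the swarm as workers. A rough sketch, where the token and IP are whatever your own manager prints:

docker swarm join-token worker
# Prints a ready-made join command. Run it on each additional machine, e.g.:
# docker swarm join --token <token> <manager-ip>:2377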

Actually, the first thing we want to do is understand what we're deploying. If we wanted to deploy a single-node server, we could run something like this docker run command:

docker run \
  -p 9200:9200 \
  -p 9300:9300 \
  elasticsearch:5

This will launch an Elasticsearch instance with a single master node. Obviously this isn't very useful but we'll build upon it.
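If you'd like a quick sanity check that the node actually came up, you can curl the HTTP port from another terminal. Something along these lines:

curl http://localhost:9200
# Expect a small JSON document containing the node name, the cluster name
# ("elasticsearch" by default) and version information.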

The next logical step is creating a docker-compose file to do this deployment for us.

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    ports:
      - '9200:9200'
      - '9300:9300'

This docker-compose file is a straight port of the docker run statement above. It's great for deploying an ES node locally for doing some testing.

To actually run this, we'll use docker-compose up. It will show a warning message that is pertinent to our discussion:

WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Despite that warning, the container is now running on your machine locally.
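If you want to confirm, a quick check from another terminal in the same directory should show the container:

docker-compose ps
# Lists the compose-managed container along with its mapped 9200/9300 ports.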

Up until now, we've been focused on a single host. The promise of swarm mode is that we can do clustering natively with Docker Engine and deploy to many different machines.

The cool thing is that with support added in v1.13, stack deployments now directly support docker-compose yaml files. Previously, we had to jump through some hoops to get bundles created from docker-compose yaml files. We now have a direct path for deploying to a single node (via docker-compose up) and deploying to multi-node clusters (via docker stack deploy).

Deploying a stack

The next step is actually deploying this compose file to the cluster. It's as simple as running docker stack deploy:

$ docker stack deploy --compose-file docker-compose.yml test
Creating network test_default
Creating service test_elasticsearch

This command will create a new stack called test that is based on the docker-compose.yml.

You can inspect the stack with docker stack ls:

$ docker stack ls
NAME    SERVICES
test  1

As mentioned, it also created the service which can be inspected with docker service ls:

$ docker service ls
ID            NAME                  MODE        REPLICAS  IMAGE
25t3k06rbvgm  test_elasticsearch  replicated  1/1       elasticsearch:5

This service consists of a single Elasticsearch container, as indicated by the 1/1 in the REPLICAS column. The next step is configuring things to get clustering working.
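If you're curious where that single replica actually landed, docker service ps will tell you:

docker service ps test_elasticsearch
# Lists the tasks for the service, including the node each task is running on
# and its current state.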

Configuring clustering

To get clustering to work you will need to change things around a bit. You may also want to try this out with multiple hosts to get the full effect. Adjust your docker-compose.yml so that we execute the command with a few extra options.

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    ports:
      - '9200:9200'
      - '9300:9300'
    command: [ elasticsearch, -E, network.host=0.0.0.0, -E, discovery.zen.ping.unicast.hosts=elasticsearch, -E, discovery.zen.minimum_master_nodes=1 ]

In particular, you'll see that network.host, discovery.zen.ping.unicast.hosts and discovery.zen.minimum_master_nodes have been added as options. The unicast hosts setting is interesting because we configure it to match the name of the service. In this case we named the docker-compose service elasticsearch, so we use that same name as the unicast host.

The reason for this is that inside the swarm cluster, each service automatically gets a DNS record that matches the name of the service. Any container on the overlay network can talk to that service via the service name.

For us, that means we can connect to the other nodes in the cluster via the service name of elasticsearch.
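If you want to see this DNS behaviour for yourself, one way (a sketch, assuming the official Debian-based image, which ships getent) is to exec into the running task and resolve the service name:

# The task's container name will look something like test_elasticsearch.1.<task-id>;
# grab it from docker ps first.
docker ps --filter name=test_elasticsearch
docker exec <container-name> getent hosts elasticsearch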

So if we redeploy the stack:

docker stack deploy --compose-file docker-compose.yml test

And then scale the service to multiple nodes:

docker service scale test_elasticsearch=2

You might expect that our cluster is now connected, but you would be wrong. There is a known issue that prevents this from working (discussed later, as it's a bit technical).
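You can see the symptom for yourself by asking for the cluster health while the ports are still published:

curl http://localhost:9200/_cluster/health?pretty
# If discovery were working you'd see "number_of_nodes" : 2; with the default
# endpoint mode each container ends up in its own one-node cluster instead.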

Enable DNS round robin

In order to get clustering to work, we actually have to change the service from using a VIP endpoint to using a DNSRR endpoint.

You can do this with a docker service update command. HOWEVER, prior to this, we need to remove the port configuration from the docker-compose file.

This is because ingress networking is not currently supported with DNSRR (there are a few issues tracking this currently).

Our docker-compose.yml file then becomes:

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    command: [ elasticsearch, -E, network.host=0.0.0.0, -E, discovery.zen.ping.unicast.hosts=elasticsearch, -E, discovery.zen.minimum_master_nodes=1 ]

Once the stack is redeployed, you can then update the elasticsearch service to use DNSRR:

docker service update --endpoint-mode=dnsrr test_elasticsearch

Unfortunately, --endpoint-mode is not yet supported by docker-compose syntax, so this is a manual step that must be run :-(
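If you want to double-check that the update took, you can inspect the service's endpoint mode (the format string below assumes the field path used by recent Docker versions):

docker service inspect --format '{{ .Spec.EndpointSpec.Mode }}' test_elasticsearch
# Should print "dnsrr" once the update has been applied.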

Providing ingress

Now astute readers may be wondering, "how do I talk to Elasticsearch if the ports have been removed?" And that would be a valid question! Once DNSRR is enabled and port mappings are removed, the service is ONLY available from inside the cluster.

Fortunately, Elasticsearch is an HTTP-based service, which means you can run a reverse proxy in front of it to allow ingress.

Once again, let's adjust the docker-compose.yml file. This time, let's add an Nginx service that reverse proxies connections on port 9200 to the elasticsearch service:

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    command: [ elasticsearch, -E, network.host=0.0.0.0, -E, discovery.zen.ping.unicast.hosts=elasticsearch, -E, discovery.zen.minimum_master_nodes=1 ]

  nginx:
    image: 'nginx:1'
    ports:
        - '9200:9200'
    command: |
      /bin/bash -c "echo '
      server {
        listen 9200;
        add_header X-Frame-Options "SAMEORIGIN";
        location / {
            proxy_pass http://elasticsearch:9200;
            proxy_http_version 1.1;
            proxy_set_header Connection keep-alive;
            proxy_set_header Upgrade $$http_upgrade;
            proxy_set_header Host $$host;
            proxy_set_header X-Real-IP $$remote_addr;
            proxy_cache_bypass $$http_upgrade;
        }
      }' | tee /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

Redeploying the stack (and don't forget to re-run the --endpoint-mode=dnsrr update) will allow you to hit the Elasticsearch cluster from outside of the containers inside the Docker swarm cluster!
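As a quick end-to-end check (assuming you've scaled the service back up to two tasks after the redeploy), hit the cluster health endpoint through Nginx:

docker service scale test_elasticsearch=2
curl http://localhost:9200/_cluster/health?pretty
# With DNSRR in place and Nginx providing ingress, this should now report
# "number_of_nodes" : 2.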

Persistence

The last piece of the puzzle is how to enable data persistence between stack restarts. Persistence is an interesting question. Personally, I think it makes sense to persist data to the host. Host-mounted volumes enable easy backups and more configuration options.

The biggest question is: if you're saving data to the host, how do you ensure that your Elasticsearch containers get deployed to the correct hosts?

The answer to this is via constraints.

Services allow us to configure constraints on where the service will place containers. Constraints can be based on node attributes such as the node's role or hostname. They can also be based on labels applied to the node.

In this case, we can tag certain machines with a specific label, and then use this label as a constraint. This will ensure that only machines that have the label explicitly applied will be eligible for an Elasticsearch container.

Once again, let's adjust the docker-compose.yml file.

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    command: [ elasticsearch, -E, network.host=0.0.0.0, -E, discovery.zen.ping.unicast.hosts=elasticsearch, -E, discovery.zen.minimum_master_nodes=1 ]    
    volumes:
      - /elasticsearch/data:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints: [node.labels.app_role == elasticsearch]

  nginx:
    image: 'nginx:1'
    ports:
        - '9200:9200'
    command: |
      /bin/bash -c "echo '
      server {
        listen 9200;
        add_header X-Frame-Options "SAMEORIGIN";
        location / {
            proxy_pass http://elasticsearch:9200;
            proxy_http_version 1.1;
            proxy_set_header Connection keep-alive;
            proxy_set_header Upgrade $$http_upgrade;
            proxy_set_header Host $$host;
            proxy_set_header X-Real-IP $$remote_addr;
            proxy_cache_bypass $$http_upgrade;
        }
      }' | tee /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

We've added two new configuration items to the elasticsearch service: volumes and deploy.

The volume we mount needs to be a directory that exists on the host machine and that the Docker container has read and write permissions for. In this instance, it is configured for the default data location for Elasticsearch.
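That means a small amount of host preparation on every machine that will carry the label. A minimal sketch (the ownership step depends on which uid the Elasticsearch process uses inside your image, so check before blindly chowning):

sudo mkdir -p /elasticsearch/data
# Make the directory writable by the user Elasticsearch runs as in the container;
# you can find the uid with something like: docker exec <container> id elasticsearch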

The deploy section simply adds a constraint requiring that the node be tagged with app_role=elasticsearch for the container to be deployed to it.

To actually apply this label you will need to run the docker node command from a manager node. Query the list of nodes:

$ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
5xglun475a0bil6o670gnod6x *  server1   Ready   Active        Leader

Once you have the ID for the node, you can run the update command to add a label:

$ docker node update --label-add app_role=elasticsearch 5xglun475a0bil6o670gnod6x
5xglun475a0bil6o670gnod6x

The node has been tagged with a label now and you can redeploy your stack. The stack will now place the Elasticsearch containers on machines that have the label.
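If you want to verify the label and the resulting placement, both are easy to inspect:

docker node inspect --format '{{ .Spec.Labels }}' 5xglun475a0bil6o670gnod6x
# Should include app_role:elasticsearch.
docker service ps test_elasticsearch
# Confirms the tasks are running on labelled nodes.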

Bonus round - autoscaling

You may have noticed that when you redeploy a stack it wipes out any scaling that has been applied. This can be a bit annoying. And chances are that any node you tag with the elasticsearch label is one you'll want included in your cluster anyway.

For our final trick, we'll change the deployment mode of the elasticsearch service from replicated to global. This means that the service will get deployed to ALL NODES, but only nodes that match the constraints!

version: '3'
services:
  elasticsearch:
    image: 'elasticsearch:5'
    command: [ elasticsearch, -E, network.host=0.0.0.0, -E, discovery.zen.ping.unicast.hosts=elasticsearch, -E, discovery.zen.minimum_master_nodes=1 ]    
    volumes:
      - /elasticsearch/data:/usr/share/elasticsearch/data
    deploy:
      mode: 'global'
      placement:
        constraints: [node.labels.app_role == elasticsearch]

  nginx:
    image: 'nginx:1'
    ports:
        - '9200:9200'
    command: |
      /bin/bash -c "echo '
      server {
        listen 9200;
        add_header X-Frame-Options "SAMEORIGIN";
        location / {
            proxy_pass http://elasticsearch:9200;
            proxy_http_version 1.1;
            proxy_set_header Connection keep-alive;
            proxy_set_header Upgrade $$http_upgrade;
            proxy_set_header Host $$host;
            proxy_set_header X-Real-IP $$remote_addr;
            proxy_cache_bypass $$http_upgrade;
        }
      }' | tee /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
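With global mode, scaling becomes a matter of labelling. As a sketch (server2 here stands in for a hypothetical second node in your swarm), adding the label is all it takes for a new Elasticsearch task to appear there:

docker node update --label-add app_role=elasticsearch <node-id-of-server2>
docker service ps test_elasticsearch
# A new task should be scheduled on the newly labelled node automatically.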

Conclusion

Hopefully this article has helped you understand how you can deploy Elasticsearch on a single server or across multiple hosts with only a few commands. Docker has continued to add great functionality to bridge this gap and make our lives as developers easier.