Recently I've been spending a big chunk of my time at the office on DevOps-related tasks. Elasticsearch is one of the technologies we use a lot, in both production and development.
Ever since I started dealing with Elasticsearch instances at the office, I have wanted to move them into Docker containers; with multiple clusters running, that makes my life a lot easier.
There are many ELK stack Docker images out there, but I found elk-docker to be the closest to what I'm looking for. The image documentation is good enough to get you started, but I decided to create a docker-compose file that makes more sense for a quicker, more production-ready box.
DISCLAIMER: putting Elasticsearch into production (with or without Docker) involves many other considerations. This article simply aims to give you some ideas on how to get an easier-to-maintain, configurable, data-persistent container up in a short time. Most of the articles out there only show how fast you can use Docker to boot an ELK instance, without any plan for persisting data, changing configuration later, or scaling.
The following configuration is the bare minimum that I always use to build my Elasticsearch machines. It uses the same elk-docker image I linked above.
elk:
  container_name: elk
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    # enable for logstash
    #- "5044:5044"
    - "9300:9300"
  restart: unless-stopped
  environment:
    - ES_HEAP_SIZE=1g
    - TZ=Etc/UTC
    - ES_JAVA_OPTS=""
    - ES_CONNECT_RETRY=300
  volumes:
    - "./configurations/elasticsearch.yml:/etc/elasticsearch/elasticsearch.yml"
    - "./configurations/kibana.yml:/opt/kibana/kibana.yml"
    - "./elasticsearch:/var/lib/elasticsearch"
So basically what I've done here is put all of my usual environment variables into the compose file. Make sure you set a proper ES_HEAP_SIZE based on your box's specs.
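For instance, on a box with 4 GB of RAM you might give Elasticsearch roughly half of it; the 4 GB figure here is just an assumption for illustration:

environment:
  # example only: on a 4 GB box, roughly half of the RAM goes to the ES heap
  - ES_HEAP_SIZE=2g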
ES_CONNECT_RETRY often creates issues if it is too low: the image ships with a default of 30 seconds, and if your container holds a large amount of data that can cause startup failures.
The restart policy ensures that Docker restarts the container if it crashes.
restart: unless-stopped
In my ES instances I mostly change the configuration in elasticsearch.yml and kibana.yml, so I made sure both of these live in the configurations folder next to docker-compose.yml.
The image stores data in a persistent volume by default, but that would end up in Docker's volume directory under some generated name. I do not want that, so I pointed the Elasticsearch data directory to a folder called elasticsearch right next to the docker-compose.yml file.
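With those mounts in place, the folder holding docker-compose.yml ends up looking roughly like this:

.
├── docker-compose.yml
├── configurations
│   ├── elasticsearch.yml
│   └── kibana.yml
└── elasticsearch        # data directory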
That is almost all of the configuration I do within the compose file. Make sure you edit elasticsearch.yml and set your IP at network.publish_host. This is good enough for a single-node cluster or a master node. Optionally, you can also disable data storage on your master node, as in the sketch below.
network.publish_host: 172.20.100.129
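As a rough sketch, a master-only node's elasticsearch.yml could look like this; the IP is just the example from above, and node.master / node.data are the pre-7.x settings for keeping data off the node:

# master node -- a sketch, adjust the IP to your own host
network.publish_host: 172.20.100.129
node.master: true
node.data: false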
If you wish to run Elasticsearch as a cluster, simply use the same docker-compose file and, if you want, remove the exposed ports for Kibana (5601) and Logstash (5044).
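For instance, on such a node the ports section of the compose file could shrink to something like this, keeping only the Elasticsearch HTTP and transport ports exposed:

ports:
  - "9200:9200"   # Elasticsearch HTTP
  - "9300:9300"   # Elasticsearch transport (node-to-node traffic)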
For slave nodes you will need to uncomment the discovery.zen.ping.unicast.hosts setting and put your master node's published IP address there.
discovery.zen.ping.unicast.hosts: ["your.masternode.ip.address"]
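Putting it together, the elasticsearch.yml of a slave node could look roughly like this; both IP addresses are placeholders for your own:

# a slave node -- a sketch, replace both IPs with your own addresses
network.publish_host: 172.20.100.130
discovery.zen.ping.unicast.hosts: ["172.20.100.129"]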
I uploaded this compose file together with elasticsearch.yml and kibana.yml to a GitHub repository; feel free to get started by cloning that repo.