A while ago I published a blog post about Dockerizing my PHP development environment; soon after that I learned about the poor I/O performance of Docker containers on macOS.
While that is manageable for small projects where there isn't much I/O happening between multiple containers, I was stunned by how bad it could get after I started working at my new company.
At the new company we use at least 5 containers to bring up an instance of the app I work on: a big, old Symfony codebase with hundreds of tests, migrations and packages.
Starting up the containers with docker-compose would give me the results below:
- running the tests would take more than 1 hour.
- running composer install could take up to 20-25 minutes (including loading data fixtures and some other post-scripts).
- page reloads took 20-30 seconds most of the time.
The existing developers were dealing with this issue in different ways: Mac users on my team were using docker-sync to cure the I/O problem; some others were running Docker inside a Vagrant VM, doing their development on that virtual machine and transferring files over SFTP (this is real!); and some were just living with the issue. That last group would create a PR and wait for the Jenkins build result to run their tests.
Things I've tried:
Starting off with docker-sync was not very smooth. I created the configuration files it needed and installed docker-sync and all its dependencies, but apparently due to some broken symbolic links the file-watching mechanism was not working, so the sync was crashing without my knowledge. Figuring out that my container and my local sources were out of sync was quick, but it was not easy to debug; I considered every possible cause, like caching and so on, but not the sync itself.
Anyway, I managed to solve the problem. rsync-ing file changes to the container helped with speed, but it wasn't ideal: a few times the sync failed after I woke my computer from sleep, and I had to restart docker-sync. That is super annoying to debug when you are not expecting that kind of failure.
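For reference, a minimal docker-sync.yml using the rsync strategy looks roughly like this; the volume name and the exclude list are made up for illustration, so adjust them to your project:

```yaml
version: "2"
syncs:
  # "app-sync" is a hypothetical volume name; reference it as an
  # external volume in your docker-compose.yml
  app-sync:
    src: './'                 # local sources to sync into the container
    sync_strategy: 'rsync'    # the strategy I ended up using
    sync_excludes: ['.git', 'var/cache', 'vendor']
```

With this in place, `docker-sync start` watches your sources and pushes changes into the named volume instead of relying on the slow osxfs mount.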
So I looked around for other options and found suggestions that docker-machine has better I/O performance than Docker for Mac. This approach looked more like the Vagrant solution, but much more native.
I went ahead and installed VirtualBox, created a machine, gave it enough resources and changed my environment variables to point my Docker client at the machine.
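Creating the machine boils down to something like the following; the machine name "dev" and the resource sizes are just examples, size them for your hardware:

```shell
# create a VirtualBox-backed machine with generous resources
docker-machine create --driver virtualbox \
  --virtualbox-cpu-count 4 \
  --virtualbox-memory 8192 \
  --virtualbox-disk-size 40000 \
  dev

# point the docker CLI in this shell at the new machine
eval $(docker-machine env dev)
```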
Right after I started the containers I began to have permission issues: none of the services were able to write anything to disk. Changing permissions and ownership within the containers did not help either.
So if you have permission issues with docker-machine, you can try grabbing docker-machine-nfs from Homebrew with:
brew install docker-machine-nfs
then re-mount the /Users folder while your machine is running via:
docker-machine-nfs yourmachinename --nfs-config="-alldirs -maproot=0"
That solved the problem, and I have been enjoying much better I/O performance in my containers ever since.
To make sure your docker commands are executed on the docker machine, you will need to run eval $(docker-machine env) every time you open a new terminal tab. I suggest either creating an alias for this, or, if you are using something like iTerm, creating a profile for your docker-machine work and running the command on shell spawn by placing it in the "Send text at start" field.
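The alias route can be a one-liner in your shell config; "denv" and the machine name "dev" are my own picks here, use whatever you named your machine:

```shell
# add to ~/.bashrc or ~/.zshrc
alias denv='eval $(docker-machine env dev)'
```

Then a quick `denv` in any fresh tab points that shell's docker client at the machine.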