Docker Volume Mount Permission Denied 2018

The result should be that when Docker creates a directory in there for the volume mount (as root), it will be owned by and writable by group 100. If your Docker container runs with that GID, then it will be able to create files in there. Then try to mount the NFS share directory. If you specify the NFS client in /etc/exports by domain or hostname, ensure the name resolves to the correct IP; an incorrect entry in /etc/hosts, for example, could cause access to be denied. In rare cases, you may have to use tcpdump to capture a trace of the mount operation. When you run Docker on the volume again, some files may get chowned back to root, or the application inside (e.g. Redis) may even fail because of the wrong ownership. It's a dilemma I don't have a perfect answer for, but you may want to study a Docker setup on GitHub that I contributed to, where you can run Docker as a non-root user.
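As a sketch of the host-side setup described above (the `./data` directory, the `1000:100` user/group and the `alpine` image are made-up examples, not from any particular setup):

```shell
# hypothetical host directory that will back the volume mount
mkdir -p ./data

# give group 100 ownership and write access; the setgid bit makes
# new files created inside inherit the group
sudo chgrp 100 ./data
sudo chmod g+ws ./data

# run the container with a matching GID so it can write to the mount
docker run --rm -v "$(pwd)/data:/data" --user 1000:100 alpine touch /data/ok
```

If the container's process runs with GID 100, files it creates land in the group-writable directory instead of failing with a permission error.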

With a couple of tweaks the WSL (Windows Subsystem for Linux, also known as Bash for Windows) can be used with Docker for Windows.

Quick Jump: Configure Docker for Windows (Docker Desktop) | Install Docker and Docker Compose within WSL | Configure WSL to Connect to Docker for Windows | Ensure Volume Mounts Work

This post only applies to WSL 1!

Check out the WSL 2 Post

Update in 2020: Now that Microsoft has released the Spring 2020 Windows update we have access to WSL 2 on all editions of Windows 10 (including Home). They even backported in support for WSL 2 in Windows versions 1903 and 1909.

I’ve recorded a video of how I have Docker Desktop along with WSL 2 working together along with other tools that I use.

I’ve decided to keep this post unmodified and fully working for WSL 1 in case you want to continue using it. Just know that I’ve moved on to using WSL 2 and that none of the steps below are necessary to do with WSL 2.

This article expects you to have WSL set up already. If you don’t, I have another article that goes over how to set up an amazing WSL based development environment within Windows. You can even run graphical apps and it doesn’t require a VM.

Onwards we go…

While the Docker daemon cannot run directly on WSL, you can use the Docker CLI to connect to a remote Docker daemon running through Docker for Windows or any other VM you create (this article covers both methods).

If you’re wondering “why not just run docker.exe and docker-compose.exe from Docker for Windows directly in WSL?”, that’s due to a bug with running Docker or Docker Compose interactively in that environment. The TL;DR is you can’t run anything in the foreground with interactive mode, which makes it unusable for real web development.

But with the Docker CLI configured to the remote Docker for Windows host it’s really awesome! Using this method, very large Rails applications respond in ~100ms (or ~5s when having to compile 10,000+ lines of JavaScript and SCSS). That’s with mounted volumes too!

I use this set up pretty much every day for Rails, Flask, Phoenix, Node and Webpack driven apps. It’s very solid in terms of performance and reliability.

Configure Docker for Windows (Docker Desktop)

In the general settings, you’ll want to expose the daemon without TLS.

Docker for Windows has been recently renamed to Docker Desktop, so if your settings look slightly different than the screenshot, no worries. It’s the same thing.

It mentions “use with caution” because any time you make an unencrypted network connection it’s worth talking about, but in this case it’s completely safe because we’re never connecting to it over a public network.

This is going to allow your local WSL instance to connect locally to the Docker daemon running within Docker for Windows. The traffic isn’t even leaving your dev box since the daemon is only bound to localhost, so not even other machines on your local network will be able to connect. In other words, it’s very safe for this data to be transmitted in plain text.

You may also want to share any drives you plan on having your source code reside on. This step isn’t necessary but I keep my code on an internal secondary HD, so I shared my “E” drive too. If you do that, go to the “Shared Drives” setting and enable it.

Can’t use Docker for Windows?

This is only necessary if you are NOT running Docker for Windows!

You’ll want to set up your own VM to run Docker. Docker Tip #73 goes into detail on how to do this, and it even includes links to videos on how to configure the VM.

Install Docker and Docker Compose within WSL

Everyone can follow along at this point!

We still need to install Docker and Docker Compose inside of WSL because it’ll give us access to both CLI apps. We just won’t bother starting the Docker daemon.

The following instructions are for Ubuntu 18.04 / 20.04, but if you happen to use a different WSL distribution, you can follow Docker’s installation guide for your distro from Docker’s installation docs.

Install Docker


You can copy / paste all of the commands below into your WSL terminal.

Ubuntu 18.04 / 20.04 installation notes taken from Docker’s documentation:
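The commands themselves didn’t survive in this copy of the post. Below is a sketch based on Docker’s Ubuntu apt-repository instructions from that era; the exact original commands may have differed slightly:

```shell
# remove any older Docker packages first (safe if none are installed)
sudo apt-get remove docker docker-engine docker.io containerd runc

# add Docker's official apt repository and install Docker CE
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

# allow running docker without sudo (takes effect on your next login)
sudo usermod -aG docker "$USER"
```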

At this point you must close your terminal and open a new one so that you can run Docker without sudo. You might as well do it now!

Install Docker Compose

We’re going to install Docker Compose using PIP instead of the pre-compiled binary on GitHub because it runs a little bit faster (both are still Python apps).
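A minimal sketch of that install (assuming you don’t have pip yet; the original post’s exact commands may have differed):

```shell
# install pip for Python 3, then install Docker Compose for your user only
sudo apt-get install -y python3-pip
pip3 install --user docker-compose
```

The `--user` flag is why the `$HOME/.local/bin` path check below matters: that’s where pip puts the `docker-compose` executable.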

The next step is to make sure $HOME/.local/bin is set on your WSL $PATH.

You can check if it’s already set by running echo $PATH. Depending on what WSL distro you use, you may or may not see /home/nick/.local/bin (replace nick with your username).

If it’s there, you’re good to go and can skip to the next section of this post.

If it’s not there, you’ll want to add it to your $PATH. You can do that by opening up your profile file with nano ~/.profile. Then anywhere in the file, on a new line, add export PATH="$PATH:$HOME/.local/bin" and save the file. (Use double quotes, not single quotes, so that $PATH and $HOME actually get expanded.)

Finally, run source ~/.profile to activate your new $PATH and confirm it works by running echo $PATH. You should see it there now. Done!

Configure WSL to Connect to Docker for Windows

The next step is to configure WSL so that it knows how to connect to the remote Docker daemon running in Docker for Windows (remember, it’s listening on port 2375).

If you’re not using Docker for Windows and followed Docker Tip #73’s guide to create your own VM then you probably did this already which means you can skip the command below.

Connect to a remote Docker daemon with this 1 liner:

echo 'export DOCKER_HOST=tcp://localhost:2375' >> ~/.bashrc && source ~/.bashrc

That just adds the export line to your .bashrc file so it’s available every time you open your terminal. The source command reloads your bash configuration so you don’t have to open a new terminal right now for it to take effect.

Verify Everything Works
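The verification commands were missing from this copy of the post, but a quick smoke test looks like this (hello-world is Docker’s standard test image):

```shell
# both of these should talk to the daemon inside Docker for Windows
docker info
docker-compose --version

# optionally run a throwaway container end to end
docker run --rm hello-world
```

If `docker info` prints daemon details instead of a connection error, WSL is successfully talking to Docker for Windows over port 2375.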

Ensure Volume Mounts Work

The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…

When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/nick/dev/myapp.

But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/nick/dev/myapp format. Honestly I think Docker should change their path to use /mnt/c because it’s more clear on what’s going on, but that’s a discussion for another time.

To get things to work for now, you have 2 options. If you’re running Windows 18.03 (Spring 2018) or newer you can configure WSL to mount at / instead of /mnt and you’re all done. If you’re running 17.09 (Fall 2017) you’ll need to do something else.

Here’s step by step instructions for both versions of Windows:

Running Windows 10 18.03+ or Newer?

First up, open a WSL terminal because we need to run a few commands.

Create and modify the new WSL configuration file:
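The file contents went missing from this copy of the post, but based on the two settings described below, /etc/wsl.conf should look like this (create the file with sudo nano /etc/wsl.conf if it doesn’t exist):

```
[automount]
root = /
options = "metadata"
```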

We need to set root = / because this will mount your drives at /c or /e instead of /mnt/c or /mnt/e.

The options = "metadata" line is not necessary but it will fix folder and file permissions on WSL mounts so everything isn’t 777 all the time within the WSL mounts. I highly recommend you do this!

Once you make those changes, sign out and sign back in to Windows to ensure the changes take effect. Win + L isn’t enough. You’ll need to do a full blown sign out / sign in.

If you get an error the next time you start your WSL terminal don’t freak out.

It’s a bug with 18.03 and you can easily fix it. Hit CTRL + Shift + ESC to open the task manager, go to the “Services” tab, find the “LxssManager” service and restart it.

This seems to only happen if you sign out of Windows instead of doing a full reboot and will likely be fixed in a future 18.03+ patch.

Once that’s done, you’re all set. You’ll be able to access your mounts and they will work perfectly with Docker and Docker Compose without any additional adjustments. For example you’ll be able to use .:/myapp in a docker-compose.yml file, etc.

What terminal emulator are you using?

If you’re using ConEmu, then you’ll want to make sure to upgrade to the latest alpha release (at least 18.05.06+ which you can see in the title bar of the settings). It contains a patched wslbridge.exe file to support a custom WSL root mount point.


The default Ubuntu WSL terminal supports this by default, so you’re all good. I don’t know if other terminals support this yet. Let me know in the comments.

You're all done! You can skip the 17.09 steps below if you followed the above steps.

Running Windows 10 17.09?


First up, open a WSL terminal because we need to run a few commands.

Bind custom mount points to fix Docker for Windows and WSL differences:
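The commands themselves didn’t survive in this copy of the post; based on the verification step and the .bashrc one-liner later on, they would have been along these lines:

```shell
# create a mount point at the root of the file system,
# then bind the existing WSL mount to it
sudo mkdir /c
sudo mount --bind /mnt/c /c
```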

You’ll want to repeat those commands for any drives that you shared, such as d or e, etc.

Verify that it works by running: ls -la /c. You should see the same exact output as running ls -la /mnt/c because /mnt/c is mounted to /c.


At this point you’re golden. You can use volume mount paths like .:/myapp in your Docker Compose files and everything will work like normal. That’s awesome because that format is what native Linux and MacOS users also use.

It’s worth noting that whenever you run a docker-compose up, you’ll want to make sure you navigate to the /c/Users/nick/dev/myapp location first, otherwise your volume won’t work. In other words, never access /mnt/c directly.

Technically you could use a symlink instead of a bind mount, but I’ve been burned in the past when it came to using symlinks and having certain tools not work because they failed to follow them correctly. Better safe than sorry here.

However, feel free to use symlinks inside WSL to access your bind mount. For example my Dev folder lives all the way in /e/Backup/VMs/workstation/home/nick/Dev and there’s no way in heck I’m going to always type that when I want to access my development files.

So inside WSL I created a symlink with ln -s /e/Backup/VMs/workstation/home/nick/Dev ~/Dev and now I can just type cd ~/Dev to access my files and everything works.

Automatically set up the bind mount:


Unfortunately you will have to run that sudo mount command every time you open a new terminal because WSL doesn’t support mounting through /etc/fstab yet (edit: it does in 18.09+, but if you’re using 18.09+ you should follow the 18.03+ steps).

But we can work around that limitation by just mounting it in your ~/.bashrc file. This is a little dirty but as far as I know, I think this is the only way to do it, so if you know of a better way, please let me know.

You can do that with this 1 liner: echo 'sudo mount --bind /mnt/c /c' >> ~/.bashrc && source ~/.bashrc and make sure to repeat the command for any additional drives you shared with Docker for Windows. By the way, you don’t need to mkdir because we already did it.


Yes I know, that means you will be prompted for your root password every time you open a terminal, but we can get around that too because Linux is cool like that.

Allow your user to bind a mount without a root password:

To do that, run the sudo visudo command.

That should open up nano (a text editor). Go to the bottom of the file and add this line: nick ALL=(root) NOPASSWD: /bin/mount, but replace “nick” with your username.

That just allows your user to execute the sudo mount command without having to supply a password. You can save the file with CTRL + O, confirm and exit with CTRL + X.

Mission complete. You’re all set to win at life by using Docker for Windows and WSL.

Let me know how it goes in the comments!

When trying to pass data between a Docker container and the host, using ADD in the Dockerfile might be sufficient at first. However, it’s one-way, gets baked into the image, and is very inflexible.

The usual solution is to mount folders using docker’s -v option. It’s simple, easy to use and pretty reliable. Just add -v "$(pwd):/root" (double quotes, so $(pwd) expands) and the current folder will be mounted to the /root folder in the container.

Using volumes is nice because they’re (can be) two way and (can) sync in real-time. Now you don’t need to rebuild your image every time you fix a typo. -v has pretty deep configuration options too, in case you want to go down the rabbit hole.

I prefer going through the looking glass though, using docker-compose. It’s a tool that allows you to easily compose multiple containers into one system. A common setup would be having your app server image, maybe the frontend served from a separate container, plus a database and a Redis cache.

When using docker-compose, volumes can be defined in docker-compose.yml. The options are again very rich.
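As a minimal sketch of such a docker-compose.yml (the service name, image name and paths are made up for illustration):

```
version: "3"

services:
  web:
    image: myapp:latest              # hypothetical app image
    volumes:
      - .:/myapp                     # bind mount the project folder
      - app_gems:/usr/local/bundle   # named volume for dependencies

volumes:
  app_gems:
```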

I had a problem though. In my work environment, everyone using Docker uses Mac. However I have a religious opposition to using piles of stinking trash for work, so I run Linux. On Mac, until very recently it was pretty much mandatory to use the docker-sync tool to mount volumes, otherwise you’d face prohibitively slow syncing speeds between the NFS host and containers. And having to wait five minutes for your Rails app to start every time you have to restart the container for some environment change is not fun.

Now I could just as well install docker-sync on Linux too, but I wanted to see if I could mount external volumes with docker-compose without having to change docker-compose.yml (which everyone else uses too).

Shortcut to the conclusion: it’s possible, but it’s hacky. If you don’t juggle volumes around much, it may well work, but docker-sync is probably a safer, better option. Then again, don’t expect miracles.

So what did I do? For most of the process, be extremely annoyed. It’s plain ridiculous how docker -v, docker-compose‘s volumes and docker volume supposedly deal with the same stuff, yet none of those have any consistency in how or what they expect in their options. It’s horrible “User Experience.”

You can easily mount a folder with docker -v as shown above, and mounting one in your docker-compose.yml is pretty trivial too. So… it shouldn’t be hard to create a named volume just like that too, right? Dream on.

docker volume has basically no options, and it feels more like an API meant for plugin developers than for users. After hours of looking, I still had no idea how to mount a folder from the host’s file system in a container.


I actually got fed up with trying and resorted to hacking. What I did: create a named volume, extract its mount point from docker volume inspect, delete that mount point and create a symlink there to the location I want mounted. I haven’t done any performance testing nor high throughput syncs, but it seems to work for now just fine. Except you can’t delete the “hacked” named volume, because it craps itself when it sees the symlink. It seems to work without problem across partitions.

Now some code:
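The script itself didn’t survive in this copy of the post. Below is a sketch reconstructing it from the description above; the function name fake_volume and the _data assumption come from the surrounding text, but the exact original may have differed:

```shell
#!/bin/sh
# fake_volume NAME [RELATIVE_PATH]
# Creates a named volume, then replaces its mount point with a symlink
# to a folder under the current directory. Needs sudo because Docker's
# volume directories are owned by root:root with permissions 700.
fake_volume() {
  name="$1"
  target="${2:-$1}"   # second argument is optional, defaults to the name

  docker volume create "$name" > /dev/null

  # e.g. /var/lib/docker/volumes/<name>/_data
  mountpoint="$(docker volume inspect --format '{{ .Mountpoint }}' "$name")"

  sudo rm -rf "$mountpoint"
  sudo ln -s "$(pwd)/$target" "$mountpoint"
}
```

You’d call it as, say, fake_volume app_code src (both names hypothetical) to make the named volume app_code point at ./src.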

I put that into a single sh file and run it to set up my environment. It only needs to be run once (when the volumes are created) and needs sudo because Docker’s files (at least on my install) are all owned by root:root with permissions 700.

This also means that if some process (like Bundler’s install or Webpack’s packaging) creates files from inside the container, those will show up as owned by root. This might disrupt your git checkouts, as you’ll get permission denied errors on them. It can be resolved by running chown -R youruser:yourgroup . in the folder.

fake_volume accepts two parameters: the first is the name of the volume to hack, and the second is an optional path relative to the current folder where to symlink that volume. It assumes that Docker uses the _data folder (which by default it does currently), so if that were to change in some future version, the script would break.

Or this experiment may break your system. Don’t do it unless you know what you’re doing (and it’s your responsibility).