My current client has a Rails application that is tightly coupled to Nginx. This sort of coupling is fairly common and is used to avoid static files being served by the Rails stack. The most notable files in this respect are the application's assets. The assets are created as part of the application build using a command like rake assets:precompile, and they are specific to the particular version of the code.
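For reference, the precompile step usually runs at image build time. A minimal sketch, assuming a standard Rails setup where the compiled output ends up under /app/assets (the path used throughout this post; Rails defaults to public/assets):

# Run at image build time; output goes to the directory Nginx will serve.
RAILS_ENV=production bundle exec rake assets:precompile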
I had previously used Docker's --volumes-from to share files between containers sidecar-style, and it worked quite well. To my surprise, I found that Kubernetes does not have a direct analog to this feature. Kubernetes offers various types of volumes, and the one that comes closest is emptyDir. However, emptyDir in Kubernetes is not the same as Docker's --volumes-from.
In Docker, --volumes-from mounts the volumes declared by another container via --volume directly into the new container. To be exact, if you run a container with

docker run --name app --volume /app/assets <app image>

you can then access the data in /app/assets from another container like this:

docker run --name nginx --volumes-from app <nginx image>

(Note that --volumes-from takes a container name, not a path; the volumes are inherited at their original mount points.) This gives the nginx container direct access to the assets from the app container. Kubernetes' emptyDir does not create such a direct link. Instead it creates an empty directory (yeah, the name kinda hints at that), which is then made available in the containers. Here is what it looks like:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      # here we set up the empty directory
      volumes:
        - name: shared-assets
          emptyDir: {}
      containers:
        - name: app
          image: app
          volumeMounts:
            - name: shared-assets
              mountPath: /app/assets
        # the nginx container
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared-assets
              mountPath: /app/assets
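You can sanity-check the wiring by writing a file from one container and listing it from the other. A quick check, where test-pod is a hypothetical pod name (use kubectl get pods to find the real one):

kubectl exec test-pod -c app -- touch /app/assets/probe
kubectl exec test-pod -c nginx -- ls /app/assets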
On the surface, this setup looks very similar to the Docker approach. Indeed, the Kubernetes emptyDir approach has the added benefit of allowing the mounts to be in different places in each container. For example, you could change the nginx container to mount shared-assets at /flubble/monkey instead. This isn't doable with the Docker approach.
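In the Deployment above, that remapping is just a different mountPath on the nginx container's volumeMounts entry:

# the nginx container, mounting the shared volume elsewhere
- name: nginx
  image: nginx
  volumeMounts:
    - name: shared-assets
      mountPath: /flubble/monkey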
The issue arises if you have data in the location to which you connect the emptyDir. Assume that you already have the files to share in /app/assets. When you create the emptyDir and attach it to the app container at that path, it has the same effect as mounting on top of a directory that contains data. This is the same behavior you'll see on the Linux command line: the existing files are masked and become hidden. This is clearly not desirable.
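You can reproduce this masking on any Linux box by mounting a tmpfs over a directory that already contains a file (a quick sketch, run as root):

mkdir -p /tmp/demo
touch /tmp/demo/existing-file
mount -t tmpfs tmpfs /tmp/demo   # mount on top of a non-empty directory
ls /tmp/demo                     # empty: existing-file is masked
umount /tmp/demo
ls /tmp/demo                     # existing-file reappears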
An easy way to get around this limitation is to copy the shared files into the emptyDir location. The above deployment would then look like this:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
    spec:
      # here we set up the empty directory
      volumes:
        - name: shared-assets
          emptyDir: {}
      containers:
        - name: app
          image: app
          volumeMounts:
            - name: shared-assets
              ### the new location
              mountPath: /app/shared-assets
        # the nginx container
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared-assets
              mountPath: /app/assets
This links /app/shared-assets in the app container to the emptyDir, which is then visible in the nginx container under /app/assets. The volume will still start out empty. To populate it, you need to copy the assets in the app container from where they naturally live to the location backed by the emptyDir. The following is sufficient (note the /. suffix, which copies the directory's contents rather than nesting the directory itself inside the target):

cp -r /app/assets/. /app/shared-assets/
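One way to run that copy automatically at startup is to wrap the container's command in the Deployment. A sketch, where bin/rails server -b 0.0.0.0 is an assumption standing in for whatever the app image normally runs:

- name: app
  image: app
  command: ["sh", "-c"]
  # copy the assets into the shared volume, then start the app;
  # replace the exec'd command with the image's real entrypoint
  args:
    - cp -r /app/assets/. /app/shared-assets/ && exec bin/rails server -b 0.0.0.0
  volumeMounts:
    - name: shared-assets
      mountPath: /app/shared-assets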
This is a pretty easy way to create a sidecar that accesses files from another container. It does, however, add an extra step and extra time to container startup. Depending on the size of the data that needs to be copied, it may or may not be a reasonable approach for your use case. There are several other alternatives.
One such option is to build the assets directly into the nginx container, depending on the use case. In my tightly coupled use case, which also required additional rewriting and related Nginx configuration, this was the easiest way forward. It also provided great parity with how the application was being run outside of Kubernetes. It is difficult to overstate the value of such familiarity when many other big changes are underway. Resource consumption was also very low, which made this an easy choice. We may consider the alternatives in the future, but for now this is proving a solid approach.