Ephemeral Dockers as Tool Containers

Developing tools to automate what you do often is common sense. But what if you want to share your tools with other developers? As soon as you have something that relies on more than a simple script, you’re going to be faced with dependencies and distribution headaches. Docker can solve that.

“Anything you do more than twice should be automated” is a common mantra in the software industry. And it makes sense: doing something once, well that could be considered a one-off. Doing the same thing again? Coincidence. The third time you start the task, it’s time to bite the bullet and take the time to script it.

The resulting script might be a few lines of simple bash. Most developers have Bash installed, and it’s available on almost all platforms. So you’ve got a universal script that’ll run anywhere.

But your task might be more complicated than Bash can handle. Or you might be refining a script which is becoming too complicated to code in simple Bash functions. You decide to write some nice object-oriented Python, or Ruby, or Java, or… some other language. You build the runnable scripts and you put them into Git so everyone can access them.

Soon, the complaints start rolling in:

“I don’t have Python 3 – I have to stick with 2.7 because Fred’s widget-frobulator script requires it.”

“I can’t install that Ruby gem requirement because I have something else that requires an earlier version.”

“I’m not installing $ENVIRONMENT, I’ve got too much stuff installed already.”

– all my colleagues, all the time

There must be an easier way to package tooling.

Enter Docker.

Geocoder-in-a-box

Here’s an example: geocoder-in-a-box. It takes a single argument – a location name or an IP address – and provides geographical information about it.

The script is very simple, but it has two very important requirements: it needs Ruby 2.6.5, and it needs a specific version of Alex Reisner’s Geocoder library. We will additionally need to pass command-line arguments to it. The resulting image should go to a repository so the team can find it.

I’ve put all the code into GitLab. For the sake of the example, I’m going to push to Docker Hub – but you (like us) probably have your own internal Docker repository too.

Code

The bit that does the work:

#!/usr/bin/env ruby
require 'geocoder'

abort 'At least one argument must be supplied:  try a location ("Stuttgart") or an IP address ("139.162.203.138")' unless ARGV.count > 0
result = Geocoder.search( ARGV[0] ).first
abort "  No result found for #{ARGV[0]}" if result.nil?
puts "  #{ARGV[0]}: #{result.city}, #{result.state}. Lat/Long: #{result.coordinates}"

You can run this from the command line (it’s a runnable Ruby script), but you’ll probably need to gem install geocoder -v 1.5.2 first. You can try it out:
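Assuming Ruby 2.6.x is on your PATH (the lookups need network access, and the exact output depends on what the geocoding service returns), a local run looks like this:

```shell
# One-off dependency install, then two sample lookups
gem install geocoder -v 1.5.2
./rgeo.rb Stuttgart
./rgeo.rb 139.162.203.138
```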

Docker Packaging

Next comes the Dockerfile. This tells Docker how to build an image containing the right version of Ruby, and get our dependency installed too. Here’s the code:

# Docker Tools Demo - rgeo - geolocate something!
#
# Creates a docker image containing a small geolocation tool, to illustrate
# packaging tools as ephemeral dockers.
#
# John Hawksley <john_hawksley@intergral.com>

FROM ruby:2.6.5-alpine
MAINTAINER John Hawksley <john_hawksley@intergral.com>

COPY ./rgeo.rb /rgeo.rb
RUN gem install geocoder -v 1.5.2

ENTRYPOINT ["/rgeo.rb"]

You can build an image by running this, in the same directory as the Dockerfile:

docker build -t rgeo .

The image will be built using the ruby:2.6.5-alpine base (the FROM directive). This is a compact Alpine Linux image into which Ruby 2.6.5 has been pre-installed.

The COPY directive simply copies our script into the image. It doesn’t have to be a script – it could be its own distributable unit, like a Gem, Egg or Jar. The material being copied must be at the same folder level as the Dockerfile or below, and there can be more than one COPY directive.

The RUN directive installs our dependency – Geocoder 1.5.2 – into the image. Again, multiple RUN directives can appear, and it’s not uncommon to see apk or apt package-management commands here. The main purpose of these commands is to build an environment with the right supporting packages and tooling, so your own code can run.
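For instance, a sketch of a Dockerfile for a tool that needs a native dependency and an extra support file might look like this – the package and file names here are illustrative, not part of rgeo:

```dockerfile
FROM ruby:2.6.5-alpine

# More than one COPY is fine; sources must live at or below
# the Dockerfile's own folder.
COPY ./rgeo.rb /rgeo.rb
COPY ./settings.yml /settings.yml

# apk is Alpine's package manager; build-base supplies a C toolchain
# for gems with native extensions.
RUN apk add --no-cache build-base
RUN gem install geocoder -v 1.5.2
```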

Finally, the magic: ENTRYPOINT. This tells Docker what to actually run when we start a container from the image. We copied the script to /rgeo.rb (the COPY directive above), and it’s executable, so we can run it directly.

If your tooling requires some special handling (environment variables, for instance, or specific actions pre- and post-run), you might want to COPY in a shell script which does the actual call of your tooling for you.
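As a sketch, such a wrapper – COPY’d into the image and set as the ENTRYPOINT in place of rgeo.rb itself – might look like this; the variable name and messages are made up for illustration:

```shell
#!/bin/sh
# entrypoint.sh -- hypothetical wrapper around the real tool.
# Pre-run: set up environment the tool expects.
export GEOCODER_LANG="${GEOCODER_LANG:-en}"   # illustrative variable
echo "[wrapper] lookup starting: $*" >&2
# Run the tool; post-run: report failures rather than dying silently.
/rgeo.rb "$@" || echo "[wrapper] lookup failed" >&2
```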

After the build completes, it should print lines like these (your image ID will differ):

Successfully built 77f1cb7cdf08
Successfully tagged rgeo:latest

Now we can try it out:
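For instance (the answer itself depends on the geocoding service, so no output is shown here):

```shell
docker run --rm rgeo:latest Stuttgart
docker run --rm rgeo:latest 139.162.203.138
```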

It’s working perfectly. We can create a nice alias for this, so that our colleagues don’t have to complain about long-winded Docker commands when they need to geolocate something. Aliases are typically added to developers’ shell startup scripts (.bashrc, .zshrc for example):

alias geo='docker run --rm rgeo:latest'

The --rm option tells Docker to remove the container once it has finished. This makes the run ephemeral: nothing remains afterwards.

Now we can just use the alias, as if it were a command installed locally. Docker never appears:
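With the alias loaded into the shell, geo Stuttgart expands to docker run --rm rgeo:latest Stuttgart behind the scenes:

```shell
geo Stuttgart
geo 139.162.203.138
```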

Distribution via Docker Hub

To push images to Docker Hub, you’ll need a username and password (go ahead and sort that out – I’ll get a cuppa ☕️).

In your terminal session, log in to the hub using docker login. If everything goes well, you’ll see Login Succeeded.

Docker uses the tag infrastructure to differentiate local images from Hub images. This is done by prepending your Hub username to the image:

docker tag rgeo:latest jhawksleyintergral/rgeo:latest

Finally, push the image by referring to its tag:

docker push jhawksleyintergral/rgeo:latest

That’s it! Your colleagues can then use the full tag name in their aliases, and the image will be pulled from the Hub automatically:
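A colleague’s alias then points at the full tag; on the first invocation, Docker pulls the image from the Hub before running it:

```shell
alias geo='docker run --rm jhawksleyintergral/rgeo:latest'
geo Stuttgart
```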

Subsequent calls don’t pull the image again – naturally. They use the locally cached version.

Conclusion

Docker makes your life easier by providing a simple packaging methodology for your code.

Your colleagues will love you (don’t they already?) because they gain access to awesome (well, you wrote it, so that’s a given) tooling without having to set up a complicated environment.

If you want to know more about Docker, there are loads of tutorials online. There’s a lot more you can do with it than what we’ve covered here.
