Posted: February 18, 2016 in Work Stuff
Building a platform to demonstrate a solution or concept is always a challenge. Generally the platform evolves over the course of development, but then ensuring either that the platform is stable enough for demonstrations or that a new version can be deployed requires time and effort.


Once our involvement is finished – how long do we need to maintain it for? Do we hand it off to another team? Who needs access, who gets to ensure that the platform is still working X weeks, months or even years later?

It is these concerns that often leave staff spending considerable amounts of time doing nothing but platform maintenance to ensure that the various projects are still available. We’ve all been in the situation where, weeks after a long-running platform is finally retired, someone asks “can we just…?”

So what happens if we shutter a platform as it is deemed end-of-life, but sometime later another part of the business needs something similar? In these scenarios, it is often difficult to just turn the old solution back on – software has moved on, and libraries have been updated and no longer work in the same way they did when we originally built the solution. Old and new versions of important libraries may not work side-by-side, increasing the potential for conflicts with current projects.

Systems like Jenkins allow us to coordinate the deployment and testing of a solution, but even the simplest task in Jenkins requires considerable setup time and a stable endpoint to deploy to. This is fine for current projects, but for historic projects it is much less useful.

A simple solution to this issue is Docker. Docker is a method for deploying applications within a software container that can theoretically be run on any computer. This means that we can specify the underlying components that our solution uses and then run our solution in its own self-contained box.
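As a sketch of the idea – not the actual Uberany image; the base image, port and file names here are purely illustrative – a Dockerfile that pins a solution’s underlying components might look like this:

```dockerfile
# Illustrative only: pin the exact runtime the solution was built against
FROM node:4.3

# Copy the solution into its own self-contained box
COPY . /app
WORKDIR /app

# Install the dependency versions the solution expects
RUN npm install

# The port the app listens on (illustrative)
EXPOSE 3000

CMD ["node", "app.js"]
```

Because the base image and dependencies are recorded in the Dockerfile, rebuilding the container years later reproduces the environment the solution was originally developed against.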

Shuttering a project means simply stopping the running container. If an archived project needs to be re-instated, we can just spin up the associated Docker container and use it again.
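Shuttering and re-instating map directly onto standard Docker commands – the container name ‘myproject’ below is just a placeholder:

```shell
# Shutter the project: stop the container but keep its filesystem state
docker stop myproject

# Later: list all containers, including stopped ones, to find archived projects
docker ps -a

# Re-instate the archived project
docker start myproject
```
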

The trouble is that building a new Dockerfile for each platform becomes another maintenance task – much, much smaller than full-on server maintenance, but still something that needs a dedicated team member to undertake. It is also an undertaking that is required almost as soon as the project starts, so that the rest of the team has something to develop against.

We then have the issue of ensuring that everyone in the team is developing against the correct version of the Docker container. How do we ensure that the changes needed by one part of the project are reflected in the latest Docker build? Who is responsible for ensuring that the newest requirements are included in the Docker container?

The development platforms that the team uses for most of its projects are primarily Clojure with NodeJS/ClojureScript. Occasional forays into pure Java, Python or Ruby are also possible – if not for the entire project, then perhaps for sub-tasks such as testing or bundling apps into their final version. Like most dev teams we also rely on a source control system (git) to look after versioning and distribution of code between team members.

Introducing Uberany

The solution we came up with is a single universal Docker container that is capable of running Java, Clojure, NodeJS, Python, Ruby and git. Instantiating the container involves telling it which git repository we want to check out, the user/password combination necessary to access the repo, and any other information such as the ports that the app needs opened.

The container is set up so that on start it will automatically check out the requested git repo to an internal directory and then look for a bash script called ‘build.sh’. This file controls the environmental setup within the container if it is needed – e.g. in the case of Node, this can mean running the ‘npm install’ command to ensure that all the correct node modules are present. The build.sh can also be used to test the application prior to running it.

Once the build step is completed, the container will then look for a ‘run.sh’ that contains the code necessary to actually start the app. At that point, your app is deployed and running in its own Docker container.

A build script:

npm install

A run script:

node app.js


Starting the Docker container interactively:

docker run -t -i --name ????? -e giturl=??? -e username=XXXX -e password=YYYY uberany

Or in daemon/background mode (-d instead of -t -i):

docker run -d --name ????? -e giturl=??? -e username=XXXX -e password=YYYY uberany
Anyway, the git repo for the Dockerfile is here.
