Building docker containers with make on CoreOS

In this post, I’ll show how you can use a Makefile to build docker containers, and how to orchestrate the builds of containers which have build-time dependencies. Finally, I’ll show you an easy way to install make into CoreOS.

When I began my docker journey, I started out with shell scripts which would prepare the build context, run docker build, then perform cleanups.

Any non-trivial dockerized application is made up of many containers, and it wasn’t long before these scripts started to get more elaborate. I felt that using a Makefile would help me accelerate build times by skipping steps whose inputs hadn’t changed.

Not only that, a Makefile communicates the intent and the dependencies to other developers far better than a pile of scripts.

I’m not an expert on make, but I’ve picked up a few useful tricks and boiled them down into this sample. (Note: Makefiles are finicky about tabs, so I’ve also put this up as a gist.)

#-----------------------------------------------------------------------------
# configuration - see also 'make help' for list of targets
#-----------------------------------------------------------------------------

# name of container 
CONTAINER_NAME = myregistry.example.com:5000/mycontainer:latest

# list of dependencies in the build context - this example just finds all files
# in the 'myfiles' subfolder
DEPS = $(shell find myfiles -type f -print)

# name of instance and other options you want to pass to docker run for testing
INSTANCE_NAME = mycontainer
RUN_OPTS =

#-----------------------------------------------------------------------------
# default target
#-----------------------------------------------------------------------------

all   : ## Build the container - this is the default action
all: build

#-----------------------------------------------------------------------------
# build container
#-----------------------------------------------------------------------------

.built: . $(DEPS)
	docker build -t $(CONTAINER_NAME) .
	@docker inspect -f '{{.Id}}' $(CONTAINER_NAME) > .built

build : ## build the container
build: .built
 
clean : ## delete the image from docker
clean: stop
	@$(RM) .built
	-docker rmi $(CONTAINER_NAME)

re    : ## clean and rebuild
re: clean all

#-----------------------------------------------------------------------------
# repository control
#-----------------------------------------------------------------------------

push  : ## Push container to remote repository
push: build
	docker push $(CONTAINER_NAME)

pull  : ## Pull container from remote repository - might speed up rebuilds
pull:
	docker pull $(CONTAINER_NAME)

#-----------------------------------------------------------------------------
# test container
#-----------------------------------------------------------------------------

run   : ## Run the container as a daemon locally for testing
run: build stop
	docker run -d --name=$(INSTANCE_NAME) $(RUN_OPTS) $(CONTAINER_NAME)

stop  : ## Stop local test started by run
stop:
	-docker stop $(INSTANCE_NAME)
	-docker rm $(INSTANCE_NAME)

#-----------------------------------------------------------------------------
# supporting targets
#-----------------------------------------------------------------------------

help  : ## Show this help.
	@fgrep -h "##" $(MAKEFILE_LIST) | fgrep -v fgrep | sed -e 's/\\$$//' | sed -e 's/##//'

.PHONY : all build clean re push pull run stop help

Once this is configured and sitting alongside your Dockerfile, all you need to do is run make and it will build your container. If you’re feeling confident, you can `make push` to build and push in one step.
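Day to day it boils down to a handful of commands (all targets from the sample above):

make         # (re)build the container image if any of its dependencies changed
make push    # build if needed, then push to the remote registry
make run     # start a disposable local instance for testing
make stop    # stop and remove that test instance
make clean   # stop the test instance, remove the .built marker and delete the image
make help    # list the available targets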

The default build target depends on a file called .built. You can see that .built depends on the current directory, plus any files listed in the DEPS variable. If any of those dependencies are newer than the .built file, docker build is executed and .built is refreshed with the new image ID (and a fresh timestamp).

This file also has a push target, which ensures the build is up to date, and pushes to your remote registry.

There are also simple run and stop targets so you can quickly test a container after building.

Finally, there’s a neat help target, which simply parses the ## comments next to each target to provide some simple help on the available targets!
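With the sample Makefile above, the output of make help looks roughly like this:

all   :  Build the container - this is the default action
build :  build the container
clean :  delete the image from docker
re    :  clean and rebuild
push  :  Push container to remote repository
pull  :  Pull container from remote repository - might speed up rebuilds
run   :  Run the container as a daemon locally for testing
stop  :  Stop local test started by run
help  :  Show this help.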

Orchestrating multiple container builds

I set up one Makefile for each container, then higher up I have a simpler Makefile which I can use to build everything, or selective subsets.

all: common web
	$(MAKE) -C myapp-mysql
	$(MAKE) -C myapp-redis

web: common
	$(MAKE) -C myapp-web-base
	$(MAKE) -C myapp-web

common:
	$(MAKE) -C myapp-common

.PHONY : all web common

Pretty simple – if I want to build just the web containers, I run `make web`, and it will first ensure the common containers are up to date before running the Makefiles for myapp-web-base followed by myapp-web.
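A couple of typical invocations:

# bring the common containers up to date, then build the web containers
make web

# build everything
make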

How to install make on CoreOS

If you’re using CoreOS for your development environment, you might be thinking “but I don’t have make, and no means to install it!”. Well, here’s a one-liner to brighten your day:

docker run -ti --rm -v /opt/bin:/out ubuntu:14.04 \
  /bin/bash -c "apt-get update && apt-get -y install make && cp /usr/bin/make /out/make"

That just runs a temporary ubuntu container, installs make, then copies the make binary out of the container into the host’s /opt/bin directory.
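You can check the result straight away (on CoreOS, /opt/bin is already on the PATH):

/opt/bin/make --version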

Credits

Thanks to Guillaume Charmes for his blog post on docker Makefiles, and to Payton White for his neat one-liner for adding help to Makefiles.

Setting up a secure etcd cluster

etcd is a highly available key-value store for performing service discovery and storing application configuration. It’s a key component of CoreOS – if you set up a simple CoreOS cluster you’ll wind up with etcd running on each node in your cluster.

One of the appealing things about etcd is that its API is very easy to use – simple HTTP endpoints delivering easily consumable JSON data. However, by default it’s not secured in any way.

etcd supports TLS based encryption and authentication, but the documentation isn’t the easiest to follow. In this post, I’ll share my experience of setting up a secured etcd installation from scratch.

Let’s build an etcd cluster that spans 3 continents!

I’m going to walk through how you could build a highly available etcd cluster using 3 cheap Digital Ocean machines in London, New York and Singapore. This cluster will tolerate the failure of any one location. You could throw in San Francisco and Amsterdam and tolerate *two* failures. I’ll leave that as an exercise for the reader!

I’m going to demonstrate this using Ubuntu 15.04 rather than CoreOS – that’s simply because I wanted to learn about etcd without having CoreOS perform any configuration for me.

Ladies and gentlemen, start your engines!

Fire up 3 Ubuntu 15.04 machines. I chose 15.04 because I wanted to use systemd, but you should be able to use whatever you prefer. If you’re not already a Digital Ocean customer, use this referral link for a $10 credit – that’ll let you play with this setup for a couple of weeks.

Each machine need only be the most basic $5/mo offering – so go ahead and create one in London, New York and Singapore.

You need to know their IPs and domain names – for the rest of this post I’ll refer to them as ETCD_IP1..3 and ETCD_HOSTNAME1..3. Note that you don’t need to set up DNS entries, you just need the name to create the certificate signing request for each host.
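To make the snippets that follow copy-pasteable, it helps to export those details as shell variables – the values below are placeholders, so substitute your own droplets’ addresses:

export ETCD_IP1=203.0.113.10 ETCD_HOSTNAME1=etcd1.example.com   # London
export ETCD_IP2=203.0.113.20 ETCD_HOSTNAME2=etcd2.example.com   # New York
export ETCD_IP3=203.0.113.30 ETCD_HOSTNAME3=etcd3.example.com   # Singapore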

Creating a certificate authority

To create the security certificates, we need to set up a Certificate Authority (CA). There’s a tool called etcd-ca we can use to do this.

There are no binary releases of etcd-ca available, but it’s fairly straightforward to build your own in a golang docker container.

#get a shell in a golang container
docker run -ti --rm -v /tmp:/out golang /bin/bash 

#build etcd-ca and copy it back out
git clone https://github.com/coreos/etcd-ca
cd etcd-ca
./build
cp /go/etcd-ca/bin/etcd-ca /out
exit

#now we have etcd-ca in /tmp ready to copy wherever we need it
cp /tmp/etcd-ca /usr/bin/

Now we can initialise our CA. To keep things simple, I’ll use an empty passphrase.

etcd-ca init --passphrase ''

This will set up the CA and store its key in .etcd-ca – you can change where etcd-ca stores this data with the --depot-path option.
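That depot directory holds your CA’s private key, so it’s worth keeping a copy of it somewhere safe – for example:

#back up the CA depot - it contains the CA private key, so store it securely
tar czf etcd-ca-depot-backup.tar.gz .etcd-ca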

Create certificates

Now we have a CA, we can create all the certificates we need for our cluster.

etcd-ca new-cert --passphrase '' --ip $ETCD_IP1 --domain $ETCD_HOSTNAME1 server1
etcd-ca sign --passphrase '' server1
etcd-ca export --insecure --passphrase '' server1 | tar xvf -
etcd-ca chain server1 > server1.ca.crt

etcd-ca new-cert --passphrase '' --ip $ETCD_IP2 --domain $ETCD_HOSTNAME2 server2
etcd-ca sign --passphrase '' server2
etcd-ca export --insecure --passphrase '' server2 | tar xvf -
etcd-ca chain server2 > server2.ca.crt

etcd-ca new-cert --passphrase '' --ip $ETCD_IP3 --domain $ETCD_HOSTNAME3 server3
etcd-ca sign --passphrase '' server3
etcd-ca export --insecure --passphrase '' server3 | tar xvf -
etcd-ca chain server3 > server3.ca.crt

The keys and certificates are retained in the depot directory, but the export will have created the files we need on each of our etcd servers: serverX.crt and serverX.key.insecure. We also create a CA chain in serverX.ca.crt.

We also need a client key which we’ll use with etcdctl. etcd will reject client requests if they aren’t using a certificate signed by your CA, which is how we’ll be preventing unauthorized access to the etcd cluster.

etcd-ca new-cert  --passphrase '' client
etcd-ca sign  --passphrase '' client
etcd-ca export --insecure  --passphrase '' client | tar xvf -

This will leave us with client.crt and client.key.insecure.
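The server and client files now need to end up on the right machines. Assuming you ran etcd-ca somewhere you can ssh from as root, something like this will do (the rest of this post expects the files in /root on each server):

scp server1.crt server1.key.insecure server1.ca.crt client.crt client.key.insecure root@$ETCD_IP1:/root/
scp server2.crt server2.key.insecure server2.ca.crt client.crt client.key.insecure root@$ETCD_IP2:/root/
scp server3.crt server3.key.insecure server3.ca.crt client.crt client.key.insecure root@$ETCD_IP3:/root/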

Setting up each etcd server

Here’s how we set up server 1. First, we install etcd.

#install curl and ntp to keep our clock in sync
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y install curl ntp

#now grab binary release of etcd
curl -L  https://github.com/coreos/etcd/releases/download/v2.1.0-alpha.0/etcd-v2.1.0-alpha.0-linux-amd64.tar.gz -o etcd.tar.gz
tar xfz etcd.tar.gz

#install etcd and etcdctl, then clean up
cp etcd-v*/etcd* /usr/bin/
rm -Rf etcd*

#create a directory where etcd can store persistent data
mkdir -p /var/lib/etcd

Copy the server1.crt, server1.key.insecure and server1.ca.crt we created earlier to /root. Now we’ll create a systemd unit at /etc/systemd/system/etcd.service which will start etcd.

[Unit]
Description=etcd
After=network.target

[Install]
WantedBy=multi-user.target

[Service]
#basic config
Environment=ETCD_DATA_DIR=/var/lib/etcd
Environment=ETCD_NAME=etcd1
Environment=ETCD_LISTEN_PEER_URLS=https://$ETCD_IP1:2380
Environment=ETCD_LISTEN_CLIENT_URLS=https://$ETCD_IP1:2379
Environment=ETCD_ADVERTISE_CLIENT_URLS=https://$ETCD_IP1:2379

#initial cluster configuration
Environment=ETCD_INITIAL_CLUSTER=etcd1=https://$ETCD_IP1:2380,etcd2=https://$ETCD_IP2:2380,etcd3=https://$ETCD_IP3:2380
Environment=ETCD_INITIAL_CLUSTER_TOKEN=your-unique-token
Environment=ETCD_INITIAL_CLUSTER_STATE=new
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=https://$ETCD_IP1:2380

#security
Environment=ETCD_TRUSTED_CA_FILE=/root/server1.ca.crt
Environment=ETCD_CERT_FILE=/root/server1.crt
Environment=ETCD_KEY_FILE=/root/server1.key.insecure
Environment=ETCD_CLIENT_CERT_AUTH=1

Environment=ETCD_PEER_TRUSTED_CA_FILE=/root/server1.ca.crt
Environment=ETCD_PEER_CERT_FILE=/root/server1.crt
Environment=ETCD_PEER_KEY_FILE=/root/server1.key.insecure
Environment=ETCD_PEER_CLIENT_CERT_AUTH=1

#tuning see https://github.com/coreos/etcd/blob/master/Documentation/tuning.md
Environment=ETCD_HEARTBEAT_INTERVAL=100
Environment=ETCD_ELECTION_TIMEOUT=2500

ExecStart=/usr/bin/etcd
Restart=always

The etcd documentation recommends setting the election timeout to around 10x the ping time. In my test setup, I was seeing 250ms pings from London to Singapore, so I went for a 2500ms timeout.
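A quick way to get a feel for the right value is to measure the worst-case round trip yourself, for example from the London box:

#round-trip time to the furthest peer - Singapore in this setup
ping -c 5 $ETCD_IP3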

It should be clear how to adjust that unit for each server – note that the ETCD_INITIAL_CLUSTER setting is the same for each server, and simply tells etcd where it can find its initial peers.
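As a sketch, here are the lines that change on server 2 – everything else in the unit stays the same:

Environment=ETCD_NAME=etcd2
Environment=ETCD_LISTEN_PEER_URLS=https://$ETCD_IP2:2380
Environment=ETCD_LISTEN_CLIENT_URLS=https://$ETCD_IP2:2379
Environment=ETCD_ADVERTISE_CLIENT_URLS=https://$ETCD_IP2:2379
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=https://$ETCD_IP2:2380
Environment=ETCD_TRUSTED_CA_FILE=/root/server2.ca.crt
Environment=ETCD_CERT_FILE=/root/server2.crt
Environment=ETCD_KEY_FILE=/root/server2.key.insecure
Environment=ETCD_PEER_TRUSTED_CA_FILE=/root/server2.ca.crt
Environment=ETCD_PEER_CERT_FILE=/root/server2.crt
Environment=ETCD_PEER_KEY_FILE=/root/server2.key.insecure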

Now we can tell systemd about our new unit and start it.

systemctl daemon-reload
systemctl enable etcd.service
systemctl restart etcd.service

Do that on all three servers and you’re up and running!
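If a node doesn’t join, systemd makes it easy to see what etcd is complaining about:

systemctl status etcd.service
journalctl -u etcd.service -f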

Setting up etcdctl

We can set up some environment variables on each server so that etcdctl uses our client certificate. Copy client.crt and client.key.insecure to /root and create this file as /etc/profile.d/etcd.sh so that you have these environment variables on each login.

export ETCDCTL_CERT_FILE=/root/client.crt
export ETCDCTL_KEY_FILE=/root/client.key.insecure
export ETCDCTL_CA_FILE=/root/server1.ca.crt
export ETCDCTL_PEERS=https://$ETCD_IP1:2379,https://$ETCD_IP2:2379,https://$ETCD_IP3:2379

Log back in and you should be able to play with etcdctl.

etcdctl set /foo bar
etcdctl get /foo
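etcdctl can also confirm the state of the cluster as a whole (these are etcd 2.x etcdctl commands):

etcdctl cluster-health
etcdctl member list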

Here’s how you could talk to a specific node with curl.

curl --cacert /root/server1.ca.crt \
--cert /root/client.crt \
--key /root/client.key.insecure \
-L https://$ETCD_IP1:2379/v2/keys/foo
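To convince yourself that the client certificate requirement is doing its job, try the same request without the certificate – the TLS handshake should be rejected:

#this should fail, because etcd requires a client certificate signed by our CA
curl --cacert /root/server1.ca.crt -L https://$ETCD_IP1:2379/v2/keys/foo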

What next?

As it stands, you could use this setup as a secure replacement for https://discovery.etcd.io to bootstrap a CoreOS cluster. You could also use it as the basis for a CoreOS cluster which is distributed across multiple datacentres.

While exploring this, I found the following pages useful.

Dynamic configuration of Doctrine and other services in Symfony

In this post, I illustrate how Symfony’s Expression Language can be used to dynamically configure services at runtime, and also show how to replace the Doctrine bundle’s ConnectionFactory to provide very robust discovery of a database at runtime.

Traditionally, Symfony applications are configured through an environment. You can have different environments for development, staging, production – however many you need. But these environments are assumed to be static. Your database server is here, your memcache cluster is there.

If you’ve bought into the 12 factor app mindset, you’ll want to discover those things at runtime through a service like etcd, zookeeper or consul.

The problem is, the Symfony dependency injection container is compiled and cached with a read-only configuration. You could fight the framework and trash the cached container to trigger a recompilation with new parameters. That’s the nuclear option, and thankfully there are better ways.

Use the Symfony Expression Language

Since Symfony 2.4, the Expression Language provides the means to configure services with expressions. It can do a lot more besides that – see the cookbook for examples – but I’ll focus on how it can be used for runtime configuration discovery.

service.yml for a dynamic service

As an example, here’s how we might configure a standard MemcachedSessionHandler at runtime. The argument to the session.handler.memcache service is an expression which will call the getMemcached() method on our myapp.dynamic.configuration service at runtime…

services:
    #set up a memcache handler service with an expression...
    session.handler.memcache:
        class: Symfony\Component\HttpFoundation\Session\Storage\Handler\MemcachedSessionHandler
        arguments: ["@=service('myapp.dynamic.configuration').getMemcached()"]

    #this service provides the configuration at runtime
    myapp.dynamic.configuration:
        class: MyApp\MyBundle\Service\DynamicConfigurationService
        arguments: [%discovery_service_endpoint%, %kernel.cache_dir%]

Your DynamicConfigurationService can be configured with whatever it needs, like where to find a discovery service, and perhaps where it can cache that information. All you really need to focus on now is making that getMemcached() as fast as possible!

class DynamicConfigurationService
{
    private $discoveryUrl;
    private $cacheDir;

    public function __construct($discoveryUrl, $cacheDir)
    {
        $this->discoveryUrl = $discoveryUrl;
        $this->cacheDir = $cacheDir;
    }

    public function getMemcached()
    {       
        $m = new Memcached();
        $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);

        //discover available servers from cache or discovery service like
        //zookeeper, etcd, consul etc...
        //$m->addServer('10.0.0.1', 11211);

        return $m;
    }
}

In a production environment, you’ll probably want to cache the discovered configuration with a short TTL. It depends how fast your discovery service is and how rapidly you want to respond to changes.

Dynamic Doctrine Connection

Using expressions helps you configure services with ‘discovered’ parameters. Sometimes though, you want to be sure they are still valid and take remedial action if not. A good example is a database connection.

Let’s say you store the location of a database in etcd, and that location changes. If you’re just caching the last-known location for a few minutes, you’ve got to wait for that cache to expire before your app starts working again. That’s because you’re not doing any checking of the values after you read them.

In the case of a database, you could try making a connection in something like the `DynamicConfigurationService` example above. But we don’t expect the database to move often – it might happen on one in a million requests. Why burden every request with unnecessary checks?

In the case of Doctrine, what you can do is provide your own derivation of the ConnectionFactory from the Doctrine Bundle.

We’ll override createConnection() to obtain our configuration, call the parent, and retry a few times if the parent throws an exception…

use Doctrine\Common\EventManager;
use Doctrine\DBAL\Configuration;

class MyDoctrineConnectionFactory
    extends \Doctrine\Bundle\DoctrineBundle\ConnectionFactory
{
    protected function discoverDatabaseParams($params)
    {
        //    discover parameters from cache
        // OR
        //    discover parameters from discovery service
        //    cache them with a short TTL
        //
        // ...and return the resolved connection parameters
    }

    protected function clearCache($params)
    {
        // destroy any cached parameters
    }

    public function createConnection(
        array $params, 
        Configuration $config = null, 
        EventManager $eventManager = null, 
        array $mappingTypes = array())
    {
        //try and create a connection
        $tries = 0;
        while (true) {
            //so we give it a whirl...
            try {
                $realParams=$this->discoverDatabaseParams($params);
                return parent::createConnection($realParams, $config, $eventManager, $mappingTypes);
            } catch (\Exception $e) {
                //forget our cache - it's broken, and let's retry a few times
                $this->clearCache($params);
                $tries++;
                if ($tries > 5) {
                    throw $e;
                } else {
                    sleep(1);
                }
            }
        }
    }
}

To make the Doctrine bundle use our connection factory, we must set the doctrine.dbal.connection_factory.class parameter to point at our class…

parameters:
    doctrine.dbal.connection_factory.class: MyApp\MyBundle\Service\MyDoctrineConnectionFactory

So we’re not adding much overhead – we pull in our cached configuration, try to connect, and if it fails we flush our cache and try again. There’s a short sleep between retry attempts – tune the delay and the retry count to match your database’s failover characteristics.

Know any other tricks?

If you’ve found this post because you’re solving similar problems, let me know and I’ll add links into this post.