
Dynamic configuration of Doctrine and other services in Symfony

In this post, I illustrate how Symfony’s Expression Language can be used to configure services dynamically at runtime, and also show how to replace the Doctrine bundle’s ConnectionFactory to provide robust discovery of a database at runtime.

Traditionally, Symfony applications are configured through an environment. You can have different environments for development, staging, production – however many you need. But these environments are assumed to be static: your database server is here, your memcache cluster is there.

If you’ve bought into the 12-factor app mindset, you’ll want to discover those things at runtime through a service like etcd, zookeeper or consul.

The problem is, the Symfony dependency injection container is compiled and cached with a read-only configuration. You could fight the framework and trash the cached container to trigger a recompilation with new parameters. That’s the nuclear option, and thankfully there are better ways.

Use the Symfony Expression Language

Since Symfony 2.4, the Expression Language provides the means to configure services with expressions. It can do a lot more besides that – see the cookbook for examples – but I’ll focus on how it can be used for runtime configuration discovery.

services.yml for a dynamic service

As an example, here’s how we might configure a standard MemcachedSessionHandler at runtime. The argument to the session.handler.memcache service is an expression which will call the getMemcached() method on our myapp.dynamic.configuration service at runtime…

services:
    #set up a memcache handler service with an expression...
    session.handler.memcache:
        class: Symfony\Component\HttpFoundation\Session\Storage\Handler\MemcachedSessionHandler
        arguments: ["@=service('myapp.dynamic.configuration').getMemcached()"]

    #this service provides the configuration at runtime
    myapp.dynamic.configuration:
        class: MyApp\MyBundle\Service\DynamicConfigurationService
        arguments: ["%discovery_service_endpoint%", "%kernel.cache_dir%"]
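
If you want the framework to actually use this handler for sessions, you’d point it at the service in your framework configuration – standard Symfony config, shown here for completeness:

framework:
    session:
        handler_id: session.handler.memcache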

Your DynamicConfigurationService can be configured with whatever it needs, like where to find a discovery service, and perhaps where it can cache that information. All you really need to focus on now is making that getMemcached() as fast as possible!

namespace MyApp\MyBundle\Service;

class DynamicConfigurationService
{
    private $discoveryUrl;
    private $cacheDir;

    public function __construct($discoveryUrl, $cacheDir)
    {
        $this->discoveryUrl = $discoveryUrl;
        $this->cacheDir = $cacheDir;
    }

    public function getMemcached()
    {
        $m = new \Memcached();
        $m->setOption(\Memcached::OPT_DISTRIBUTION, \Memcached::DISTRIBUTION_CONSISTENT);

        //discover available servers from cache or a discovery service
        //like zookeeper, etcd, consul etc...
        //$m->addServer('10.0.0.1', 11211);

        return $m;
    }
}

In a production environment, you’ll probably want to cache the discovered configuration with a short TTL. It depends how fast your discovery service is and how rapidly you want to respond to changes.
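
For example, a file in the kernel cache directory makes a crude short-TTL cache. Here’s a minimal sketch of such a method for the DynamicConfigurationService, assuming a hypothetical getServersFromDiscovery() helper that performs the actual etcd/zookeeper/consul lookup:

private function getCachedServers($ttl = 60)
{
    $cacheFile = $this->cacheDir . '/memcached-servers.json';

    //serve from the cache file while it's still fresh
    if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return json_decode(file_get_contents($cacheFile), true);
    }

    //hypothetical helper that queries the discovery service
    $servers = $this->getServersFromDiscovery();
    file_put_contents($cacheFile, json_encode($servers));

    return $servers;
}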

Dynamic Doctrine Connection

Using expressions helps you configure services with ‘discovered’ parameters. Sometimes though, you want to be sure they are still valid and take remedial action if not. A good example is a database connection.

Let’s say you store the location of a database in etcd, and the location of the database changes. If you’re just caching the last-known location for a few minutes, you’ve got to wait for that to time out before your app starts working again. That’s because you’re not doing any checking of the values after you read them.

In the case of a database, you could try making a connection in something like the DynamicConfigurationService example above. But we don’t expect the database to change often – it might happen on one in a million requests. Why burden the application with unnecessary checks?

In the case of Doctrine, what you can do is provide your own subclass of the ConnectionFactory from the Doctrine Bundle.

We’ll override the createConnection() method to obtain our configuration, call the parent, and retry a few times if the parent throws an exception…

namespace MyApp\MyBundle\Service;

use Doctrine\Bundle\DoctrineBundle\ConnectionFactory;
use Doctrine\Common\EventManager;
use Doctrine\DBAL\Configuration;

class MyDoctrineConnectionFactory extends ConnectionFactory
{
    protected function discoverDatabaseParams(array $params)
    {
        //    discover parameters from cache
        // OR
        //    discover parameters from discovery service
        //    cache them with a short TTL
    }

    protected function clearCache(array $params)
    {
        // destroy any cached parameters
    }

    public function createConnection(
        array $params,
        Configuration $config = null,
        EventManager $eventManager = null,
        array $mappingTypes = array())
    {
        //try to create a connection, retrying a few times before giving up
        $tries = 0;
        while (true) {
            try {
                $realParams = $this->discoverDatabaseParams($params);
                return parent::createConnection($realParams, $config, $eventManager, $mappingTypes);
            } catch (\Exception $e) {
                //our cached parameters may be stale - forget them and retry
                $this->clearCache($params);
                $tries++;
                if ($tries > 5) {
                    throw $e;
                }
                sleep(1);
            }
        }
    }
}

To make the Doctrine bundle use our connection factory, we must set the doctrine.dbal.connection_factory.class parameter to point at our class…

parameters:
    doctrine.dbal.connection_factory.class: MyApp\MyBundle\Service\MyDoctrineConnectionFactory

So we’re not adding much overhead – we pull in our cached configuration, try to connect, and if it fails we flush our cache and try again. The example sleeps for a second between attempts; tune the retry count and delay to match your database failover characteristics.
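
For illustration, here’s roughly what discoverDatabaseParams() might look like when it queries etcd directly, with the caching omitted. This is only a sketch – the endpoint, key name and JSON layout are assumptions:

protected function discoverDatabaseParams(array $params)
{
    //hypothetical etcd key holding e.g. {"host": "10.0.0.5", "port": 3306}
    $response = file_get_contents('http://127.0.0.1:2379/v2/keys/services/database');
    $node = json_decode($response, true);
    $discovered = json_decode($node['node']['value'], true);

    //overlay the discovered values onto the statically configured params
    return array_merge($params, $discovered);
}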

Know any other tricks?

If you’ve found this post because you’re solving similar problems, let me know and I’ll add links into this post.

Blue-Turquoise-Green Deployment

In this post I’m putting a name to something I’ve found myself doing in order to deliver zero-downtime deployments without any loss of database consistency.

The idea of Blue-Green deployment is well-established and appealing. Bring up an entire new stack when you want to deploy, and when you’re ready, flip over to it at the load balancer.

Zero downtime deployment. It makes everyone happy.

But…data synchronization is hard

Cloud environments make it easy to bring up a new stack for blue-green deployments. What’s not so easy is dealing with transactions during the flip from blue to green.

During that time, some of your blue services might be writing data into the blue database, and on a subsequent request, trying to read it out of green.

You either have to live with a little inconsistency, or drive yourself crazy trying to get it synchronized.

What about a common database?

This won’t suit all applications, but you can do blue-green deployment with a common data storage backend. The actual blue and green elements of the stack consist of application code and any data migration upgrade/downgrade handling logic.

Most of the time, if you’re trying to push out frequent updates, those updates are software changes with infrequent database schema changes.

So, you can happily make several releases a day with zero downtime. However, sooner or later you’re going to make a breaking schema change.

The horror of backwards incompatible schema changes

So, you’re barrelling along with your shared data backend, and you find the current live blue deployment will fail when the new green deployment performs its database migrations on your common data store.

Now you can’t deploy green without a scheduled downtime.

But you don’t want scheduled downtime! How can we do a zero downtime deployment and still retain the green-blue rollback capability?

Introducing Blue-Turquoise-Green deployment!

You need to create a turquoise stack. That’s the blue release, patched to run without failure on both a blue and a green database schema. This means it might have to detect the availability of certain features and adapt its behaviour at runtime. It might look ugly, but you’re not planning to keep it for long.
[Diagram: how the ‘turquoise’ stack allows zero-downtime deployment on a shared data store]
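
What might that runtime adaptation look like? Here’s a minimal sketch using Doctrine’s schema manager, assuming $conn is a DBAL connection; the table and column names are hypothetical:

//turquoise code path: adapt to whichever schema is currently live
$columns = $conn->getSchemaManager()->listTableColumns('users');

if (isset($columns['display_name'])) {
    //green schema - the new column exists, so use it
} else {
    //blue schema - fall back to the old behaviour
}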

Now, you can deploy turquoise. It runs just fine on the blue database, and you can then run the database migrations for green. It keeps on trucking. Now that you’re safely running blue on a green-compatible database, you can go ahead and deploy the green stack.

If you do run into problems, you’ve got everything in place to downgrade. Flip from green back to turquoise. Revert the database migrations, and you can then flip from turquoise to blue, and you’re back where you started.

Thinking in turquoise

For me, this has been more of a thought experiment. I’ve found that if you plan to do blue-green deployment on a shared data backend, you naturally adopt a ‘turquoise’ mindset to the migrations.

That means ensuring you design schema changes carefully, and deploy them in advance of the code which actually requires them. In other words, you build in that turquoise-coloured forward compatibility ahead of time, and you’re back to low-risk blue-green deployments!
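
As a concrete example, an additive Doctrine migration like this one can ship well ahead of the code that uses the new column – the version class and column names are hypothetical:

use Doctrine\DBAL\Migrations\AbstractMigration;
use Doctrine\DBAL\Schema\Schema;

class Version20150201000000 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        //additive only: the current (blue) code simply ignores the new column
        $this->addSql('ALTER TABLE users ADD display_name VARCHAR(255) DEFAULT NULL');
    }

    public function down(Schema $schema)
    {
        $this->addSql('ALTER TABLE users DROP COLUMN display_name');
    }
}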

Finally, why turquoise?

Because turquoise is a much nicer word than cyan. I should also say that I don’t claim this is a new idea. Giving a name to things makes it easier to discuss with others – I was trying to describe this approach to someone and wrote this as a result. Comments are welcome.

Sending signals from one docker container to another

Sometimes it’s useful to send a signal from one container to another. In a previous post, I showed how to run confd and nginx in separate containers. In that example, the confd container used the docker client to send a HUP to nginx.

To do this, the confd container had the full docker installation script run inside it. That works, but it adds a lot of needless bulk. You can also run into problems if the docker client you install is newer than the server you’re pointing it at.

But there’s an easier way – we can just make docker API calls using HTTP through its unix domain socket.

Step 1 – share /var/run/docker.sock from the host into the container

The socket we need is at /var/run/docker.sock, and we can share it into the container when we launch it with docker run -v /var/run/docker.sock:/var/run/docker.sock ...

Step 2 – send HTTP through the socket

Here’s a handy gist by Everton Ribeiro which shows various ways of doing this. Also note that the latest release of curl (7.40) has support for using a unix domain socket too.
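
For example, with curl 7.40 or later you can talk to the socket directly with the --unix-socket option:

curl --unix-socket /var/run/docker.sock http://localhost/images/json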

I used netcat – here’s a simple example which should produce a result…

echo -e "GET /images/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock

Check out the docker API documentation for more calls you can make. For example, let’s see how we can send a signal to another container.

echo -e "POST /containers/nginx/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock

Brilliant! We can communicate with docker and we didn’t need to install anything else to do it!