People are addicted to the idea that there’s a simple answer for everything, and in our industry, that answer is currently immutability. The basic idea is that mutable state is a common source of defects in computer programs, and that using immutable objects can result in much simpler code with fewer defects (especially when it comes to concurrent programming).
Of course, the law of the instrument means people will want to use immutability to solve problems outside the world of software architecture too. Hence, we now have “immutable infrastructure”: the idea that you should never make changes to a running machine. If something needs to be changed, you should build a completely new machine, swap it in place of your old one, then destroy the old one.
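The build-swap-destroy cycle above can be sketched as a script. Every name here (`build_image`, `launch`, `route_traffic`, `destroy`) is a stub standing in for whatever your provisioning tooling actually provides; none of them are real CLI commands.

```shell
# Hypothetical helpers, stubbed with echo so the sketch runs anywhere.
build_image()   { echo "image-$1"; }          # bake a fresh machine image
launch()        { echo "machine-from-$1"; }   # boot a brand-new machine from it
route_traffic() { echo "traffic -> $1"; }     # point the load balancer at it
destroy()       { echo "destroyed $1"; }      # retire the old machine

# The old machine is never modified -- it is simply replaced.
img=$(build_image v2)
new=$(launch "$img")
route_traffic "$new"
destroy "machine-from-image-v1"
```

The point of the pattern is that no step ever runs against an existing machine; every change is a full rebuild followed by a traffic swap.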
This idea comes from the fact that it’s impossible to manage every single detail of a machine using current configuration management tools. This leaves open the possibility for unexpected changes on long-running machines, which can lead to problems caused by gradual configuration drift.
The Limits of Immutability
Unfortunately, with all our focus on the benefits of immutability, some of us seem to be missing the fact that immutability is only beneficial to a point. In software, pure immutability gives you nothing except a hot CPU. You eventually need to mutate the system in some way to do anything useful, even if it’s just to print something on a screen.
Similarly, stateless application servers probably aren’t very useful on their own, and since there’s no such thing as an immutable database, immutability is probably not something you’d want to enforce across your entire infrastructure. This means you end up with two different ways of managing your infrastructure. The immutable approach will work for truly stateless services, but you probably still need a way to manage long-running, stateful services anyway. So have you really solved a problem, or have you just made things more complicated?
Why Disposable is Better
Treating your infrastructure as disposable means accepting configuration drift as a possibility, then concentrating on making sure you can easily rebuild any machine from scratch when you want to. This is a more traditional way of using configuration management, and my preference for it basically boils down to having a consistent and more flexible way to manage my entire infrastructure.
For example, in the case of small, infrastructure-wide configuration changes, an immutable approach would require you to destroy and rebuild your entire infrastructure. On the other hand, a disposable approach means you can simply update your configuration management code and let the change slowly roll out on its own. Treating your infrastructure as disposable means you’re never forced to destroy and rebuild machines if you don’t want to, but you can still do it any time you need to.
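A disposable-style change of the kind described above might amount to one small, idempotent step in your configuration management code, which each machine converges on during its next run. The file path and setting here are purely illustrative:

```shell
# Idempotent "ensure this line exists" step, as a config management tool
# would apply it on every run. Running it twice leaves the file unchanged,
# which is what makes a gradual, machine-by-machine rollout safe.
CONF=$(mktemp)                      # illustrative stand-in for a real config file
LINE="max_connections = 512"
grep -qxF "$LINE" "$CONF" || echo "$LINE" >> "$CONF"
```

Because the step converges rather than appends blindly, it can roll out slowly across long-running machines without any of them being destroyed and rebuilt.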
Although I understand the problems immutable infrastructure is trying to solve, I frankly have yet to experience them in any significant way. My sense is that when you’re working with good people who know how to use their tools, problems caused by configuration drift should be rare. And even if these problems do occur, proper use of configuration management means there’s nothing stopping you from destroying and rebuilding machines any time you want. And that’s my main complaint about immutable infrastructure: its ideas seem to be purely subtractive, removing tools from my toolbox as if I can’t be trusted to use them responsibly.
Incidentally, when I see people going as far as suggesting things like disabling SSH and using random/obscure hostnames to make logging into individual machines more difficult, I have to wonder if they’ve ever had to troubleshoot anything in a real production environment (Hint: A centralized logging architecture is great, but by no means a substitute for tools like