I’ve always had a deep-seated paranoia when it comes to releasing code. It’s a process that instils anxiety. For as long as I can remember, I’ve never been at ease when releasing software. Even after the copious testing, the code reviews, and the refactors, I’m worried I’ve missed something. In my current job, this sense of paranoia is particularly heightened, as the engineers are—quite rightly—responsible for shipping their code to production. There isn’t a separate team that schedules releases. Developers write the code, test it, and deploy it. We press the button to send our work into the wide world.
This week, I’ve been reflecting on my attitude towards shipping software. This paranoia, whilst it can at times be crippling, can also be an advantage. When I’m paranoid about a release, I test smarter; I think of new ways to interact with the feature; and I draft team members into my thought process. Some of my most successful deployments have resulted from overly paranoid testing.
I recall a recent experience where I was tasked with driving the release of an updated version of a web application. It wasn’t a mere update; it was a rewrite. The difficulty was that we wanted to avoid downtime on the site, so we couldn’t simply tear down the old site and then deploy the new one. With some DNS magic, this was feasible, but getting to where we wanted to be required an intricate plan. Knowing how delicate an operation this was, I involved as many people as I could justify. I put together a runbook, decided on the time and date of the rollout, and created a task force to work through it together on the day. I also simulated the actual release process on a developer environment.
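To give a flavour of what one small piece of that kind of cutover can look like (without claiming it’s what we actually ran), here’s a minimal Python sketch that polls DNS until a hostname resolves to a new origin address. The hostname, IP address, and polling interval below are placeholders, not real values.

```python
import socket
import time

# Placeholder values: substitute the site's real hostname and new origin.
HOSTNAME = "www.example.com"
NEW_ORIGIN_IP = "203.0.113.10"
POLL_INTERVAL_SECONDS = 30


def resolved_addresses(hostname):
    """Return the set of IPv4 addresses the hostname currently resolves to."""
    results = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {entry[4][0] for entry in results}


def wait_for_cutover():
    """Poll DNS until the hostname points at the new origin, then report it."""
    while True:
        addresses = resolved_addresses(HOSTNAME)
        if NEW_ORIGIN_IP in addresses:
            print(f"{HOSTNAME} now resolves to the new origin: {addresses}")
            return
        print(f"Still resolving to {addresses}; checking again shortly...")
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    wait_for_cutover()
```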
The deployment was completed without a hitch, with no site downtime. The hours of preparation—and the restless night prior—were all worth it.
There’s real value in an overly cautious approach to testing software. The challenge is knowing when the testing is no longer offering value. We can test a feature repeatedly to make ourselves feel better about the process, yet still not be probing the software effectively. The method I use for avoiding this trap is to list out—either in my head or on paper—all the ways I have tested the particular feature. Unit tests, visual regression tests, peer review, manual testing, cross-browser testing, cross-device testing—I note every which way I have verified the changes. Doing this reassures me that, should anything go wrong, I did all I could.
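If it helps to see the idea concretely, here’s a minimal sketch of that list written down as a script: the checklist is just data, and the script nudges you when anything hasn’t been covered. Which checks apply will, of course, vary from feature to feature.

```python
# A pre-release checklist as data: each entry is one way the change was verified.
# The check names mirror the kinds of verification mentioned above.
CHECKLIST = {
    "unit tests": True,
    "visual regression tests": True,
    "peer review": True,
    "manual testing": True,
    "cross-browser testing": False,  # e.g. not yet done
    "cross-device testing": False,
}


def review_checklist(checklist):
    """Print each verification step and report whether every one is covered."""
    all_covered = True
    for check, done in checklist.items():
        status = "done" if done else "MISSING"
        print(f"{check:<28} {status}")
        if not done:
            all_covered = False
    return all_covered


if __name__ == "__main__":
    if review_checklist(CHECKLIST):
        print("Every verification step is covered.")
    else:
        print("Gaps remain: either close them or accept the risk knowingly.")
```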
Once you’ve performed this process a few times, it will offer a second benefit: you know the benchmark for this assurance in the future. It’ll become very obvious when you attempt to kid yourself about whether you’ve done all the verification you can. You’ll know when you’ve cut a corner and the risk that entails. This might add an extra layer of work to the deployment process, but it’s the best approach I can muster for consistently managing the anxiety before a release. I’ll take the extra work over being unable to look myself in the mirror after a failed release.
Onwards.
H.V.E