Push the boundaries! Responsibly.

Recently, a colleague of mine told me they had "taken a lower overall pay scale to come to <my company> simply because I wanted to be part of something that matters." This was refreshing to hear, as my company is going through a phase of great attrition due to many factors, not the least of which is a tight-fisted lock on what can be developed and deployed. Many people have become disillusioned with executive leadership decisions that, in the name of stability, appear to block progress toward delivering real value.

This has been sad to see, as I joined my current company for the exact same reason, and with the exact same sacrifice in compensation, in order to effect change that I believe this company is fantastically positioned to make. Many of the individuals who have been the most influential in making real, substantive progress toward our end goal have left for other opportunities over the past 6-12 months as a result. Most have gone to other companies in the same or related industries that they believe have a clear, concise vision and the will and motivation to take risks in order to push forward.

Where Do I Stand?

My feeling, while it may seem like a cheat, is somewhere on the fence. I do believe you must have stability first in order to make clients confident in your product. Confidence drives adoption and adoption can drive the change that you are looking to create. However, in order to succeed in making change, you must stretch the boundaries of the known and take risks. This is the only true path to innovation. It’s impossible to innovate while staying in your comfort zone. In my experience, true innovation happens when you have one foot in the known and the other in the unknown.

Stability and Innovation Do Not Need to Be Mutually Exclusive

So, how can you push your boundaries while also maintaining a stable product that customers will want to use and recommend? Well, current technologies have never made this more possible than it is today. The availability of cloud infrastructure alone has had a tremendous impact on this.

Continuous Delivery

This is a pretty controversial subject these days, and I believe that is because too many people concentrate on the wrong parts of the practice and/or conflate it with Continuous Deployment. Continuous Delivery is, in fact, about reducing risk. The following are its central tenets. If followed properly, they let you deliver software changes and bug fixes more quickly and reliably than traditional processes.

  1. Test driven – All development should be done against a given, pre-determined outcome. You cannot move on until the tests pass. AND YOU DON’T FIX THE TEST! All too often, when a test is not passing, a developer will change the test to match the behavior of the code. This should not happen if your test expectations are well understood ahead of time. (Which they should be.)
  2. Automate the build – Every single check-in to source control should trigger a build of the entire system’s code. If the build does not succeed, further commits should be blocked until the breaking commit is fixed or reverted. And I am a believer in the old adage: “If you break the build, you don’t go home until it is resolved.”
  3. Automated testing – In addition, each check-in should trigger a full suite of tests (preferably integration tests) to guarantee that no changes have been made that affect other parts of the system.
  4. Automated acceptance testing – Once a feature or bug fix is ready for deployment, you can automate the acceptance testing to verify that all pre-determined acceptance criteria have been met.
  5. Automated deployments – There are a number of options to automate deployments, such as Puppet, Chef, Fabric, etc. With any of these it is easy to deploy, verify the deployment, and roll back if necessary. This greatly reduces the risk of deployments, particularly when coupled with options such as A/B deploys.
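Taken together, tenets 2 and 3 amount to a simple gate: every check-in triggers the full build and test suite, and a failure blocks further progress. A minimal sketch of that gate in Python, where the pytest command is just an illustrative placeholder for whatever runner your project uses:

```python
import subprocess
import sys

def gate_commit(test_command):
    """Run the full build/test suite; return True only if it passes.

    Wired into a CI server or a git pre-push hook, this enforces the
    tenet that no commit proceeds past a broken build.
    """
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        print("Build broken -- fix or revert before further commits:")
        print(result.stdout)
        return False
    return True

if __name__ == "__main__":
    # Hypothetical test command; substitute your project's actual runner.
    ok = gate_commit([sys.executable, "-m", "pytest", "-q"])
    sys.exit(0 if ok else 1)
```

The same function works as a local pre-push check or as the first stage of a hosted pipeline; the point is that the gate is a single, non-negotiable pass/fail signal.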
With the above pieces in place, the door opens to many options that reduce the friction of getting code to production while ensuring greater overall stability. Without them, I do not recommend trying to go further.

Another point I made earlier is that people often conflate Continuous Deployment with Continuous Delivery. While they are very similar and share many of the same tenets, there are significant differences that make each fairly unique and that, in my mind at least, make Continuous Deployment an option for only a few organizations, such as the UX group at Facebook. The key difference between the two, to me, is that Continuous Delivery focuses on ensuring that all code is deployable at any time. The steps outlined above can help you achieve that. Continuous Deployment, on the other hand, means that every change goes through the entire pipeline and is immediately deployed to production. (See Martin Fowler’s excellent write-up here.)

While Continuous Delivery is not about constantly pushing code changes, it is important to keep your deployments fairly frequent so that the scope of the changes being deployed stays small. This is remarkably effective in isolating issues when they do occur. If you wait too long to deploy, you end up releasing a large batch of changes at once. Then, if an issue arises, you need to decompose the release to find the problem. If you release one small change at a time and an issue arises, you can see it clearly and track it back.

Continuous Delivery processes open up the floodgates to all sorts of possibilities that facilitate innovation alongside stability.

A/B Deploys

Today you can deploy changes to a whole new instance, side by side with an existing, stable instance, and run a comprehensive suite of tests against it. If it proves to be stable, you can switch off your existing instance and direct traffic to the new one. If you keep the initial instance available, the cutover in the event of a failure is quick and can even be automated itself.
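The cutover decision itself is small enough to sketch. Here `health_check` is a hypothetical callable standing in for the comprehensive verification suite; in a real system it would run your full tests against the new instance, and the old instance stays warm so rollback is just flipping the pointer back:

```python
def choose_live_instance(new_url, stable_url, health_check):
    """A/B (blue/green) cutover: point traffic at the new instance only
    if it passes verification, otherwise stay on the known-good one.

    health_check is a callable url -> bool; a real deployment would run
    a comprehensive test suite here rather than a single probe.
    """
    if health_check(new_url):
        return new_url      # new instance verified; traffic moves over
    return stable_url       # verification failed; traffic never leaves
```

Because the decision is a pure function of the health check, the failure path is automatic: the stable instance simply remains live.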

Canary Deploys

You can also take this a step further by selectively driving a portion of live, production traffic to the new instance while the majority of your traffic still goes to the main, stable instance.
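A minimal sketch of that traffic split, assuming a simple random per-request split (a real router would likely pin a given user to one side with sticky sessions, and watch the canary's error rate to trigger an automatic rollback):

```python
import random

def route_request(canary_url, stable_url, canary_fraction=0.05, rng=None):
    """Send a small, configurable slice of live traffic to the canary
    instance while the rest continues to hit the stable one.

    canary_fraction=0.05 means roughly 5% of requests reach the canary.
    """
    rng = rng or random.Random()
    return canary_url if rng.random() < canary_fraction else stable_url
```

If the canary misbehaves, only a small fraction of users ever see it, and dialing `canary_fraction` back to zero is the rollback.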

There is an interesting video on this subject from Etsy.

What Can Be Done By Us In The Trenches?

If you happen to be one of the many stuck in the trenches, feeling powerless to effect change in your organization, have faith. In most cases you can still take the steps necessary to prepare for Continuous Delivery. Put all of the pieces in place and use them, even while still being required to follow traditional release processes. If you do so, and meticulously track your successes, you can build your argument to management. Nothing speaks better than data. If you can present irrefutable proof that your process works consistently, results in fewer bugs, and makes releases quicker and rollbacks less frequent, it will get through. This builds trust, which leads management to allow you to take more risks, knowing that you have diligently created processes and checkpoints (automated in most cases to reduce friction) that greatly reduce or even eliminate client impact. Small wins lead to great achievements.
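One low-tech way to gather that data is simply to log every release and summarize the results. A sketch, with purely illustrative field names, of the kind of record that turns "trust me" into a chart:

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseLog:
    """Track each release's outcome so the case for the new process
    rests on data rather than anecdote."""
    releases: list = field(default_factory=list)

    def record(self, duration_min, bugs_found, rolled_back):
        # One tuple per release: how long it took, defects found
        # afterward, and whether it had to be rolled back.
        self.releases.append((duration_min, bugs_found, rolled_back))

    def summary(self):
        n = len(self.releases)
        return {
            "releases": n,
            "avg_duration_min": sum(r[0] for r in self.releases) / n,
            "total_bugs": sum(r[1] for r in self.releases),
            "rollback_rate": sum(r[2] for r in self.releases) / n,
        }
```

Compare the summary for releases done the old way against releases done through your automated pipeline, and the argument largely makes itself.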

In Summary

I do what I do because I truly believe that technology can, and will, make the world a better place. Technologists wield great power, probably more so today than ever in history. However, great responsibility comes along with that power. So, when you ask to take large risks in order to make innovation happen, you must be willing and able to back those requests up with not only why you think the innovation is warranted (many people want to innovate just for innovation’s sake) but also your game plan for taking it to market in a responsible manner.

Looking at the current culture in my organization, I see that a change needs to occur to stop the hemorrhaging of talent. Not coincidentally, we are losing most of our top talent to companies that are much smaller, mostly start-ups. Smaller companies, in many ways, have much more leeway to fail and recover than large organizations. But the key for large organizations is to realize they can do the same: allow small failures, learn from them, and move forward. If we don’t allow this to happen, we will not only keep losing top talent, we will fail in our honorable mission, which would be a loss for everybody involved.


YADA (Yet Another DevOps Article)

Yet another article about DevOps? When will it ever end? Well, I can say from experience that we are very far from a consensus on what DevOps means, so it will probably not end anytime soon. Much of that has to do with the fact that it is different for every company. Each company needs to determine its own definition, but in order to reach that definition, you must have a framework to work from. When it works, and works well, everybody is happy. When it doesn’t work, it can be very ugly. So, here is my attempt at some guidance as I see it from the trenches. Having been through several permutations of DevOps or DevOps-like migrations, I can tell you what I have seen work and what hasn’t.

My Basic Tenets of DevOps

DevOps, or whatever you want to call it, is fundamentally a shift toward unifying the process of creating software across business units. To achieve this unification you must, absolutely MUST, have buy-in from all. There is no “throwing over the wall” in DevOps. From the time an idea begins to germinate, there should be representation from all parties. This is where my vision differs from many: I believe that more than just development and operations should be involved. I don’t see a world where a representative from product comes up with an idea, hands it off to development, and says “build it” working in the new world of rapidly changing end-user expectations. I believe there needs to be a constant feedback loop, because, in the end, we are all the product owners, and we must all be aware of what is being delivered and what is being experienced by those interacting with it.

Don’t get me wrong, I don’t believe the “everybody contributes equally to each part of the process” vision works, especially once you start bringing in non-technical representatives. The following are some basic tenets that I believe should be present on any team that wishes to carry out this vision:

  • Product (or whoever is responsible for curating products in your organization) should be in close contact with development, architecture, and operations from inception to…
  • Every member of the team should be empowered to say they see something going astray, or that they have concerns about some decision, without any fear of shame or repercussions.
  • The technical members of the team should be capable of doing all critical pieces of the process. This facilitates an active self-support model. Not every member needs to know every aspect in depth, just enough to diagnose a problem. They should then be empowered to act on that identified problem, be it writing a patch, testing it, and deploying it, or raising it to someone on the team with more in-depth domain knowledge.
  • The team should be empowered to deploy their product to all environments. This needs further detail, as it is a touchy subject that must be evaluated on a case-by-case basis, but the fundamental concept must exist.

DevOps For Managers

I want to spend a little time on this subject. This can be a real hang-up to the adoption of DevOps in many organizations. But I honestly believe that, with the proper process and measures taken in advance, there should be no fear in giving a team the “keys to the kingdom,” so to speak.

So, I do not advocate for most companies (many online start-ups may be exceptions to the rule) that you simply allow all developers to check in code and deploy it directly to production. You often hear developers and the like espousing “Continuous Deployment” and citing “Well, this is how Facebook does it.” [See their paper here] First, the fact that Facebook does this does not make it right for you; in fact, it makes it right for very few companies other than Facebook. Secondly, this is how Facebook deploys its front end, not the API that their front end is built upon and that so many others have become dependent on being up. If a user notices a glitch in the FB UI, they are not likely to squawk too loudly. If a large consumer of the FB auth API suddenly cannot authenticate its users, that will be noticed and may even make the news.
What I do advocate, however, is empowering teams to do their own deployments. Don’t make this a stilted process. Owning the deployment leads to a feeling of responsibility to “own your code.” Developers are much more likely to deliver quality code if they know they are going to be the ones deploying it and, ultimately, accountable for any issues it may have. Empower your teams. Treat them like responsible adults who can make sound decisions about the fitness of a product for general release. While there may be some pitfalls along the way, I guarantee they will return the favor by making you look very good.

DevOps For Team Members (The Flip Side)

So far I have been speaking generally, or perhaps more specifically to managers and team leaders. Now I want to speak to the teams directly, the implementers of the product. If you want the level of empowerment that DevOps requires, you had better care about what you put out there. There is no half-assing here. If you screw up, there is no place to point the finger but right back at you. Here is some advice from the trenches to help you achieve that end goal. Nothing too new, but way too often overlooked or brushed aside.
  • Test Driven Development is your friend! Embrace it! Learn to love it! I, of all people, know full well how hard this can be, and, to be completely honest, I still struggle with it to this day. But I have developed my own rhythm for achieving TDD. It doesn’t fit anybody’s strict definition, but it works for me. Find what works for you. Just make sure you feel confident in your tests and their ability to flush out issues.
  • Automate everything you can! Obviously, the tests mentioned above should be run on each check-in. If you broke it, fix it. Don’t ever go home with a broken build. (I know, these are all the mantras you’ve been hearing for years, but they’ve become mantras for good reason.) Go beyond this, though: automate your deployments as well. Use tools like Fabric, Puppet, or Chef where available. Even better, look into containerizing your apps with a tool like Docker. If you get to the point where you can deploy exactly the same code (or, even better, the same container!) over and over, you will become more confident in your ability to deploy to any environment at any time. Make sure you also automate the rollback of these deployments; if something goes wrong, you will be grateful to have a quick and easy way to get back to the previous known-good state. Also, until you are completely comfortable with your automation, practice in a development environment. Through the normal develop->test->develop cycle, this will be a work in progress, but you will eventually reach a point where you feel comfortable enough to deploy continuously, or, more likely, much more frequently than you do now.
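The deploy-verify-rollback loop described above fits in a few lines once the individual steps are automated. Here `run_cmd` and `smoke_test` are hypothetical stand-ins for your real orchestration calls (e.g. a `docker run` of a tagged image and a health-endpoint check):

```python
def deploy(image_tag, previous_tag, run_cmd, smoke_test):
    """Deploy a container image, verify it, and roll back automatically
    on failure. Returns the tag that ends up live.

    run_cmd(tag)    -- stand-in for starting the tagged container
    smoke_test(tag) -- stand-in for verifying the running deployment
    """
    run_cmd(image_tag)            # e.g. docker run myapp:<tag>
    if smoke_test(image_tag):
        return image_tag          # new version verified and live
    run_cmd(previous_tag)         # automated rollback to known good
    return previous_tag
```

Because the rollback is part of the same script, a failed deploy costs minutes rather than a late-night scramble, which is exactly what makes frequent deployment feel safe.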
If you take the above steps, you will, with time, get to where the leaders in this arena are. And don’t wait for your organization to tell you to. Start now! There is no reason you can’t put everything in place, get comfortable with it, and then present it to management. You can be the hero. Or, at the very least, you will have prepared yourself to work at an organization that “gets it.”

A Benefit To All

The end result is that we all benefit. Features, fixes, and changes get to production, and in front of clients, sooner. This creates a more immediate feedback loop that drives more changes and ultimately results in a better product.