Sometimes The Best Data Is Not Digital



This is going to be a short post (for me at least), but I have been inspired today by an inspiring woman who also happens to be my VP. Today I was fortunate enough to be picked to join a lunch and learn with a group of my peers here at Arrow Electronics. You may have been involved in similar events where you work. They are essentially a way for a group of peers, who often do not get a chance to take a break in their days to get to know each other, to do so and have an open roundtable discussion with an executive. I always find these interesting and love getting to know my coworkers. However, all too often, the open roundtable feels more like a political interview full of double-talk or softball questions. To her credit, [my boss] didn’t bat an eye at some fairly tough questions. She is the perfect reflection of authentic leadership.

That said, it is not the event itself that inspired me so much as the message she communicated throughout all of her answers: “Nothing is more important than getting to know our customers.” I know, this sounds almost cliche these days; it is even written into the Amazon credo. And, if you asked me, I would say that I am obsessively customer focused. Ever since I have been in technology (one of my first jobs was a precursor to today’s User eXperience design) I have tried to work from the customer back to the solution. Over the years I have found how much you can learn about customers through their behavior. Or, more accurately, through the data that their behavior produces. But that is not at all what she meant. What she meant was good old-fashioned, face-to-face, getting to really know your customer. Sit with them and learn their pains.

I’ve drifted from that over the years, living on or near the bleeding edge of technology. And I don’t believe I am alone in that. My field has grown by tremendous leaps and bounds over the 20-odd years I have been in it. However, I think many of us have grown too comfortable in our tech cocoons, thinking that we know our customers because we are gathering so much data about them. (To be clear, most of us are not collecting anything but anonymous clickstreams.) But there is no substitute for how we did it when I began. Sure, we can collect far more data these days and do more with the data we collect. But that is sterile compared to the human factor that the data does not show.

So I challenge us all to go spend even 30 minutes sitting with your customers. See how they interact with your product. Truly listen to everything they have to tell you about it. Good or bad. No matter how great your data may be (I’ve seen an awful lot of data, so be honest. 😉), you can learn more in 30 minutes looking through your customer’s eyes.


Demonizing DaVita

It’s all across the news here in Denver. John Oliver has taken aim at healthcare giant DaVita and their CEO Kent Thiry. And he makes an apparently strong case. However, it’s not as well informed as it may seem on the surface. I would even go out on a limb and say that the entire piece was built around an opinion and fueled by confirmation bias.

Now, don’t get me wrong. I do believe DaVita has had plenty of issues. Issues that every other large company has: bad seeds who make bad decisions, often driven by their desire to succeed or make more money. Should healthcare providers be held to a higher standard? Yes, they likely should be. Does that mean the entire company, or even its head executive, should be dragged through the mud as the result of others’ wrongdoings? No more than any other company.

Ultimately, what is truly brought to light here is a system that is not, as in Canada or Oliver’s home country, a public entity. The US healthcare system is a jigsaw puzzle of mostly private providers that are no different than any other company in their need to generate revenue in order to pay their employees and provide services to their clients. The only difference for DaVita is that their clients are people with serious health conditions who require their services to stay alive. While this is no doubt a significant differentiator, it doesn’t change the fact that DaVita is a business. Which is why Thiry speaks about it in similar terms as a company like Taco Bell. Not because, as Oliver insinuates, dialysis is a comparable product to a taco.

So, you may ask why I would care enough to write a post on this matter, especially taking a potentially controversial or unpopular stand. Well, as a two-time kidney transplant recipient who has spent many years of my life on dialysis, much of that at DaVita clinics, I have a vested interest. I have had the pleasure of knowing many DaVita employees at all levels throughout my life and can tell you unequivocally, they don’t see DaVita as “just another business”. They take pride in the mission of DaVita. They believe that, regardless of what they may be doing, from the Patient Care Technicians on the front line, to the software engineers who build support systems, on to the highest level of the executive offices, they are contributing to making people’s lives better. I’m sure that is not the case for everyone, but it is for everyone I have known. And it shows on so many levels. DaVita teammates participate in dialysis and donor causes nationwide. And while I don’t have numbers, my gut tells me there are more participants from DaVita at events such as the Donor Dash than from any other employer. Trying to insinuate that DaVita wants to keep people on dialysis is just inflammatory garbage, on par with the political statements flying around that Mr. Oliver takes aim at so regularly.

End-Stage Renal Disease (ESRD) patients are educated about transplantation by the doctors who determine they need dialysis. If they choose to pursue it, they are evaluated for suitability and, if deemed suitable, placed on the list to wait, many years on average, for a transplant. If they are not, they are told why and, in most cases, given a plan to help them get on a path to transplantation. Within the DaVita clinics where I have been treated, the staff regularly revisit these plans and continue to educate patients on their options. It is absurd to me to think of any dialysis employee, at DaVita or otherwise, knowingly keeping the option of transplantation from a patient.

There were way too many accusations either said or implied in his diatribe to address here. So John, if you would like to learn more about dialysis, the admittedly broken system, or Taco Bell, feel free to ring me up.

All of this said, I do believe that comedians, and the media in general, have the right to make commentaries as Mr. Oliver did, and they actually should do so. We the consumers just need to be smart enough to weigh the entire story, regardless of the source.

I’m angry because I care! Why?

So, this is going to be a bit of a divergence from my normal topics, going into a bit of my personal life. I have a son. To be completely corny, he is the light of my life. (No offense, honey.) There are many things that make him special to me, but the one that really stands out is that he is so loving and caring. He really seems to notice the wrongs in the world, and they seem to truly resonate with him. (I’m very possibly over-inflating my image of him. But, if you have kids, you get it.) I have seen him on the verge of tears when a nature show briefly covers the topic of poachers, asking his mom and me what poachers are, why they do what they do, and why people don’t stop them. Anyway, enough of my parental glowing. On to what I really want to discuss.

Over the past few months, several events regarding my son at school have been disappointing to me. He had less-than-stellar reports from his teacher. He has since improved significantly because we cracked down a bit on him. But then I had a discussion with him in which he told me he didn’t tell me about a homework assignment because, in his words, “I wanted to play instead.” Well, that set me off. And the following is my speech, slightly edited, to him…

You need to get serious about your schoolwork and how you act in school. I realize you don’t see it now, but your actions today will impact your entire future. You talk to me all the time about how sad it makes you to see things like animals being hurt for bad reasons, or to see someone who doesn’t have a home or food. You can change those things! Doing well at school is how you prepare yourself to be able to make that change!

His response? A blank stare. Did I mention my son is only 5? Yeah. Chill out, right? So, I spent the next 10 minutes explaining to him what I meant and why it’s so important to me. Then I promised him a donut and all was good with the world again.

So, what did I say to him? The following is, more-or-less, a grown-up adaptation of my speech.

I truly, deeply believe that getting a good education is the best solution we have for solving all the world’s problems that make you sad. You have the benefit of living in a good, safe neighborhood and going to a school that your mom and I hand-picked for its reputation as one of the best in our area. Most children in the world do not have anywhere near this opportunity. Don’t waste what you have.

Why is it so important to me?

I realize a lot of people have this view. And a lot do not share it. Neither is wrong. Certainly not at the moment: nobody has been able to execute on this view well enough, or at a large enough scale, to make a huge difference. But even the smallest impact ultimately makes a difference. And I, at least, believe that when we get to the point where every single person in this world, old or young, rich or poor, has access to a top-notch education, the world will unleash such a tremendous untapped resource that it will be like releasing the energy contained in an atom. And from that will begin to flow ideas never before conceived of. Ideas that can truly change the world. End world hunger. End war. End the bigotry and narrow-mindedness that plague our world.

How do we get there?

I don’t have the answer to this question. Nobody does. Yet. But, many people have dedicated their lives to figuring out the answer to this question. I’m one of those people. I am in technology so that is the angle of attack I take. But everyone has their own approach and the ultimate answer will likely involve a mix of many or all of them.

But it all starts by getting people to care. And, if I have one goal in my life above all else, it is to get every person I meet to at least understand why I care. Whether they choose to care and believe what I do is completely their choice. If I can, in my lifetime, get even one person to believe and start down the path of contributing, I will be happy.

A couple disclaimers

There are many questions, comments, and complaints I hear whenever I start talking about this. I understand them all and, honestly, do not have answers for them all. But I do have answers to a few.

  1. No, I do not believe that everyone should work to improve education. My point is that educating those who do not currently have access to a quality education is something that can make a difference in all areas. To that point, I believe there are many areas that need improvement and need people to care about them. Find something that resonates with you. In fact, I have many causes that I care about and contribute to when I can. If we all just care and make an effort to improve the world, it will make a difference.
  2. Do I really believe education can solve all of the world’s problems? I’m an optimist. So yes, I do. But to qualify, solving the world’s problems in my view may not be the same as in yours. For instance, I do not believe, unfortunately, that an education can stop people from seeing the color of someone’s skin or the accent in their voice. However, I do believe that it can be an equalizer. As a result, we will ultimately begin seeing our classrooms and our boardrooms reflect the true make-up of our country and our world. And, I believe, the more this happens, the more everyone is exposed to other races and cultures on a day-to-day basis, the more people will learn that, while we have differences, those differences are what make us strong. And ultimately, no matter your race, religion, or culture, we all want the same thing. The basest of all human needs is safety. So, if we all realize that, why wouldn’t we attempt to solve our differences peacefully?

I realize I may come off as some kook of an idealist. Anyone who knows me knows I’m not crazy. Not really. Just crazily optimistic.

Ok, now I need to go get a donut with my son…


Languages to learn in 2016


This is a blog post in response to a Quora question. I started a response and it grew well beyond what I thought it would. I guess that means I have an opinion here? 😉

Question: “Which backend programming language should I learn in 2016?”


The answer depends on your objectives. If you are looking for something to base your career on, you should go for one of the more popular, multi-purpose, OO languages:

  • Java – This would be my top choice for this realm. It has the largest community and the most job opportunities. And, if you want to work in cutting-edge OSS, it is the most common language (or at least the JVM) of choice.
  • C# – This, like Java, is a great choice for employability and stability. If you are most comfortable in Windows (as opposed to Linux or OS X), you may want to start here. The jobs you will find on the market will lean more toward the enterprise than the commercial world. But that is still a very interesting world, and one that will likely be around and in great demand for the foreseeable future. And the recent foray into open source and cross-platform viability could open it to a new world of applications.

If you are looking to add to a toolbelt that already has a foundation in something like the above and will still add value to your career, I would say to add a growing or established language that adds a particular value.

  • go – This is, in my humble opinion, one of the key languages of the next 10 years. Partially because Google is behind it, and what Google does, so does any company that wants to attempt to recreate its success. But it is genuinely a great language. In my opinion, it is like C re-imagined. It is crazy-fast and cross-platform compilable. It has its warts like any language and is still developing a bit. But what is there is very good and well thought-out. And you are able to go to your next meetup and say “Yeah, I know go.” 😉
  • node.js – Notice I didn’t say JavaScript? That is because node, while the syntax is definitely JS, is a whole new paradigm, even for experienced JS developers. But, due to its async nature, it fits the new world of small, light, service-based architectures very well. I personally see it as an excellent choice for microservices (as is go), and it is being used by many large companies for just this reason.

Now, if you just want something cool that will teach you something new you have a lot of options. I am a huge proponent of learning a new language just to see a problem from a whole different angle. While there are many options in this world, I will mention a few I have played with.

  • Elixir – A Ruby-ish language built on top of the Erlang VM (BEAM). Elixir hides much of what makes Erlang difficult for people to tackle while still leveraging the immense goodness of the BEAM. I’m an Erlang guy and would love to propose that everyone learn Erlang. But I’m also a pragmatist. It will be much easier for someone to tackle Elixir, have fun doing so, and still be able to build ridiculously parallel systems easily.
  • Haskell – Basically, you can choose any functional language you like here. I feel that Haskell is probably the best combination of pure functional ideologies (no side-effecting methods, immutability, etc.) and a pragmatic, practical implementation. Plus, I feel that every professional engineer should experience FP at some point in their career. Being a 15-year development veteran, I always thought I understood parallel and concurrent programming (simplistically, think multi-threading). But then I began to learn Erlang and a whole new world was presented to me of how systems “should” be built. And, if the descriptions of FP in terms of maths discourage you, I would suggest this post about just that.
  • Rust – Coming out of Mozilla, Rust is similar in many ways to go. It has an excellent memory management model and implementation. I could see Rust being to go what C# is to Java. Either way, it could be worth a look.
  • Julia – I am kind of going out on a limb with this one, especially since I have done little more than ‘Hello World’ in Julia. But there are many things to like about this language and its fresh take on many different approaches that have become de rigueur in software engineering these days. (Plus it has a REPL!) If nothing else, it could be worth exploring just to force your head out of the standard box.

I hope this helps. I realize I have left out many languages that may be just as worthy. But I had to limit it somehow and I did so by only listing those I have, at least, a modicum of experience with.


The Cloud Game Is For Grown-Ups

Listening to a recent podcast on Software Engineering Radio featuring Adrian Cockcroft speaking about the modern cloud-based platform, I was left bursting with thoughts. He covered a lot of topics, which I hope to cover in the near future. However, one in particular hit me like a ton of bricks. He mentioned that he has seen many retailers and manufacturers of goods choosing to build their own private clouds rather than trust Amazon, someone they see as a rival, to provide their cloud infrastructure. To this, Adrian had many great points. I’d like to outline some of them as best I can and add some of my own thoughts. (All of these are in my own words, inspired by Adrian’s.) I’d highly recommend you check out the podcast yourself; I’ll provide a link at the end.

Don’t Be A Child

By this I mean that seeing Amazon’s online retail business as a competitor, and withholding your cloud-hosting business as a result, is analogous to a child refusing a friend’s help with their math homework because that friend beat them at ball earlier. The only person you are hurting is yourself.

Amazon has already figured out all of the ins and outs of building, maintaining, and scaling a cloud infrastructure. For you to do the same without a really, really good reason (I acknowledge that there are cases that warrant private clouds, but I find them to be few and far between) is a fool’s errand. Even with the most skilled, most experienced (read: highly paid) staff, you will lose many months, if not years, getting your private cloud to the state of reliability, scalability, and usability of Amazon’s.

Leverage Your Competitor’s Strength

By choosing Amazon to host your cloud infrastructure, you are starting two steps ahead. You don’t have to worry about building and maintaining this infrastructure. To do it properly, you will need to build or buy some additional automation and monitoring, but the vast bulk of the complications of building and maintaining a cloud infrastructure will be someone else’s problem. This will free you to focus on your core competencies. If yours happens to be software, it allows your teams to focus on building new features, rapidly. And this, ultimately, is the greatest competitive advantage of the cloud.

By embracing Amazon’s strength as the best (I know, arguably) cloud platform provider, you are turning their strength against them. Because you are free to concentrate on making better products, whatever they may be, you are allowing your competitor to help you. Bringing back the childhood analogy, it would be like letting the friend who is the superior basketball player teach you to be better at math. Then you take the time saved on your math homework and use it to get better at basketball. (Ok, a stretch of an analogy, but I hope you follow me.)

In short, get your head straight

Seriously, if you want to compete in today’s ultra-fast-moving world, you need to make smart decisions. You need to take every single advantage you can get. Allowing somebody else to take off such a tremendous burden as providing your cloud infrastructure is a gift. You can’t base such decisions on emotions. As Adrian Cockcroft pointed out, Netflix still uses Amazon to host its entire streaming infrastructure despite the fact that Amazon is directly competing with it for the same business through its Prime Video offerings. And Netflix is beating Amazon handily in the process.

I am, however, not naive to the fact that there are still many good reasons to build and maintain a private cloud. Ultimately, there are some applications that likely wouldn’t benefit from a cloud infrastructure at all. But you should think long and hard about your reasoning before deciding to build your own cloud. Look inward to make sure personal feelings are not influencing your decision-making. As I mentioned, there are valid reasons for building your own cloud (storing critically sensitive data, localization laws, etc.), but avoiding feeding a competitor is not a good one.

Pertinent Links

Adrian Cockcroft on the Modern Cloud-based Platform

Adrian Cockcroft’s Blog

Software Engineering Radio

Is Reactive Programming more than just hype?

In short? Yes. Class over. Thanks for coming…

No? Ok, I suppose I should explain why I feel this way. In this brief article, I will attempt to explain reactive programming from a high-level, ivory-tower standpoint. I’m not planning on getting into implementation details. There are many, many ways to go about it, and I would not presume to understand what all of my readers’ requirements are. But I hope that by the end of this you will at least be curious enough to follow my resource links to learn more, and maybe even do your own research to see how you might apply these principles in your own environment.

What is reactive programming?

Reactive programming (RP) is a set of programming patterns and techniques that were designed to handle the new landscape of computing. Some examples of what I mean by “new landscape”:

Changing requirements (Chart courtesy of Martin Odersky)

Requirement           ~10 years ago    Today (2014/2015)
Server nodes          10’s             1000’s
Response times        seconds          milliseconds
Acceptable downtime   hours            0
Data volume           GBs              TBs -> PBs

These new requirements are not simply incremental changes. They represent a complete paradigm shift. To me, the key points here are “Response times” and “Acceptable downtime”. These drive everything else. The increase in data is a direct response to customers wanting better experiences. However, it is also due to much more fine-grained monitoring and statistics. Regardless, the above is a reality. And if you aren’t dealing with it now, it is guaranteed you will be.

The new architectures that began to evolve from these new requirements needed to have the following characteristics:

  • Event-driven – Produce, propagate, consume, and react to messages.
  • Scalable – The ability to react to changing amounts of load, preferably without downtime.
  • Resilient – Can react to failures without impacting the client experience.
  • Responsive – Will react to users promptly. (a.k.a. no progress bars!)

All of these characteristics must be addressed to present a truly reactive architecture. In reality, I believe most implementations will represent a subset of the above. However, these subsets, while functionally achieving the end-goal of RP, would not be RP Certified (if such a thing really existed). Whether this matters to you is along the same lines as the debates amongst REST practitioners. I’m not going to step into that ring.

Event-Driven Architecture

One common approach to meeting these requirements is an event-driven architecture (EDA). If you are not familiar with the concept, I suggest checking out one of the resource links at the bottom. In short, though, EDA is an architecture in which the system is designed to react to events sent from other parts of the system, asynchronously and without blocking. An event, strictly speaking, is a notification that some state has changed in some part of the system. For example, if you are creating a system for a library and someone decides to check out a book, an event will be ‘raised’ that any part of the system may choose to consume. In this scenario, one system that would absolutely be interested is the one responsible for keeping track of the library’s inventory. When it receives the checked-out event, it will change the inventory to reflect that the book is no longer available to be borrowed. There would likely be many more systems interested in this event as well.
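To make the library example concrete, here is a minimal, in-process sketch of the pattern in Python. The `EventBus` class, the event name, and the payload shape are all invented for illustration; a real EDA would propagate events asynchronously through a message broker such as Kafka or RabbitMQ.

```python
from collections import defaultdict

class EventBus:
    """Toy publish/subscribe bus: producers raise events, consumers react."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # The producer knows nothing about who consumes the event.
        for handler in self._handlers[event_type]:
            handler(payload)

# The inventory service is one of possibly many interested consumers.
inventory = {"Dune": 2}

def update_inventory(event):
    inventory[event["title"]] -= 1

bus = EventBus()
bus.subscribe("book-checked-out", update_inventory)

# Someone checks out a book; the inventory reacts without being called directly.
bus.publish("book-checked-out", {"title": "Dune", "member": "m-42"})
print(inventory["Dune"])  # -> 1
```

Because the checkout code never references the inventory service directly, more consumers (notifications, analytics, billing) can subscribe later without touching the producer.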

By providing an event-driven architecture in this manner, you allow for a system comprised of highly-decoupled, composable parts. In practice, these parts are typically services, preferably lightweight with a single responsibility. (Often referred to as microservices these days.)

So, how does this all fit into creating a “reactive” system that is scalable, resilient, and responsive? Well, let me address each of these individually.

Scalable

Event-driven architectures (EDAs) provide scalability via their highly-decoupled nature. By decomposing a system into well-defined pieces with clear process boundaries, you can scale the system’s components independently of each other. For example, given the library scenario, you may find a bottleneck at the point where books are checked out just prior to final paper due dates at the local university. (Not that I would ever have been one of those students! 😉 ) In anticipation of this increase, you could allocate additional nodes that simply emit the checked-out messages. (I realize that you may, in turn, need to scale out the consumers of the checked-out message.)
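As a sketch of scaling the consumer side independently, the snippet below (Python, with invented names) drains checked-out events from a shared queue; handling a seasonal spike amounts to starting more worker threads, without touching the producer at all.

```python
import queue
import threading

events = queue.Queue()
processed = []

def consumer(worker_id):
    # Each worker independently drains checked-out events.
    while True:
        event = events.get()
        if event is None:        # shutdown signal
            break
        processed.append((worker_id, event))

# Two consumers today; raise this number for the end-of-term crunch.
workers = [threading.Thread(target=consumer, args=(i,)) for i in range(2)]
for w in workers:
    w.start()

for book in ["Dune", "SICP", "TAOCP"]:
    events.put(f"checked-out:{book}")
for _ in workers:                # one shutdown signal per worker
    events.put(None)
for w in workers:
    w.join()

print(sorted(e for _, e in processed))
```

The producer only ever talks to the queue, so the number of consumers is a deployment decision rather than a code change.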

Because of the level of decoupling that EDA provides, the location of producers or consumers of events is completely irrelevant. This allows for additional scale-out opportunities such as cross-dc (for self-hosted solutions) or cross-az/region (for cloud-hosted solutions) scaling. This leads to many different options that reinforce, not only scalability, but resiliency and responsiveness as well. We will discuss these independently.

Resilient

The scalability options described above provide a significant start on the path to resiliency. By being able to scale out in this manner, you are making your system more resilient to individual node failures, network failures, and even the rare az/region outage.

However, the resiliency of an RP architecture, when designed properly, goes well beyond this. One foundational aspect is building in the ability to supervise processes, detect their failure, and restart them. In order to do this properly, you need to understand the actor model and supervision trees.


[Supervision tree diagram courtesy of Erlang Programming (Cesarini & Thompson, 2009)]

Thinking in actors and supervisors will be a bit of a mental shift for most engineers and architects familiar with traditional OO models. However, spending the time to comprehend them will pay off in spades; they are incredibly valuable in building resilient and scalable systems. I won’t go into much detail on this subject. At a high level, though, supervision trees allow you to detect failures in workers and restart them if they fail or block unpredictably. In addition, actors facilitate concurrency by encapsulating small units of logic without any shared state (preferably).
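To illustrate the idea without the full actor machinery of Erlang or Akka, here is a toy one-for-one supervisor in Python. The `Supervisor` class and its restart policy are invented for illustration; real supervision trees also manage hierarchies of supervisors and configurable restart strategies.

```python
class Supervisor:
    """Toy one-for-one supervisor: detects a worker failure and restarts it."""

    def __init__(self, worker_factory, max_restarts=3):
        self.worker_factory = worker_factory
        self.max_restarts = max_restarts
        self.restarts = 0

    def run(self, messages):
        worker = self.worker_factory()
        results = []
        for msg in messages:
            try:
                results.append(worker(msg))
            except Exception:
                self.restarts += 1
                if self.restarts > self.max_restarts:
                    raise                          # escalate to a parent supervisor
                worker = self.worker_factory()     # restart with fresh state
        return results

def make_worker():
    # Fresh worker state on every restart.
    def worker(msg):
        if msg is None:
            raise ValueError("poison message")
        return msg * 2
    return worker

sup = Supervisor(make_worker)
result = sup.run([1, None, 3])
print(result)  # -> [2, 6]; the crash was absorbed by a single restart
```

The caller never sees the crash: the supervisor detects it, replaces the worker, and processing continues with the remaining messages.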

This model also has the advantage of allowing scaling and resiliency across service boundaries, meaning that if one service goes down, the entire system will not. As an example, if Twitter loses the subsystem that suggests similar feeds, Twitter will stay up while that piece is repaired. From a client perspective, this can make the difference between a pleasurable experience and one that makes you leave for good.


Responsive

Reactive systems are kept responsive by their inherently asynchronous nature. Rather than firing a request and waiting for a response from a server, you make a request (or many) and continue on your merry way. As responses are received, you process them in-line and, if a UI is part of the mix, you update it accordingly. An example of this is when you view a Twitter feed and posts do not all show up at once. In fact, they often do not appear in the order they were created, because Twitter shards its tweets. This means that the tweets you are reading may be retrieved from databases distributed across the globe.
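A sketch of that fire-and-continue style, using Python's asyncio. The shard names and delays are made-up stand-ins for real network calls to distributed databases.

```python
import asyncio

async def fetch_posts(shard, delay):
    # Stand-in for a network call to a geographically distant shard.
    await asyncio.sleep(delay)
    return f"posts-from-{shard}"

async def render_feed():
    # Fire all requests at once rather than waiting on each in turn.
    tasks = [asyncio.create_task(fetch_posts(shard, delay))
             for shard, delay in [("us-east", 0.03), ("eu-west", 0.01), ("ap-south", 0.02)]]
    feed = []
    # Process each response as it arrives -- often NOT in request order.
    for finished in asyncio.as_completed(tasks):
        feed.append(await finished)
    return feed

feed = asyncio.run(render_feed())
print(feed)  # the fastest shard's posts render first
```

The total wait is roughly the slowest single call rather than the sum of all three, and the UI can paint each batch of posts the moment it lands.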

By adding EDA into the mix, you gain the ability to have state-change events update a system dynamically. Using the Twitter example again, as a tweet is created by someone you follow, it will be propagated to the feed you are viewing without you needing to request the page again.

In addition, as mentioned previously, the ability to scale out across regions lets you locate services in a geographically dispersed manner. By doing so, you can implement latency-based routing to route requests to the nearest available node.
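At its core, latency-based routing can be as simple as the sketch below. The probe numbers and region names are hypothetical; real implementations (such as DNS-level latency routing) use continuously updated measurements and health checks.

```python
def pick_node(latencies_ms):
    """Route to whichever region currently answers health probes fastest."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical probe results, in milliseconds, from one client's vantage point.
probes = {"us-east-1": 120, "eu-west-1": 35, "ap-southeast-1": 210}
print(pick_node(probes))  # -> eu-west-1
```

Note that "nearest" here means lowest measured latency, not geographic distance; a congested nearby region can legitimately lose to a farther one.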

So… Why All the hubbub?

Well, if you haven’t determined it thus far, RP is a natural reaction to the world of cloud computing. Cloud computing requires a brand new way of thinking about systems architecture. While it obviously provides a wealth of opportunities that were never available prior to its existence, it also introduces an equal number of new challenges. Reactive programming architectures address many of those challenges in a new, elegant manner.

RP is easiest to build into an architecture from the beginning. However, I can say from experience that it is not all that difficult to retroactively adapt an existing system to, at least functionally, fit into the reactive paradigm. Either way, RP is becoming all the rage because these problems are real. Solving them is a necessity for meeting customer expectations in the modern world of computing. If you are not currently thinking reactively, or working toward solving these problems in some other way, you should be. RP is a good, quickly maturing model to follow to that end.


Event-Driven Architecture

Using Events in Highly Distributed Architectures

EAI Patterns (Greg Hohpe)


Nice (Simple) Breakdown of How to Scale

Reactive Programming

The Seminal Work

Reactive Manifesto

What Does Reactive Mean? (Erik Meijer)


Excellent Coursera Course (Scala based)

Reactive Programming in the Netflix API with RxJava

Push the boundaries! Responsibly.

Recently, a colleague of mine told me, “I took a lower overall pay scale to come to <my company> simply because I wanted to be part of something that matters.” This was very refreshing to hear, as my company is going through a phase of great attrition due to many factors, not the least of which is a tight-fisted lock on what can be developed and deployed. Many people have become disillusioned with executive leadership decisions because, in the name of ensuring stability, those decisions appear to be blocking progress toward delivering real value.

This has been sad to see, as I joined my current company for the exact same reason, and with the exact same sacrifice in compensation, in order to make the change that I believe this company is fantastically positioned to effect. Many of the individuals who have been most influential in making real, substantive progress toward our end-goal have left for other opportunities over the past 6-12 months as a result. Most went to other companies in the same or related industries that they believe have a clear, concise vision and the will and motivation to take risks in order to push forward.

Where Do I Stand?

My feeling, while it may seem like a cheat, is somewhere on the fence. I do believe you must have stability first in order to make clients confident in your product. Confidence drives adoption and adoption can drive the change that you are looking to create. However, in order to succeed in making change, you must stretch the boundaries of the known and take risks. This is the only true path to innovation. It’s impossible to innovate while staying in your comfort zone. In my experience, true innovation happens when you have one foot in the known and the other in the unknown.

Stability and Innovation Do Not Need to Be Mutually Exclusive

So, how can you push your boundaries while also maintaining a stable product that customers will want to use and recommend? Current technologies have never made this more possible than it is today. The availability of cloud infrastructures alone has had a tremendous impact on this.

Continuous Delivery

This is a pretty controversial subject these days, and I believe that is because too many people are concentrating on the wrong parts of the practice and/or conflating it with Continuous Deployment. Continuous Delivery is, in fact, about reducing risk. The following are its central tenets. If followed properly, they let you deliver software changes and bug fixes more quickly and reliably than you would with traditional processes.

  1. Test driven – All development should be done against a given, pre-determined outcome. You cannot move on until the tests pass. AND YOU DON’T FIX THE TEST! All too often, if a test is not passing, a developer will change the test to match the behavior of the code. This should not happen if your test expectations are well understood ahead of time. (Which they should be.)
  2. Automate the build – Every single check-in to source control should trigger a build of the entire system’s code. If the build does not succeed, further commits should be blocked until the breaking commit is fixed or reverted. And I am a believer in the old adage “If you break the build, you don’t go home until it is resolved.”
  3. Automated testing – In addition, each check-in should trigger a full suite of tests (preferably integration tests) to guarantee that no changes have been made that affect other parts of the system.
  4. Automated acceptance testing – Once a feature or bug fix is ready for deployment, you can automate the acceptance testing to verify that all pre-determined acceptance criteria have been met.
  5. Automated deployments – There are a number of options for automating deployments, such as Puppet, Chef, and Fabric. With any of these it is easy to deploy, verify the deployment, and roll back if necessary. This greatly reduces the risk of deployments, particularly when coupled with options such as A/B deploys.
With the above pieces in place, the door opens to many options that will reduce the friction of getting code to production while ensuring greater overall stability. Without the above pieces, I do not recommend trying to go further.
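The fifth tenet can be sketched as a deploy-verify-rollback loop. This is a hypothetical illustration, not a real tool's API; the deploy, verify, and rollback callables stand in for whatever your Puppet, Chef, or Fabric tooling actually runs:

```python
# Hypothetical sketch of an automated deploy-with-rollback step.
# The callables are stand-ins for your own deployment commands.

def release(version, deploy, verify, rollback):
    """Deploy a version, verify it, and roll back automatically on failure."""
    deploy(version)
    if verify(version):
        return f"{version} live"
    rollback()
    return f"{version} rolled back"


# Simulated environment: version "1.1" fails its post-deploy check.
state = {"live": "1.0"}
result = release(
    "1.1",
    deploy=lambda v: state.update(live=v),
    verify=lambda v: v != "1.1",          # pretend the health check fails
    rollback=lambda: state.update(live="1.0"),
)
print(result, state["live"])  # 1.1 rolled back 1.0
```

Because the rollback path is exercised on every failed release, it stays trustworthy instead of being an untested emergency procedure.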
Another point I made earlier is that people often conflate Continuous Deployment with Continuous Delivery. While very similar, and sharing many of the same tenets, they have some significant differences that make them fairly distinct and, in my mind at least, make Continuous Deployment an option only for a few organizations, such as the UX group at Facebook. The key difference between the two, in my mind, is that Continuous Delivery focuses on the fact that all code must be deployable at any time. The steps outlined above can help you achieve that. Continuous Deployment, on the other hand, means that every change goes through the entire pipeline and is immediately deployed to production. (See Martin Fowler’s excellent write-up here.)
While Continuous Delivery is not about constantly pushing code changes, it is important to keep your deployments fairly frequent to keep the scope of the changes being deployed small. This is remarkably effective in isolating issues when they do occur. If you wait too long to deploy, you end up with, essentially, one big release made up of many small changes. Then, if an issue arises, you need to decompose the release to find the problem. If you release one small change at a time, you can clearly see when an issue arises and track it back to its source.

Continuous Delivery processes open-up the floodgates to all sorts of possibilities that will facilitate innovation alongside stability.

A/B Deploys

Today you can deploy changes to a whole new instance, side by side with an existing, stable instance, and run a comprehensive suite of tests against the new instance. If it proves to be stable, you can then switch off your existing instance and direct traffic to the new one. If you keep the initial instance available, the cutover in the event of a failure is quick and can even be automated itself.
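The mechanics can be sketched as a router holding a pointer to the active instance, so both promotion and cutover are a single swap. This is an illustrative model, not a real load balancer's API:

```python
# Illustrative A/B (blue/green) switch: the router points at the active
# instance; promoting the candidate is one atomic swap, and cutting back
# on failure is just swapping the pointer again.

class Router:
    def __init__(self, active):
        self.active = active
        self.previous = None

    def promote(self, candidate):
        self.previous, self.active = self.active, candidate

    def cut_back(self):
        if self.previous is not None:
            self.active, self.previous = self.previous, self.active


router = Router(active="stable-v1")
router.promote("candidate-v2")   # all traffic now goes to the new instance
print(router.active)             # candidate-v2
router.cut_back()                # failure detected: instant revert
print(router.active)             # stable-v1
```

Because the old instance is kept running, reverting costs a pointer swap rather than a redeploy.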

Canary Deploys

You can also take this a step further. If you choose to, you can selectively drive live, production traffic to the new instance while the majority of your traffic is still going to the main stable instance.
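A canary can be modeled as weighted routing of individual requests. This is an illustrative sketch (the instance names and the 5% weight are made up for the example), not a production router:

```python
import random

# Sketch of canary routing: send a small, configurable fraction of live
# traffic to the new instance while the rest hits the stable one.

def pick_instance(canary_weight, rng=random.random):
    """Route one request; canary_weight is the fraction sent to the canary."""
    return "canary" if rng() < canary_weight else "stable"


random.seed(42)
sample = [pick_instance(0.05) for _ in range(1000)]
print(sample.count("canary"))  # roughly 50 of 1000 requests hit the canary
```

If error rates on the canary stay healthy, you ratchet the weight up; if not, you drop it to zero and only a small slice of users ever saw the problem.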

An interesting video on the above subject from Etsy.

What Can Be Done By Us In The Trenches?

If you happen to be one of the many stuck in the trenches feeling powerless to effect change in your organization, have faith. In most cases you can still take the steps necessary to prepare for Continuous Delivery. Put in place all of the pieces and use them while still being required to follow traditional release processes. If you do so, and meticulously track your successes, you can build your argument to management. Nothing speaks better than data. If you can present irrefutable proof that your process works consistently, results in fewer bugs, and makes releases quicker and rollbacks less frequent, it will get through. And this builds trust, which leads management to allow you to take more risks, knowing that you have diligently created processes and checkpoints (automated in most cases to reduce friction) that greatly reduce or even eliminate client impact. Small wins lead to great achievements.

In Summary

I do what I do because I truly believe that technology can, and will, make the world a better place. Technologists wield great power, probably more today than at any point in history. However, great responsibility comes along with that power. So, when somebody asks to take large risks in order to make innovation happen, they must be willing and able to back those requests up with not only why they think the innovation is warranted (many people want to innovate just for innovation’s sake) but also their game plan for taking it to market in a responsible manner.

Looking at the current culture in my organization, I see that a change needs to occur to stop the hemorrhaging of talent. Not coincidentally, we are losing most of our top talent to companies that are much smaller, mostly start-ups. Smaller companies, in many ways, have much more leeway to fail and recover than large organizations. But the key for large organizations is to realize they can do the same: allow small failures, learn from them, and move forward. If we don’t allow this to happen, we will not only keep losing top talent, we will fail in our honorable mission. Which would really be a loss for everybody involved.

On Craftsmanship: In Code and Wood

One of my favorite hobbies over the years has been woodworking. (Admittedly, one I have done little more than dabble in, to my disappointment.) I have always enjoyed taking bits and pieces of raw material and working them into something both useful and beautiful, starting from a plan. Often, in my case as an amateur, the plans are very structured and concrete, like a set of drawings purchased at a woodworking store. When I have had the privilege of working with more experienced and even expert woodworkers, the plans are more often rough drawings that create a framework to work from and evolve as the piece is being crafted. This is a beautiful blend of precision craftsmanship and artistry.

Creating in this way has always appealed to me. And I think that may be why I have been drawn to software engineering and architecture. Most people in this field for any length of time will understand why I make this comparison. But, it has only recently come to mind how deeply this comparison goes.

The Evolution of a Craftsman

In woodworking you always start with a plan. (At least for anything more than a small trinket.) When you first start out, you are best served by highly detailed plans with precise measurements and step-by-step instructions to guide you through the entire process. These plans help you not only create a better end product, but also understand the hows and whys, and build your skills by following time-tested procedures. As you learn these procedures and gain confidence in your abilities, you can begin using less detailed plans or skipping over the pieces you are familiar with, slowly building to a point where you can sort of freestyle: building on the knowledge and skills you have acquired from others and interpreting them in new and interesting ways.

This is exactly how I see the process of building software. Beginners should take advantage of architectural plans, best practices and well-established design patterns to build up a plan for how they are going to build their software. As they build, they can lean on more experienced developers, books or, maybe most commonly today, the web to learn techniques for implementing these plans. As they gain confidence in their skills, they will begin to branch out and find new ways to solve problems, eventually finding solutions to new problems that have never been thought of before.

As you continue in your craft you begin finding ways to produce better products faster. An example in woodworking might be building a custom jig for some operation that you need to perform repeatedly in a very consistent, precise manner. As a software craftsman you would likely create a common library or script to do the same.

Becoming an Expert

While I don’t believe I am, or ever will be, done improving my craftsmanship at woodworking or coding, I do believe that I have reached what most would consider advanced or maybe even expert level as a coder. (Certainly not as a woodworker!) A big part of becoming an expert at any craft is getting to the point where the questions change. You get past questions about the basic mechanics of how to solve a specific problem and begin asking bigger questions. You begin thinking more systemically, asking different, more holistic questions: How do the various possible solutions to this problem impact the system as a whole? How can my approach to this one piece lead to a better overall solution?

From observing experts in other fields such as woodworking, I believe this is true for all crafts. A woodworker may look at a table they are building and ask questions like “What would it mean if I used biscuit joints vs. dowels?” This is no different in my mind from when a software architect or engineer asks “How would the system be affected if I chose a distributed key/value store here instead of a centralized relational data store?”

You Will Never Be Done

One of the great things about being a craftsman in any field is that you will never be done. There are always new techniques being developed, new tools to learn and new problems to solve. And you can never stop trying to get better. Even the same old repetitive tasks should be re-evaluated constantly for ways to do them better or faster. I will never be satisfied that I have found the one true best way to solve a problem. This is a beautiful, never-ending process. It keeps me going and builds my hunger to take my craft further.

YADA (Yet Another DevOps Article)

Yet another article about DevOps? When will it ever end? Well, I can say from experience, we are very far from a consensus on what it means, so it will probably not end anytime soon. Much of that has to do with the fact that it is different for every company. Each company needs to determine its own definition. But, in order to reach that definition, you must have a framework to work from. When it works, and works well, everybody is happy. When it doesn’t work, it can be very ugly. So, here is my attempt at some guidance as I see it from the trenches. Having been through several permutations of DevOps or DevOps-like migrations, I can just tell you what I have seen work and what hasn’t.

My Basic Tenets of DevOps

DevOps, or whatever you want to call it, is fundamentally a shift toward unifying the process of creating software across business units. To achieve this unification you must, absolutely MUST, have buy-in from all. There is no “throwing over the wall” in DevOps. From the time an idea begins to germinate, there should be representation from all parties. This is where my vision differs from many others. I believe that more than just development and operations should be involved. I don’t see a world where some representative from product comes up with an idea, hands it off to development and says “build it” working in the new world of rapidly changing end-user expectations. I believe there needs to be a constant feedback loop. This is because, in the end, we are all the product owners, and we must be aware of what is being delivered and what is being experienced by those interacting with it.

Don’t get me wrong, I don’t believe the “everybody contributes equally to each part of the process” vision works, especially once you start bringing in non-technical representatives. The following are some basic tenets that I believe should be present on any team that wishes to carry out this vision:

  • Product (or whoever is responsible for curating products in your organization) should be in close contact with development, architecture and operations from inception to…
  • Every member of the team should be empowered to say they see something going astray or they have concerns about some decision without any fear of shame or repercussions.
  • The technical members of the team should be capable of doing all critical pieces of the process. This is to facilitate an active self-support model. Not every member needs to know every aspect in depth, just enough to diagnose a problem. They should then be empowered to take action on that problem, be it writing a patch, testing and deploying it, or raising it to someone on the team with more in-depth domain knowledge.
  • The team should be empowered to deploy their product to all environments. This needs further detail, as it is a touchy subject that must be evaluated on a case-by-case basis. But the fundamental concept must exist.

DevOps For Managers

I want to spend a little time on this subject. This can be a real hang-up to the adoption of DevOps in many organizations. But I honestly believe that with the proper process and measures taken in advance, there should be no fear in giving a team the “keys to the kingdom,” so to speak.
I do not advocate, for most companies (many online start-ups may be exceptions to the rule), simply allowing all developers to check in code and deploy it directly to production. You often hear developers and the like espousing “Continuous Deployment” and citing “Well, this is how Facebook does it.” [See their paper here] But the fact that Facebook does this doesn’t make it right for you. In fact, it makes it right for very few companies other than Facebook. Secondly, this is how Facebook deploys its front end. Not the API that their front end is built upon and that so many others have become dependent on being up. If a user notices a glitch in the FB UI, they are not likely to squawk too loudly. If a large consumer of the FB auth API suddenly cannot authenticate its users, that will be noticed and may even make the news.
What I do advocate, however, is empowering teams to do their own deployments. Don’t make this a stilted process. This leads to a feeling of responsibility to “own your code”. Developers are much more likely to deliver quality code if they know they are going to be the ones deploying it and, ultimately, accountable for any issues it may have. Empower your teams. Treat them like responsible adults who can make sound decisions about the fitness of a product for general release. While there may be some pitfalls along the way, I guarantee they will return the favor by making you look very good.

DevOps For Team Members (The Flip Side)

So, up to now, I have been speaking generally, or maybe more specifically to managers and team leaders. Now I want to speak to the teams directly. The implementers of the product. If you want the level of empowerment that DevOps requires, you had better care about what you put out there. There is no half-assing here. If you screw up, there is no place to point the finger but right back at you. Here is some advice from the trenches to help you achieve that end goal. Nothing too new, but way too often overlooked or brushed aside.
  • Test Driven Development is your friend! Embrace it! Learn to love it! I, of all people, know full well how hard this can be. And, to be completely honest, I still struggle with it to this day. But I have developed my own rhythm for achieving TDD. It doesn’t fit anybody’s strict definition, but it works for me. Find what works for you. Just make sure you feel confident in your tests and their ability to flush out issues.
  • Automate everything you can! Obviously, the tests mentioned above should be run on each check-in. If you broke it, fix it. Don’t ever go home with a broken build. (I know, all the mantras you’ve been hearing for years. But they’ve all become mantras for good reason.) But go beyond this: automate your deployments as well. Use tools like Fabric, Puppet, or Chef where available. Even better, look into containerizing your apps with a tool like Docker. If you get to the point where you can deploy exactly the same code (or even better, the same container!) over and over, you will become more confident in your ability to deploy to any environment at any time. Make sure you also automate the rollback of these deployments. If something goes wrong, you will be grateful to have a quick and easy way to get back to the previous known-good state. Also, until you are completely comfortable with your automation, practice in a development environment. Through the normal develop->test->develop cycle, you will eventually get to a point where you feel comfortable. Comfortable enough to even deploy continuously. Or, more likely, much more frequently than you do now.
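The test-first rhythm from the first bullet can be sketched like this. The expected outcome is written down before the code exists; slugify and its rules here are a made-up example, not a real library:

```python
# A tiny test-first example: the expected behavior is pinned down first,
# and the code isn't "done" until it passes.

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"


def slugify(text):
    # Keep only alphanumerics; collapse everything else into single dashes.
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    return "-".join(words)


test_slugify()  # passes; if it didn't, we'd fix slugify, never the test
print("tests pass")
```

Notice the direction: when a test fails, the implementation changes, not the expectation.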
If you take the above steps, you will, with time, get to the point where the leaders in this arena are. And don’t wait for your organization to tell you to. Start now! There is no reason you can’t put everything in place, get comfortable with it, and then present it to management. You can be the hero. Or, at the very least, you will have prepared yourself to work at an organization that “gets it”.

A Benefit To All

The end result is that we will all benefit. Features, fixes and changes will get to production and in front of clients sooner. This leads to a more immediate feedback loop that results in more changes and, ultimately, a better product.

Why am I 4everinbeta?

4everinbeta is a nom de plume that lets me express my thoughts on various subjects. Mostly technical, but not always. The title reflects my philosophy of life in general: I will never be done. I’m constantly tweaking. Adding new features. Removing others that are no longer useful. And, just like any beta product, I am chock-full of defects. This is reinforced every day in my interactions with others. (My wife and child inform me of new defects every day!) It is the other individuals in my life and my interactions with them, both personally and virtually, that uncover these defects. It’s my responsibility to track these defects and strive to fix them.

In my personal bug-tracking system I have several classifications as outlined here:

  • Sev1 – A complete show-stopper. If I encounter some aspect of my personality or behavior that intentionally or inadvertently results in physical, mental, emotional or other harm to another, I must take immediate action to resolve this. This is inexcusable.
  • Sev2 – If something that I do prevents myself or another from making progress toward a goal or aspiration, I must take corrective action as soon as possible. Blocking an individual’s progress in life happens more often than you may realize. When I realize I have perpetrated such an act, I take it upon myself not only to prevent it from happening again, but to attempt to rectify any harm it may have already done. (Although this is often not possible.)
  • Sev3 – This is a defect that may slow progress toward a goal or aspiration but does not bring that progress to a full stop. Get to it as soon as you reasonably can. But keep your eyes on the goal.
  • Sev4/5 – Just annoying to myself or others. I’ll try to work on these because I really don’t want to annoy you. But, if I don’t fix it, deal with it. 😀

As you can tell from this list, “do no harm” is an admirable goal in all aspects of life, and one I personally subscribe to. (And I feel the world would be a better place if it were just a part of the collective human psyche.) That said, you cannot focus entirely on your defects. You must keep growing. Pushing your boundaries beyond your comfort zone. Only by doing so can an individual reach their full potential. As a direct result, you are inevitably going to make mistakes, and defects are going to be added to your tracker. But that is OK. Personally, I feel that at the end of my life I will be happy if my defect list is free of Sev1 defects. And ecstatic if I have managed to eliminate all but the 4’s and 5’s. (Sorry folks!)