

On the Culture of DevOps Part 1 – Origins

Written by Stephane Odul (linkedin.com/in/sodul)

If you work in the software industry you must have heard the term DevOps by now. It is one of the hot words in Silicon Valley at the moment, and it seems that everyone wants to do DevOps. Unfortunately, just like Agile, which is often a buzzword used to justify a lack of planning and to pretend that the chaos is designed, the term DevOps is misused more often than not. The “we need to do <buzzword>” attitude is not new; Dilbert highlighted it as far back as 1995 with the mauve SQL database. Two inaccurate definitions of DevOps are especially common in job postings. The first is “Operations that develops”, usually a System Administrator writing Ansible, Puppet, Chef, or SaltStack scripts as part of their duties. This is nothing new: Operations has used Puppet since 2005, and System Administrators have been writing shell scripts to automate their tasks for decades. The second is “a Developer who works in Operations”, a Software Engineer tasked with all operational duties, who often lacks the experience and has a different mindset than a true Operations veteran. While these positions are not entirely incompatible with DevOps, their implementation is definitely not in its spirit. So what is DevOps?

DevOpsDays

DevOps was coined in 2009 for the DevOpsDays conference in Ghent, Belgium. It was billed as “The conference that brings development and operations together”. Note that it was not called the conference that “changes the way we do operations”; it was about “development and operations”, “together”. DevOps is not, and cannot be, a one-sided change in how you manage operations; if anything, it is mostly a cultural change that must happen on the development side. Today the DevOpsDays organization has grown to dozens of events each year all around the world.

Scaling Vertically, the Old-Fashioned Way

Traditionally, Operations sits at the receiving end of Development, with extremely limited influence and input on the architecture and features delivered. Development hands over requirements such as the hardware architecture needed to run the software: the OS, RAM, CPU, and so on. The Operations team then takes those requirements, estimates the cost of obtaining the resources, and runs with them until scaling up is physically impossible.

In the traditional model the development team often used a Waterfall process, where changes were infrequent and a QA team would spend significant resources testing each version before the hand-off to Operations. In the mid 90s the focus was on running Enterprise Software on expensive mainframe computers, where designing your product as a monolithic service made sense and was the norm. Expensive server hardware from Sun, Silicon Graphics, Cray, IBM, and a myriad of others was popular. With a model of vertical scaling and monolithic software designs, the long release cycles and Responsibility Silos were appropriate.

Commodity Hardware

Some of the fundamental changes in software development can be traced to the rise of personal computers, then the rise of the Internet. Google spearheaded the move away from vertical scaling by embracing commodity hardware, and even created a market for cheap, defective RAM modules to use in its datacenters at the time. Expectations for software quality were also very different: it was common for computers to crash daily with the infamous BSOD, to the point that Plug and Play was nicknamed Plug and Pray. Google changed these expectations by providing excellent uptime: when was the last time google.com did not work for you? Their initial success was built on using low-cost hardware and having the software developers build fault tolerance into the software stack. This is engraved in Google’s DNA today: since failure is inevitable, architect your software to be resilient and to scale horizontally. In this model a single instance failure is a non-event, and nobody gets paged in the middle of the night because one disk got full.
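The principle above can be sketched in a few lines of code. This is a hypothetical illustration, not Google’s actual stack: the `call_with_failover` helper and the `healthy`/`broken` replicas are made-up names standing in for any horizontally scaled service where a client simply retries against another replica when one instance fails.

```python
import random

def call_with_failover(replicas, request, max_attempts=3):
    """Try a request against randomly ordered replicas.

    A single instance failing is a non-event: we just move on to the
    next replica, and only give up if every attempt fails.
    """
    last_error = None
    for replica in random.sample(replicas, min(max_attempts, len(replicas))):
        try:
            return replica(request)
        except ConnectionError as exc:
            last_error = exc  # record and move on; nobody gets paged
    raise RuntimeError("all replicas failed") from last_error

# Hypothetical replicas: one is down, the others serve the request.
def healthy(req):
    return f"ok:{req}"

def broken(req):
    raise ConnectionError("disk full")
```

With fault tolerance in the client rather than in expensive redundant hardware, `call_with_failover([broken, healthy, healthy], "q")` still succeeds even though one replica is down, which is exactly why a full disk on one machine stops being a midnight emergency.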

Instant Gratification

The rise of broadband and then smartphones has also changed expectations. Users are now always connected, and Instant Gratification matters more than ever. If your online service does not work right now, your customers will simply move on and probably never come back. Competition is fierce, and in the new economy, where you must catch up to and then one-up your competitors, staying behind for years, months, or even weeks can be the death knell for your business. On the development side this quickly led to the rise of Agile, Scrum, and other processes that enable rapid development cycles. Combined, these changes created the conditions where the traditional silos between Development, QA, and Operations could no longer be efficient. To achieve the fast pace of iteration that Continuous Integration, Continuous Delivery, and ultimately Continuous Deployment can provide, the walls need to be broken down in favor of direct, two-way communication, collaboration, and immediate results.