The Broad Brush of DevOps

Having been in the software development business for 25 years now, it is truly amazing to see how many old ideas are brushed off, polished up, and re-branded as something new — or how a "new" concept that begins to take off gets broadly applied to everything else. DevOps is a great example.

DevOps, at its foundation, is a set of guiding principles that you can apply across the SDLC (software development lifecycle) to achieve some real benefits. At its roots, DevOps is all about breaking down the silos between development and operations (see Systems Thinking in The Phoenix Project). All good. Not rocket science, but something worth striving for. But how new are the concepts, really? One of the websites I frequent uses the phrase "helping finish what agile development started" as a subtitle. I love that sentiment. Agile development provided new ways of looking at software development to achieve better results. Contrary to what some managers I have had believed, agile is not rocket science. The biggest hurdle for some to get over was simply that it was different. DevOps is now doing the same thing.

In today's DevOps "market", you can most likely find tools and processes across the SDLC that have been re-branded as part of a DevOps solution. IBM is a master of this, and rightly so. Re-branding is a tried-and-true way of re-inventing offerings to make them appealing to a new audience. But let's not get caught up in the hype. Implementing a change management solution is still a good thing to do, regardless of whether it falls under the DevOps umbrella or not. Automating builds has always been and will continue to be a good thing, even when the next buzzword craze comes along.

The great thing about this DevOps wave, to me, is that it has refocused the spotlight on some areas of the SDLC that have traditionally been under the radar and underappreciated. There were always the people who could make it happen, who wrote and maintained the magic scripts, and who had special skills no one else did. The Phoenix Project highlighted the fact that these people can be heroes, and also the bad guys, when it comes to mission-critical deployments. Learning how to take advantage of "Brent's" knowledge without making him a bottleneck is an important lesson. Every organization has a few Brents, and the DevOps wave (thanks in part to the book) has helped elevate the need to capture Brent's knowledge in repeatable automation. A continuous delivery solution like IBM UrbanCode Deploy can, in effect, create many Brents, available 24/7.

So be prepared for the onslaught of marketing campaigns that re-brand every software development tool and process as "Your DevOps Solution". But speaking as a former "Brent", I am glad to see the Brents of the world in the spotlight.


Taking Continuous Delivery to the Max

Now that we have some large UrbanCode customers under our belt, we can look at some of the metrics involved in deploying a continuous delivery solution like IBM UrbanCode Deploy. There are definitely some hard measurements that can be taken. You can easily look at a simple metric like the time it takes to perform deployments. Time savings is the most obvious benefit that comes from automation. Don't forget to account for the time it takes to create the automation in the first place; but once it is in place, the more often it is used, the bigger the return on that investment. Across the landscape of an enterprise and over a year or two, your investment in automation through a continuous delivery solution can pay for itself.
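The break-even math here is simple enough to sketch. The numbers below are purely illustrative (they are not from any customer data); the point is that the up-front cost of building the automation is amortized over every deployment that follows.

```python
import math

def break_even_deployments(manual_hours, automated_hours, build_cost_hours):
    """Number of deployments after which the automation pays for itself.

    manual_hours:     time for one manual deployment
    automated_hours:  time for one automated deployment
    build_cost_hours: one-time cost of building the automation
    """
    savings_per_deploy = manual_hours - automated_hours
    if savings_per_deploy <= 0:
        raise ValueError("automation must be faster than the manual process")
    # First deployment at which cumulative savings cover the build cost.
    return math.ceil(build_cost_hours / savings_per_deploy)

# Example: a 4-hour manual deployment cut to half an hour, with 40 hours
# invested in building the automation.
print(break_even_deployments(4, 0.5, 40))  # 12 deployments to break even
```

After the break-even point, every additional deployment (and every additional team reusing the same automation) is pure return, which is why the savings compound across an enterprise.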

But let's be honest: automation has been around for years, and no sysadmin is on the job for more than a day without writing a script to automate something. Automation has always been a valuable component of deployments. A continuous delivery solution helps capture that automation in reusable chunks so it can be replicated across the enterprise. But I will say I have run across some organizations that were pretty good at this before the phrase "continuous delivery" was first uttered.

So what are some other long poles in the deployment tent? I once consulted at a customer that had a testing data center so large that you literally couldn't see the opposite wall. There was more hardware in that room (and consequently more required power and cooling) than I had ever seen. Despite this, there was a six-month wait to get an available test environment. You would think that with that much computing power under one roof, you would have immediately available systems. However, at any given time more than half of the systems in that room were in "transition" from one testing environment to another. The process of provisioning an environment for a specific application (at that time) took a lot of manual labor, and your request was placed in a queue that took time to work through.

So to me, the biggest bang for your buck in continuous delivery comes from combining deployment automation with system provisioning. Taking it a step further, provisioning a physical system is one thing, but cloud solutions bring even more to the value proposition by removing the need for physical deployment targets.

Improvements to IBM UrbanCode Deploy 6.x have been made to bring integration with provisioning and the cloud as a standard capability. I will spend more time on this in a future post or two, but here is the high-level process.

1.  Prepare the Cloud – a deployment pattern is created in the cloud catalog. This pattern specifies the process of creating the infrastructure for an application. The pattern codifies enterprise standards and ensures consistent infrastructure. Part of the pattern should be the installation of the IBM UrbanCode Deploy agent, so that when the nodes of the pattern boot up, they communicate with the UrbanCode Deploy server.
2.  Import the Cloud pattern into UrbanCode Deploy – this creates a new Resource Template that has an Agent Prototype for each node in the pattern. Properties that need to be specified for the pattern are captured as UrbanCode Deploy properties.
3.  Create a new Application Blueprint that specifies the Resource Template created above.  The blueprint binds application information (components) to the Agent Prototypes in the template.
4.  Now create a new Application Environment based on the Application Blueprint.  You specify your Cloud connection properties as well as any properties needed by the Cloud pattern.
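The four steps above can be sketched as a chain of data transformations. To be clear, none of the names below come from the actual UrbanCode Deploy API — this is a purely illustrative model of how a pattern becomes a template, a template becomes a blueprint, and a blueprint plus cloud properties becomes a provisioned environment.

```python
def import_pattern(pattern):
    """Step 2: importing a cloud pattern yields a Resource Template
    with one Agent Prototype per node, and carries over the pattern's
    properties."""
    return {
        "name": pattern["name"] + "-template",
        "agent_prototypes": [{"node": n} for n in pattern["nodes"]],
        "properties": dict(pattern.get("properties", {})),
    }

def create_blueprint(template, components):
    """Step 3: an Application Blueprint binds application components to
    the template's Agent Prototypes (here, naively one per prototype,
    in order)."""
    nodes = (p["node"] for p in template["agent_prototypes"])
    return {"template": template["name"],
            "bindings": list(zip(nodes, components))}

def create_environment(blueprint, cloud_props):
    """Step 4: an Application Environment is the blueprint plus the
    concrete cloud connection properties needed to provision it."""
    return {"blueprint": blueprint, "cloud": cloud_props,
            "provisioned": True}

# Step 1 happens in the cloud catalog itself; here it is just input data.
pattern = {"name": "web-app", "nodes": ["web-node", "db-node"],
           "properties": {"region": "us-east"}}
template = import_pattern(pattern)
blueprint = create_blueprint(template, ["app-war", "db-schema"])
env = create_environment(blueprint, {"endpoint": "https://cloud.example"})
```

The key design point the sketch tries to capture is the separation of concerns: the pattern owns infrastructure standards, the blueprint owns the application-to-infrastructure mapping, and only the environment carries live cloud credentials.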

The result of all of this is a newly provisioned Cloud environment with your application deployed to it.  Nice.

In a future post or two, I will go into some of the specifics of this solution.  But needless to say, the value proposition of this solution is the promised land of continuous delivery.