UCD, UCD+P, ICO, and PureApp

As I mentioned in my last post, there are lots of changes happening. My role has continued to evolve, and I have now landed as a cloud advisor. I am excited, but enough about me.

What my new role has afforded me is the chance to continue exploring and understanding the various IBM cloud software solutions out there. It is an interesting landscape and it is changing faster than ever. This post delves into IBM cloud provisioning and deployment solutions but leaves the on-premises/off-premises question OFF the table. For the most part, this discussion is concerned with the ability to automatically stand up an application environment regardless of its physical proximity to the developer. Before I dive into specifics, let’s cover some general capabilities. The following list is not exhaustive; the products I talk about can do much more. These are just the capabilities I am interested in for this post.

For the purpose of this post, the deployment stack, shown here, is defined as the sum of all the layers of technology that must be created for a full application to execute.

[Figure: the deployment stack]

Provisioning – As I said before, in this post I am interested only in the ability to automatically stand up new environments. This also assumes some type of pattern or infrastructure-as-code mechanism to enable the automation.

Deployment – For the sake of this post, deployment refers to the automated deployment of an application including its specific application configuration applied to a provisioned environment.

GO LIVE tasks – Those tasks that must occur for an application to GO LIVE that go above and beyond simply provisioning an environment and deploying the application: obtaining the appropriate approvals, ensuring the app is properly monitored, putting backups in place, applying proper security and endpoint compliance policies, setting up notifications for when things go wrong, etc. These important tasks are part of every operations team’s set of responsibilities and have a large impact on production deployments.

Pattern – The ability to capture part or all of the stack definition in a reusable artifact. There are two pattern capabilities that we will talk about: vSYS (virtual system) patterns and HEAT.

Let’s now take a look at the IBM tools currently in this space. Big disclaimer here: there are many, many additional solutions in the IBM portfolio, many of which are highly customized and include a services component. These types of solutions are highly desirable in a hybrid cloud scenario where you need brokerage services that not only serve the line of business but also manage the provisioned environments across your hybrid landscape, along with cost management and chargeback across that same landscape. There are also outsourced solutions from IBM that target unique cloud platforms. For the purposes of our conversation here today, we assume we have an IaaS cloud solution that we want to take advantage of. In lieu of a big description of each product, I will simply list the capabilities it provides (there are many more, but these are the ones of interest for this post).

IBM Cloud Orchestrator – provisioning, patterns, deployment, go live tasks, BPM

UrbanCode Deploy – deployment

UrbanCode Deploy with Patterns – provisioning, patterns, deployment

PureApplication – provisioning, patterns, deployment

So that doesn’t help much. Lots of overlap, and obviously lots of details under the covers. Each product does a thing or two very well, so let’s look at the list again and I will expand on each capability, highlighting its strengths.

IBM Cloud Orchestrator

provisioning – ICO currently can provision nodes based on both pattern types. ICO can provision to many virtual environments, both OpenStack-based and not.
patterns – ICO currently supports two pattern technologies, HEAT and vSYS (virtual system). vSYS patterns are the legacy pattern type. HEAT patterns are based on OpenStack and therefore require an OpenStack implementation. ICO has full editing capabilities for vSYS patterns; however, ICO does not provide an editor for HEAT patterns.
deployment – while ICO doesn’t have a separate deployment capability, you can build application components and Chef scripts into your vSYS patterns that ultimately deploy applications as part of the provisioning process. However, this capability is not very scalable, which is precisely why deployment tools like UrbanCode Deploy were created. HEAT patterns, as defined by OpenStack, do not contain deployment-specific capabilities (more details below).
GO LIVE tasks – ICO has a large list of pre-configured integrations to common operations tools to manage your go-live tasks.
BPM – ICO has a robust BPM engine allowing you to craft a detailed process that can be initiated through a self-serve portal. This lets you string together your provisioning, deployment, and GO LIVE tasks into a single user-driven process.

UrbanCode Deploy

deployment – UCD’s strength is application deployment including environment inventory and dashboarding.

UrbanCode Deploy with Patterns

deployment – UCD+P includes UrbanCode Deploy and relies on it to deploy application components.
patterns – UCD+P is a full HEAT pattern visual/syntax editor. UCD+P also incorporates HEAT engine extensions that allow the HEAT pattern and the engine not only to provision nodes but also to execute Chef recipes from a Chef server and deploy applications using UrbanCode Deploy. The resulting HEAT pattern is truly a full stack representation, as depicted in the picture above, captured in a single pattern artifact (see the sketch below).
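To give a feel for what such a pattern looks like, here is a minimal sketch. A HEAT template is plain YAML/JSON, shown here as a Python dict in its JSON form. The OS::Nova::Server resource is standard OpenStack; the IBM::UrbanCode resource type and its property names are my recollection of the UCD+P extensions and should be treated as illustrative, not authoritative.

```python
# A minimal full-stack HEAT pattern sketch: one node plus one UCD-deployed
# application component. Anything marked "assumed" is not verified against
# the UCD+P documentation.
full_stack_pattern = {
    "heat_template_version": "2013-05-23",
    "description": "Node + middleware + application in a single pattern",
    "resources": {
        "web_node": {
            "type": "OS::Nova::Server",   # standard HEAT: provision the VM
            "properties": {"image": "RHEL-6.5-base", "flavor": "m1.medium"},
        },
        "web_app": {
            "type": "IBM::UrbanCode::SoftwareDeploy::UCD",  # assumed extension type
            "properties": {                                  # property names assumed
                "server": {"get_resource": "web_node"},      # target the node above
                "component": "JPetStore-web",                # hypothetical UCD component
                "version": "latest",
            },
        },
    },
}
```

The point is that the node, its middleware, and the application deployment all live in one artifact, which is exactly the full stack claim above.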

PureApplication

provisioning – PureApplication software has the ability to provision nodes. (I will leave it at that without going into all the different flavors; for the purpose of this discussion we are interested in PureApplication software that manages PureApplication systems.)
patterns – PureApp has numerous vSYS patterns that help to automate the installation and configuration of IBM middleware. The provisioning engine is robust and can orchestrate the configuration of multiple nodes that would make up an IBM middleware topology.
deployment – in the same sense as ICO, you can add application deployment information into your patterns but the same limitations apply.

So if we cherry-pick the best capabilities out of each tool, we would grab the go-live tasks and BPM from ICO, the app deployment from UCD, the HEAT pattern editing and HEAT engine extensions from UCD+P, and the IBM middleware patterns from PureApp. There, we are done. Maybe at some point in the future this will be a single PID that you can buy. But until then, is it possible to string these together in some usable way?

Again, a big disclaimer here: I am not an expert on this entire stack, but I hope to drive conversation.

OK, to begin, let’s take advantage of the BPM capabilities of ICO and drive everything from there. The BPM capabilities allow us to construct a process that executes the provisioning tasks and the go-live tasks with the appropriate logic. You can envision a self-serve portal with a web page that asks for the specific information for a full stack deployment: things like the app, the type of topology to deploy the app on top of, the name of the new environment, etc. ICO would need the appropriate pattern to provision from. Here is where ICO can “grab” the HEAT pattern from UCD+P via an integration. It will then execute the provisioning via the OpenStack HEAT engine. This HEAT engine must have the UCD+P HEAT engine extensions applied to it. Since these extensions contain application deployment capabilities, the provisioning process will also utilize UrbanCode Deploy to deploy the application components to the appropriate provisioned nodes based on the HEAT pattern. The BPM process can also call the appropriate operations products to execute the go-live tasks either before or after the provisioning step in the process. Whew!!
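To make that choreography easier to follow, here is a purely conceptual sketch of the BPM-driven flow in Python. None of the function names correspond to real product APIs; they are stand-ins for the integration points just described:

```python
# Conceptual stand-ins for the real integration points; each stub just logs.
def request_approvals(app, env):
    print(f"[GO LIVE] approval requested for {app}/{env}")

def fetch_heat_pattern_from_ucdp(topology):
    print(f"[UCD+P] fetching HEAT pattern '{topology}'")
    return {"pattern": topology}

def create_stack(env, pattern):
    # With the UCD+P extensions applied, this single HEAT call provisions the
    # nodes, runs the Chef recipes, and drives UCD to deploy the application.
    print(f"[HEAT] creating stack {env} from {pattern}")
    return env

def register_monitoring(stack):
    print(f"[GO LIVE] monitoring registered for {stack}")

def configure_backups(stack):
    print(f"[GO LIVE] backups configured for {stack}")

def notify_owner(app, env):
    print(f"[GO LIVE] owner of {app} notified about {env}")

def provision_full_stack(app, topology, env_name):
    """The BPM-driven flow described above, expressed as plain control flow."""
    request_approvals(app, env_name)                  # pre-provisioning GO LIVE tasks
    pattern = fetch_heat_pattern_from_ucdp(topology)  # grab the pattern from UCD+P
    stack = create_stack(env_name, pattern)           # provision + Chef + app deploy
    register_monitoring(stack)                        # post-provisioning GO LIVE tasks
    configure_backups(stack)
    notify_owner(app, env_name)

provision_full_stack("JPetStore", "WebAppTopology", "jpetstore-test-01")
```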

So what is missing? It would be great to take advantage of the PureApp IBM middleware patterns, which are, bar none, the best and most robust installation/configuration patterns available. A general solution here would be to include the appropriate Chef recipes as part of your HEAT pattern to get the middleware installed and configured, and for non-IBM middleware solutions this is your best bet. But there is a lot of orchestration involved in setting up WebSphere clusters, for example, that is not easily accomplished using Chef or Puppet. PureApp has this capability in spades. The best practice in using PureApp today is to use UrbanCode Deploy to deploy the app to a PureApp-provisioned environment, as the patterns in PureApp are not HEAT-based. A lot has been invested in the PureApp patterns, and what the future holds as far as other pattern technologies is uncertain today. It is important to know that PureApp is the preferred solution when it comes to provisioning systems that will utilize IBM middleware.

This is the story so far and I am sure I have holes in my description above. I welcome any feedback and experiences.


UrbanCode Deploy and SmartCloud Orchestrator (extended edition)

It has been a while since I posted on the UrbanCode Deploy and SmartCloud Orchestrator integration to provide a self-service portal for one-click environment provisioning and application deployment. Part 3 finished with the idea that a simple generic process can be called to drive this entire effort. You can use the Self Service Portal of SmartCloud Orchestrator to provide the user interface for all of this.

I also brought up in Part 3 that there are some details that need to be worked out in order to make it a reality. At the time I did not have access to a system to work through those details. Since then I was able to meet with a team that has made it happen. I want to share some of those details for those that are interested.

[Figure: the three-step generic process proposed in Part 3]

If we look back at our generic process that was proposed in Part 3, there were three steps. Step 1 creates the environment from the blueprint. A few issues exist with the default “create environment” step that already exists. First, you may have more than one cloud connection. There needs to be a way to specify a cloud connection in this step. By all means you can specify a default connection, but there needs to be a way to distinguish between cloud connections. The easiest way to do this is to find the UUID of the cloud connection and use that when specifying the connection.

The other area that is not covered is the node properties. Each node in your pattern may require property values. This is totally dependent on the pattern designer and how much flexibility is provided by the pattern. There are those who might argue that you should limit the variability of patterns, but that requires unique patterns for each variable combination. Either way, there is most likely a need to specify node properties.

The easiest way to do this is to use a JSON-formatted string to specify the node properties. There is a CLI entry called “getBlueprintNodePropertiesTemplate” that returns a JSON template that can be used to specify the required node properties for a given blueprint. Use this template as the basis for passing node properties to the provisioning process.

To make all this happen, there is a REST API PUT call that provisions environments. Its URL looks like this: “/cli/environment/provisionEnvironment”. It takes a JSON input that specifies the application name, the blueprint name, the new environment name, the connection, and the node properties, among other things. It makes sense to me to create a new plug-in step that looks similar to the existing “create environment” step but adds two additional properties: the cloud connection and the node properties (in JSON format). You may need to get creative in how you populate the node properties JSON string. Since you can potentially have different node properties for different blueprints, you may need to work with your pattern designer to make things consistent. This is again where good cooperation between deployment and provisioning SMEs makes sense. If you want to expose any of these variables to the end user, you will have to make them part of the Self Service portal on the SCO end and pass those choices into the generic process as process properties.
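To make this concrete, here is a minimal sketch, in Python, of what such a plug-in step might do under the covers. The endpoint is the one named above; the payload field names and the node-properties shape are assumptions to check against the template that getBlueprintNodePropertiesTemplate returns for your blueprint:

```python
import json
import requests

UCD = "https://ucd.example.com:8443"   # hypothetical server URL
AUTH = ("admin", "password")           # use a proper credential store in practice

# Node properties, shaped per the JSON template returned by the
# getBlueprintNodePropertiesTemplate CLI entry for this blueprint.
node_properties = {
    "nodes": [   # assumed structure; your blueprint's template is authoritative
        {"name": "WebNode", "properties": {"flavor": "m1.medium"}},
        {"name": "DBNode", "properties": {"flavor": "m1.large"}},
    ]
}

payload = {
    "application": "JPetStore",          # hypothetical application name
    "blueprint": "WebAppTopology",       # hypothetical blueprint name
    "name": "jpetstore-test-01",         # the new environment name
    "connection": "f8b2c3d4-uuid-of-cloud-connection",  # UUID, per the text above
    "nodeProperties": node_properties,   # assumed field name
}

# The PUT call that provisions environments (endpoint named above).
resp = requests.put(
    f"{UCD}/cli/environment/provisionEnvironment",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
    verify=False,  # self-signed certificates are common on UCD servers
)
resp.raise_for_status()
print("Provisioning request accepted:", resp.status_code)
```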

The next step in our generic process is to Wait for Resources. While this step is easy in principle, it breaks down big time when we get to reality. Each environment can have more than one node. Even if you use the agent prototype name pattern, you will still have trouble determining the agent names. The existing “wait for resources” plug-in step requires you to type in the resources to wait for. This does not lend itself to a dynamic and generic provisioning process.

The best approach here is to write a custom plug-in step to wait for the resources of the specific new environment that you are provisioning. And you will probably have to extend the REST client to add some additional methods. The first step is to get a list of all the resources that you need to wait for. You can use the “/rest/resource/resource” REST API GET method to get a JSON of all resources. You will have to parse this using the environment name (and potentially the base resource) to get all of the resources that are part of the environment. Once you get that list, you can use the “cli/resource/info?resource=” REST API GET call to retrieve a JSON of the resource status. If the “status” field shows “ONLINE”, then your resource is up and running. Another property you may want to create for this plug-in step is a wait timeout value. Waiting forever doesn’t make much sense, and you would like to know if things go wrong. Building in timeout logic will ensure you get some notification back whether things go well or not.
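Here is a sketch of what that custom wait step might look like, again in Python. The two GET endpoints are the ones named above; how the environment name appears in the resource JSON varies, so the filtering logic is an assumption you would adapt:

```python
import time
import requests

UCD = "https://ucd.example.com:8443"   # hypothetical server URL
AUTH = ("admin", "password")

def wait_for_environment(env_name, timeout_secs=1800, poll_secs=30):
    """Poll until every resource under the named environment reports ONLINE."""
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        # 1. List all resources, then filter down to our environment's subtree.
        all_resources = requests.get(
            f"{UCD}/rest/resource/resource", auth=AUTH, verify=False
        ).json()
        mine = [r for r in all_resources
                if env_name in r.get("path", "")]  # assumed filter; adapt to your tree
        # 2. Ask for each resource's status individually.
        statuses = []
        for r in mine:
            info = requests.get(
                f"{UCD}/cli/resource/info",
                params={"resource": r["path"]},
                auth=AUTH, verify=False,
            ).json()
            statuses.append(info.get("status"))
        if mine and all(s == "ONLINE" for s in statuses):
            return True  # everything the environment needs is up
        time.sleep(poll_secs)
    raise TimeoutError(f"Environment {env_name} not ONLINE after {timeout_secs}s")
```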

The final step is the Run Application Process step. We should be able to use the default one here.

I am hoping to post all the code for this solution at some point, but until I get approval you will have to be happy with what is here. I hope this provides some additional ammo for making that self-service portal a reality.

UrbanCode Deploy and SmartCloud Orchestrator (Part 3)

In Part 1 we connected to the cloud provider and created a resource template from a virtual system pattern. In Part 2 we created an application blueprint from the resource template and mapped the application components to the blueprint. We then created a new application environment from the blueprint and the new virtual system was created. Once the environment is created and the agents come online, you can now execute your normal application deployment process onto the newly provisioned environment.

In Part 3, we will go one step further and explore what it would take to satisfy the ultimate goal of providing a true self-service environment creation mechanism for developers. Exploring this a bit further, let’s take a closer look at the use case.

The promise of cloud is that you can have readily available systems at the drop of a hat, or at least much, much faster than ever before. As a developer, I have some new code that I want to test in an isolated environment (we will explore some of the subtle but challenging details behind this idea at the end). It would be awesome if I could go to some portal somewhere and request that a new environment be provisioned and my application deployed to it, without any need for knowledge of SCO or UrbanCode Deploy. Well, this capability exists today.

To begin with, SmartCloud Orchestrator has a robust business process engine that allows you to create self-service capabilities with no need to understand what is under the covers. I have no experience in this myself but have seen the results. You can create processes and human tasks that can be executed from the SCO website. You then categorize your various self-serve processes.

The good part about this is that you have at your disposal a full development environment and run-time that can utilize existing programming concepts. Of course we will have to take advantage of the UrbanCode REST API or command line to be able to drive a deployment process.

Before going on, I want to confess that I have not had the opportunity to get this entire flow working from A to Z. I haven’t had access to an SCO environment and enough free time to make this all work. However, I am putting it out there because I believe this is doable.

In order to satisfy our desire to have a fully provisioned environment with an application deployed, we need to set up a process that can do the job. We can use a generic process to get our work done. There is a REST API call that can kick off a generic process, and our SCO self-service process can use it to drive everything. In principle, our generic process can look something like this:

[Figure: the proposed generic process]


The first step is to provision the environment. This step requires the environment name, the application name, and the blueprint name. These must be passed into this process, and therefore you need to make process properties that can be referenced by this step. NOTE: When we provisioned an environment using the UrbanCode Deploy GUI, it asked us for information that the cloud provider needs. I am not sure how that info is passed here. There is a new command line option called getBlueprintNodePropertiesTemplate, and its description says that it “returns a JSON template of the properties required to provision a blueprint.” This would need to be used and populated to ensure that all of the information is passed to the environment creation process. You might need to extend the create environment step to interrogate the blueprint, get the necessary properties, and ensure they are all populated. If anyone out there has tried this, let me know what you find.
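If you want to experiment with that, the CLI entry can be scripted. A minimal sketch in Python follows, assuming the standard udclient connection arguments; the argument names for the command itself (-application, -blueprint) are my guesses and should be verified against your client’s help output:

```python
import json
import subprocess

# Ask the server for the JSON template of node properties a blueprint needs.
# The -weburl/-username/-password arguments are standard udclient usage; the
# command's own argument names are assumptions to check with `udclient help`.
result = subprocess.run(
    ["udclient",
     "-weburl", "https://ucd.example.com:8443",   # hypothetical server
     "-username", "admin", "-password", "secret",
     "getBlueprintNodePropertiesTemplate",
     "-application", "JPetStore",                 # hypothetical names
     "-blueprint", "WebAppTopology"],
    capture_output=True, text=True, check=True,
)
template = json.loads(result.stdout)
# Fill the template in and hand it back when kicking off the provisioning process.
print(json.dumps(template, indent=2))
```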

The other challenge we have here is that we need to ensure the environment name is unique. There is an option in this step to ensure that the environment name is unique; the plug-in step simply appends a random string to the end of the environment name. This poses a problem for the next step.

Step two is to wait for the environment to be provisioned. We need to wait for the resources (agents) that will come online once the provisioned nodes are spun up. If you remember, the agent names will follow a pattern. However, if we allow the previous step to make the environment name unique, we will not be able to predict the agent names. Therefore, our self-service call to this process needs to specify the environment name and ensure it is unique.

Secondly, we need to somehow determine how many agents to wait for and their exact names. This will be a challenge, and as of right now I am not sure how I would solve it. This would most likely require a new plug-in to be able to interrogate a blueprint, get the names of the agent prototypes, and then construct the list of agents to wait for. Again, some plug-in development is required here.

Once the agents have come up, we can now deploy our application. This step is easy enough, and we can call a well-known deploy process for the application to do the installation. But there is another challenge here. Deployments can either be done using a snapshot, or you have to specify the versions of each component. Snapshots are easy, but if we remember back to our original idea, a developer has some new code that he/she wants to test. Typically snapshots are not created until a group of components have been tested together. So we have a choice. We can either have the environment provisioned from an existing snapshot and then manually add our own updates to test, or we have to provide some mechanism to update/copy an existing snapshot to include a baseline plus the new stuff. This could take on lots of possibilities, but a thorough examination of the use case would be required in order to develop a solution. This also may not be the use case we care about.
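For illustration, here is roughly what the two choices look like as request payloads for an application process (shown as Python dicts; the field names follow the common shape of UCD’s application process request JSON, but treat them as assumptions to verify against your server’s documentation):

```python
# Choice 1 - deploy from a snapshot: one name pins every component version.
from_snapshot = {
    "application": "JPetStore",           # hypothetical application name
    "applicationProcess": "Deploy",
    "environment": "jpetstore-test-01",
    "snapshot": "sprint-12-baseline",     # assumed field name
}

# Choice 2 - deploy explicit versions: every component must be listed, which
# is exactly where the "baseline plus my new code" problem shows up.
from_versions = {
    "application": "JPetStore",
    "applicationProcess": "Deploy",
    "environment": "jpetstore-test-01",
    "versions": [
        {"component": "web", "version": "1.4.2"},            # baseline
        {"component": "database", "version": "1.4.0"},       # baseline
        {"component": "api", "version": "dev-feature-123"},  # the new code
    ],
}
```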

One additional solution would be to go a much more custom route and write an application that does this instead of relying on existing plug-in steps. The REST API and command line API are very rich, and we can ultimately get to and produce all of the information we need. It is nice to rely on existing capabilities, and processes are much more robust and flexible than a custom solution. But as we have seen above, there are enough nuances in this effort requiring plug-in extension or creation that it might make sense to go the fully custom application route.

Happy self-service!!! Let me know if anyone takes on this challenge.

UrbanCode Deploy and SmartCloud Orchestrator (part 1)

The promise of self-serve environments is closer than ever, and by combining UrbanCode Deploy and SmartCloud Orchestrator, you can pretty much achieve utopia. So let’s examine the integration.

The job of SmartCloud Orchestrator is to build virtual patterns that are used to spawn virtual systems.  The beauty of this solution is that these patterns can represent the established deployment platforms for various technologies. For example, you can build a pattern that represents the standard web application topology for a given enterprise. This makes it easy for application teams to understand what they will be provided and what they must target for their applications.  It also makes the self-serve environment story achievable.  I simply spin up a pattern and I have an environment ready to go.

[Figure: the application stack]

But let’s examine in a bit more detail the application stack and the responsibilities of the cloud solution and the application deployment solution. The job of the cloud solution is to utilize its virtualization environments to create nodes with specified storage, memory, and processors, install the OS on top of the raw node, and also install the various middleware solutions that are necessary. The interesting part is the middleware configuration. It is my contention that the cloud solution should provide a configured middleware solution ready to accept an application. That means, using a WebSphere example, that WAS is installed on all of the nodes, the necessary profiles are created, the node agents are installed across all nodes and connected back to the deployment manager, and clusters are created (if necessary). This is the value of the orchestration part of SCO. The WAS configuration can do nothing in this state, but it is ready to accept an application. It is now UrbanCode Deploy’s job to install and configure the application onto the resulting virtual system.

There are a few things to keep in mind when creating virtual system patterns. Work with your pattern architect to ensure that these items are accomplished. First, you need to have the agent installed onto each node. Included in the UrbanCode Deploy plugins zip file is an agent package that can be added to SCO and then added to each node in your pattern. The agent name will be specified later, but the UrbanCode Deploy server and port need to be hard-coded into the pattern. Also, ensure that the agent installation is done as close to the end of the node stack as possible. Once the agent is installed and running and comes online in the UrbanCode Deploy server, you can then assume your node is ready to go. Finally, work with your pattern architect and ensure he/she understands the details that are important to you. For example, you may need to know the installation location of your middleware solution. That location is important to the installation process, and it should be made known to the pattern engineer that it can’t be changed without some notification.

So now let’s assume our pattern is ready to go and meets our needs. We can now integrate UrbanCode Deploy with SCO to quickly and easily create environments. Step one with UrbanCode Deploy is to make a connection to our cloud provider. This is easily done via the Resources tab in UrbanCode Deploy. Get the credentials from your cloud administrator. You need a user with permissions to view and instantiate virtual system patterns.

[Figure: creating the cloud connection]

Once you have your cloud connection, you can then create a resource template. Resource templates are the integration point between SCO and UCD. A resource template represents the virtual system pattern in UrbanCode Deploy. The resource template is where you spend your time thinking about how this pattern will be used by applications in UrbanCode Deploy. Creating the initial resource template from SCO is easy. On the resource template page you have the option to import a resource template from the cloud. Using your existing cloud connection, you can interrogate SCO for its list of virtual system patterns. Pick the one you are interested in. Once completed, you get a simple template with each node of the pattern represented by an agent prototype.

Notice the agent prototype names in the resulting template below. These names may mean something to a pattern engineer, but they don’t mean much to an application architect. This is another area where you can work with your pattern engineer to provide meaningful node names.

[Figure: the imported resource template with agent prototypes]


Now is when you need to think about the deployment processes that will use this pattern and flesh out the template so that it is usable by applications. The first step will be to better organize your template. Application teams will need to map their application components to the resource template via an application blueprint. Therefore, make it easy on them and create folders that hold the agent prototypes. These folders serve two purposes. The first is that they help categorize the nodes and make the purpose of each node clear. In our example above there are two nodes that look exactly alike, but actually one has a middleware solution installed on it and the other has a database installed. There is no way to tell via their names. Use folders as a way to organize your template, something like this.

[Figure: the resource template organized with folders]

I also added a top-level folder in my example here. I should have named it something better: when an environment is created from this template, this folder becomes the top level in your resource tree. Name it something meaningful.

The other reason for adding a folder structure to your resource template is to be able to include properties as part of your template. This is where you can make your template valuable to applications.  Remember that this single template can be used by many applications. After all, a standard topology should be the default used by all applications of a particular technology type. Put your deployment process designer hat on and think of the properties that will be needed by a deployment process. For example, you may want to expose an install location for a middleware solution. Maybe a URL for the deployment manager of Tomcat, for example. Maybe a port number needs to be exposed. Put those properties for each node in the folder that holds the agent prototype. The resource properties are then readily available to any component process that utilizes this template.

Also at the top-level folder (Top in my example), I typically choose to include properties that identify the agent prototype names for all nodes involved in the pattern. In a multi-node situation it is typical to need the IP address of the database node, for example, as part of the deployment process on the app server node. By having the agent name readily available as a property, you can easily interrogate the agent’s properties in a single step to find its IP address.
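As a sketch of how a deployment step might cash in on that property, the flow is: read the agent name from the resource property, then ask the server for that agent’s properties to find its IP address. Everything here, from the property names to the REST path, is an assumption for illustration only:

```python
import requests

UCD = "https://ucd.example.com:8443"   # hypothetical server URL
AUTH = ("admin", "password")

# In a real component process this value would arrive via property
# substitution, e.g. ${p:resource/db.agent.name} (a property name I invented).
db_agent_name = "WebAppTopology.DBNode.1"

# Look up that agent's properties to find its IP address. Both the REST path
# and the property key below are assumptions; adapt them to your server.
resp = requests.get(
    f"{UCD}/cli/agentCLI/info",
    params={"agent": db_agent_name},
    auth=AUTH, verify=False,
)
resp.raise_for_status()
agent_info = resp.json()
db_ip = agent_info.get("properties", {}).get("agent.ip.address")  # assumed key
print(f"Database node IP for the app server's deployment config: {db_ip}")
```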

In the next installment of this series, we will map an application to this resource template using an application blueprint. Happy deploying!!


Platform as a Service – Built-in DevOps

I like to keep myself in tune with what is going on in the world with all things DevOps, so I frequent a few places (the LinkedIn DevOps group, DevOps.com, etc.).  There are lots of good discussions and topics out there.  These types of fast-moving sites are a must to keep up with the world.  From a technical standpoint, the topics usually center around the various tools and techniques involved in automation.  There is no arguing the fact that many shops out there that embrace DevOps start at the low technical level and work their way up.  I call this Startup DevOps (I doubt I can take credit for this term).  Most startups have very smart people and very little bureaucracy to cut through.  Get the job done faster and everyone is happy.  Using tools like Chef, Puppet, Vagrant, Glu, Jenkins, GIT, RunDeck, Fabric, Capistrano, CFEngine, yada yada yada, you can get the job done.  You can craft a very significant and powerful set of automation at very little cost (open source) and provide the fast-moving infrastructure to handle the fast-moving pace of startups.

Being from IBM, I tend to look at things a bit differently.  Most of the customers I deal with are at the other end of the spectrum.  With IT departments having staffs in the many thousands, there is bureaucracy at every turn.  Large enterprises like this tend to spend money with IBM (and others like us) to transfer risk.  Spend umpteen million with IBM and you only have to look in one direction to point the finger.  So IBM tends to create products that cater to these types of clients.  I use the term Enterprise DevOps for this situation (again, I can’t take credit for the term).

IBM is spending billions (yes, with a B) on solutions that cater to these types of customers.  Cloud solutions are where the bulk of the effort is focused these days.  IBM offers quite a bit of choice here.  If you want private cloud, IBM has PureApplication Systems and SmartCloud Orchestrator, which provide the Infrastructure as a Service (IaaS) capabilities.  Managing servers, storage, and network in an incredibly flexible way is what this is all about.  IBM also has a public cloud offering in SoftLayer.  Let IBM manage your infrastructure and you don’t need a data center anymore.  Nice.

Platform as a Service (PaaS) is the next big thing.  IBM is now introducing the ability to assemble a platform dynamically and provide all of the plumbing connecting those platform pieces in an automated way.  We have even connected our DevOps in the Cloud solution (JazzHub) with the IBM PaaS solution (BlueMix) in a way that offers a true cloud-based development environment that will automatically deploy to your PaaS infrastructure, all without lifting a finger.  By the way, take a look at this short YouTube video to get a quick overview of the landscape.

Let’s take a closer look at BlueMix and JazzHub and see what I mean.  First, BlueMix allows you to create an infrastructure by assembling services.  You can start with some boilerplate templates that have already wired together infrastructure and services.  For example, the Java + DB Web Starter gives you a WAS Liberty Profile server and a DB2 database, all installed and ready to go.  This boilerplate gives you a sample application that runs as soon as your server starts.  You get a zip of the source code (we will visit this again later).

[Figure: a BlueMix boilerplate]

Or you can build up your own infrastructure.  First, choose from a list of runtimes.

[Figure: BlueMix runtimes]

And then add services to your infrastructure.

[Figure: BlueMix services]

In my case, after a few clicks and less than a minute, I had a server with WAS Liberty and DB2 deployed and running the sample application.  I didn’t need a sysadmin to build me a server.  I didn’t need a DB administrator to install DB2 and create a database for me.  I didn’t need accounts created or ports opened.  All done seamlessly under the covers.  Point-and-click infrastructure assembly.  DevOps to the max.

But we need to develop our application (or enhance the boilerplate app), so we need a development environment. IBM offers JazzHub, a cloud-based development infrastructure.  JazzHub allows you to create a project that provides change management and source config management already set up and ready to go.

First, pick your source code management solution, Jazz or GIT.

[Figure: choosing source control in JazzHub]

Next, add some additional services, like auto-deploy to a BlueMix infrastructure.

And we have a project all set to go.  I can invite others to join my project and we can develop in the cloud as a team.  Here I have loaded the sample application source code into my JazzHub project.  I can modify the code right here if I want and push that code into my GIT master branch.

[Figure: the sample application code in JazzHub]

Or better yet, I can use Eclipse to develop my application using an IDE.  I have connected to my GIT repository and pulled the code down into my workspace.  I can use the GIT plugin to commit changes I have made to the GIT repository.

[Figure: the project in Eclipse]


And to tidy things up nicely, by turning on auto-deploy in my JazzHub project, every new push to my GIT repository by my team causes an immediate deployment to my BlueMix infrastructure.

[Figure: auto-deploy enabled in JazzHub]

Holy continuous delivery.  There is an awful lot going on under the covers here.  But like I said above, you are offloading risk to your PaaS solution.  The interesting thing is that the price is relatively modest.  With subscription-type pricing you get this solution relatively cheaply.  (Note: I am not in sales, so don’t ask me for a pricing quote.)  Customers now have a choice in pursuing their DevOps goals.  You can build from within by hiring smart people who have experience in the myriad of ever-changing open source DevOps tools, automate as much of the infrastructure creation and platform connectivity on your own, and hope that your smart people don’t get hit by a bus.  Or you can subscribe to a PaaS solution like this one (or others out there) and, to steal a Greyhound slogan, “leave the driving to us.”

I made this sound very simple and we know that there are lots of factors involved in determining the direction you go.  Some industries have a hard time with anything located outside of their walls due to regulatory issues or simply a fear of lack of control.  Some of the PaaS solutions will have on-premises options to allow you to bring the solution into your data center but your users won’t know the difference.  We all know that simple projects like this are not always the case.  The complex project portfolio of a large IT organization may require complex infrastructure that a PaaS solution cannot support.  But we are getting closer and closer to PaaS being a reality and I find it hard to believe that this isn’t a viable solution for a good portion of any typical IT application portfolio.

Taking Continuous Delivery to the Max

Now that we have some large UrbanCode customers under our belt, we can look at some of the metrics involved in deploying a continuous delivery solution like IBM UrbanCode Deploy.  There are definitely some hard measurements that can be taken.  You can easily look at a simple metric like the time it takes to perform deployments.  Time savings is the easy, valuable result that comes from automation.  Don’t forget to take into account the amount of time it takes to create the automation, but once it is in place, the more times it is utilized, the bigger the return on that investment.  Over the landscape of an enterprise and a duration of a year or two, your investment in automation in a continuous delivery solution can pay for itself.

But let’s be honest, automation has been around for years and no sys-admin is on the job for more than a day without building a script to automate something.  Automation has always been a valuable component to deployments.  Using a continuous delivery solution helps to capture that automation into reusable chunks so that it can be extrapolated across the enterprise.  But I will say that I have run across some organizations that have been pretty good at this before “continuous delivery” was first uttered.

So what are some other long poles in the deployment tent?  I once consulted at a customer that had a testing data center so large that you literally couldn’t see the opposite wall.  There was more hardware in that room (and consequently more power and cooling needs) than I had ever seen. Despite this, it was a six-month wait to get an available test environment.  You would think that with that much computing power under one roof you would have immediately available systems.  However, at any given time more than half of the systems in that room were in “transition” from one testing environment to another.  The process of provisioning an environment for a specific application (at that time) took a lot of manual labor, and your request was put on a queue that took time to get to.

So to me, the biggest bang for your buck in continuous delivery comes from combining deployment automation with system provisioning.  And taking it even a step further, provisioning a physical system is one thing, but Cloud solutions bring even more to the value proposition by removing the need for physical deployment targets.

Improvements to IBM UrbanCode Deploy 6.x have been made to help make integration with provisioning and the Cloud a standard capability.  I will spend more time on this in a future post or two, but here is the high-level process.

1.  Prepare the Cloud – a deployment pattern is created in the cloud catalog.  This pattern specifies the process of creating an infrastructure for an application.  The pattern codifies enterprise standards and ensures consistent infrastructure.  Part of the pattern should be installation of the IBM UrbanCode Deploy agent.  When the nodes of the pattern are booted up, they will communicate with the UrbanCode Deploy server.
2.  Import the Cloud pattern into UrbanCode Deploy – this will create a new Resource Template that has an Agent Prototype for each node in the pattern.  Properties that need  to be specified for the pattern are captured as UrbanCode Deploy properties.
3.  Create a new Application Blueprint that specifies the Resource Template created above.  The blueprint binds application information (components) to the Agent Prototypes in the template.
4.  Now create a new Application Environment based on the Application Blueprint.  You specify your Cloud connection properties as well as any properties needed by the Cloud pattern.

The result of all of this is a newly provisioned Cloud environment with your application deployed to it.  Nice.

In a future post or two, I will go into some of the specifics of this solution.  But needless to say, the value proposition of this solution is the promised land of continuous delivery.