UCD, UCD+P, ICO, and PureApp

As I mentioned in my last post, lots of changes are happening. I have continued to change roles and have now landed as a cloud advisor. I am excited, but enough about me.

What my new role has afforded me is the chance to continue exploring and understanding the various IBM cloud software solutions out there. It is an interesting landscape, and it is changing faster than ever. This post delves into IBM cloud provisioning and deployment solutions but leaves the on-premises/off-premises question OFF the table. For the most part, this discussion is concerned with the ability to automatically stand up an application environment regardless of its physical proximity to the developer. So before I dive into specifics, let's cover some general capabilities. The following list is not exhaustive; the products I talk about can do much more. These are just the capabilities I am interested in for this post.

For the purposes of this post, the deployment stack, shown here, is defined as the sum of all the layers of technology that must be created for a full application to execute.

[Figure: the deployment stack]

Provisioning – As I said before, in this post I am interested only in the ability to automatically stand up new environments. This also assumes some type of pattern or infrastructure-as-code mechanism to enable the automation.

Deployment – For the sake of this post, deployment refers to the automated deployment of an application, including its specific application configuration, onto a provisioned environment.

GO LIVE tasks – Those tasks that must occur for an application to GO LIVE that are above and beyond simply provisioning an environment and deploying the application: obtaining the appropriate approvals, ensuring the app is properly monitored, putting backups in place, applying proper security and endpoint compliance policies, setting up notifications for when things go wrong, etc. These important tasks are part of every operations team's responsibilities and have a large impact on production deployments.

Pattern – The ability to capture part or all of the stack definition in a reusable artifact. There are two pattern technologies that we will talk about: vSYS (virtual system) patterns and HEAT patterns.

Let's now take a look at the IBM tools currently in this space. Big disclaimer here: there are many, many additional solutions in the IBM portfolio, many of which are highly customized and include a services component. These types of solutions are highly desirable in a hybrid cloud scenario where you need brokerage services to serve the line of business, manage the provisioned environments across your hybrid landscape, and manage cost and chargeback across that same landscape. There are also outsourced solutions from IBM that target unique cloud platforms. For the purposes of our conversation today, we assume we have an IaaS cloud solution that we want to take advantage of. In lieu of a big description of each product, I will simply list the capabilities each provides (there are many more, but these are the ones of interest for this post).

IBM Cloud Orchestrator – provisioning, patterns, deployment, go live tasks, BPM

UrbanCode Deploy – deployment

UrbanCode Deploy with Patterns – provisioning, patterns, deployment

PureApplication – provisioning, patterns, deployment

So that doesn't help much. Lots of overlap and obviously lots of details under the covers. Each product does a thing or two very well, so let's look at the list again, and I will expand on each capability, highlighting its strengths.

IBM Cloud Orchestrator

provisioning – ICO currently can provision nodes based on both types of patterns. ICO can provision to many virtual environments, both OpenStack-based and not.
patterns – ICO currently supports two pattern technologies: HEAT and vSYS (virtual system). The vSYS patterns are the legacy pattern type. HEAT patterns are based on OpenStack and therefore require an OpenStack implementation. ICO has full editing capabilities for vSYS patterns; however, ICO does not provide an editor for HEAT patterns.
deployment – while ICO doesn't have a separate deployment capability, you can build application components and Chef scripts into your vSYS patterns that ultimately deploy applications as part of the provisioning process. However, this capability is not very scalable, which is precisely why deployment tools like UrbanCode Deploy were created. HEAT patterns, as defined by OpenStack, do not contain deployment-specific capabilities (more details below).
GO LIVE tasks – ICO has a large list of pre-configured integrations to common operations tools to manage your go-live tasks.
BPM – ICO has a robust BPM engine allowing you to craft a detailed process that can be initiated through a self-serve portal. This allows you to string together your provisioning, deployment, and GO LIVE tasks into a single user-driven process.

UrbanCode Deploy

deployment – UCD’s strength is application deployment including environment inventory and dashboarding.

UrbanCode Deploy with Patterns

deployment – UCD+P includes UrbanCode Deploy and relies on it to deploy application components.
patterns – UCD+P is a full HEAT pattern visual/syntax editor. UCD+P has also incorporated HEAT engine extensions that allow the HEAT pattern and the engine to not only provision nodes but also execute Chef recipes from a Chef server and deploy applications using UrbanCode Deploy. The resulting HEAT pattern is truly a full-stack representation, as depicted in the picture above, in a single pattern artifact.

PureApplication

provisioning – PureApplication software can provision nodes (I will leave it at that without going into all the different flavors; for the purposes of this discussion we are interested in PureApplication software that manages PureApplication systems).
patterns – PureApp has numerous vSYS patterns that help automate the installation and configuration of IBM middleware. The provisioning engine is robust and can orchestrate the configuration of the multiple nodes that make up an IBM middleware topology.
deployment – in the same sense as ICO, you can add application deployment information into your patterns, but the same limitations apply.

So if we cherry-pick the best capabilities from each tool, we would grab the go-live tasks and BPM from ICO, the app deployment from UCD, the HEAT pattern editing and HEAT engine extensions from UCD+P, and the IBM middleware patterns from PureApp. There, we are done. Maybe at some point in the future this will be a single PID that you can buy. But until then, is it possible to string these together in some usable way?

Again, a big disclaimer here: I am not an expert on this entire stack, but I hope to drive conversation.

OK, to begin, let's take advantage of the BPM capabilities of ICO and drive everything from there. The BPM capabilities allow us to construct a process that executes the provisioning tasks and the go-live tasks with the appropriate logic. You can envision a self-serve portal with a web page that asks for the specific information for a full stack deployment: the app, the type of topology to deploy the app on top of, the name of the new environment, etc. ICO would then need the appropriate pattern to provision from. Here is where ICO can "grab" the HEAT pattern from UCD+P via an integration. It will then execute the provisioning via the OpenStack HEAT engine. This HEAT engine must have the UCD+P HEAT engine extensions applied to it. Since these extensions contain application deployment capabilities, the provisioning process will also use UrbanCode Deploy to deploy the application components to the appropriate provisioned nodes based on the HEAT pattern. The BPM process can also call the appropriate operations products to execute the go-live tasks either before or after the provisioning step in the process. Whew!!
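
To make the moving parts concrete, here is a rough sketch of that flow in Python. Treat it as hedged pseudocode: the UCD+P blueprint path is a made-up placeholder for whatever the integration exposes, and only the stack-creation POST reflects the actual OpenStack HEAT API. The point is the order of operations, not the exact calls.

```python
import requests

UCDP = "https://ucdp.example.com"                      # hypothetical UCD+P server
HEAT = "https://openstack.example.com:8004/v1/tenant"  # HEAT API endpoint

def provision_full_stack(app_name, pattern_name, env_name):
    # 1. Fetch the HEAT pattern (blueprint) from UCD+P. This path is a
    #    made-up placeholder for whatever the integration exposes.
    template = requests.get(f"{UCDP}/blueprints/{pattern_name}").text

    # 2. Hand the template to the OpenStack HEAT engine (this POST is the
    #    real HEAT stack-create call). The engine must have the UCD+P
    #    extensions installed so the UCD resources in the template are
    #    understood; node provisioning and app deployment then happen as
    #    part of stack creation.
    requests.post(f"{HEAT}/stacks", json={
        "stack_name": env_name,
        "template": template,
        "parameters": {"application": app_name, "environment": env_name},
    }).raise_for_status()

    # 3. Go-live tasks (monitoring, backups, compliance) are separate BPM
    #    activities before or after this step, calling the operations tools.
```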

So what is missing? It would be great to take advantage of the PureApp IBM middleware patterns, which are bar none the best and most robust installation/configuration patterns available. A general solution here would be to include the appropriate Chef recipes in your HEAT pattern to get the middleware installed and configured; for non-IBM middleware this is your best bet. But there is a lot of orchestration involved in setting up WebSphere clusters, for example, that is not easily accomplished using Chef or Puppet. PureApp has this capability in spades. The best practice today is to use UrbanCode Deploy to deploy the app to a PureApp-provisioned environment, as the patterns in PureApp are not HEAT-based. A lot has been invested in the PureApp patterns, and what the future holds for other pattern technologies here is uncertain today. It is important to know that PureApp is the preferred solution when it comes to provisioning systems that will utilize IBM middleware.

This is the story so far and I am sure I have holes in my description above. I welcome any feedback and experiences.


UrbanCode Deploy with Patterns and its Integration with UrbanCode Deploy

Well, it has been quite a while since I have posted, and lots has happened in the past 3 months. The re-organization of IBM has caused some internal shakeup, but I have landed in the Cloud group, which is still the home of the IBM DevOps solution. So I get to keep my current trajectory. With that, though, I have tried to expand my horizons and learn some new things. I have started to look at UrbanCode Deploy with Patterns more closely. It will play a key role in the IBM Cloud software offerings and will be the center point of HEAT document editing.

To back up a bit, HEAT is the OpenStack project focused on orchestration. The mission of the OpenStack Orchestration program is to create a human- and machine-accessible service for managing the entire lifecycle of infrastructure and applications within OpenStack clouds. In other words, HEAT provides the ability to define a full stack application in a human-readable artifact. The HEAT engine then reads the artifact and farms out the necessary tasks to other parts of the cloud infrastructure to fulfill the parts defined in it.
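
To make that concrete, here is roughly what a minimal HEAT (HOT) template looks like, embedded in a small Python sketch that parses it with PyYAML. Only standard OpenStack resource types appear here; the image and flavor names are placeholders.

```python
import yaml

template = """
heat_template_version: 2013-05-23
description: One web node with an attached volume
resources:
  web_node:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04    # placeholder image/flavor names
      flavor: m1.small
  web_data:
    type: OS::Cinder::Volume
    properties:
      size: 10               # GB
"""

doc = yaml.safe_load(template)
for name, resource in doc["resources"].items():
    print(name, "->", resource["type"])   # e.g. web_node -> OS::Nova::Server
```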

UrbanCode Deploy with Patterns (UCD+P) provides a rich editor that allows you to craft HEAT documents in a visual way. It also defines some custom extensions to HEAT that provide additional capabilities necessary for full stack functionality. Let's examine this in more detail.

UCD+P provides extensions that allow you to assign components from UrbanCode Deploy (UCD) to nodes in the blueprint. The component drawer of the resource palette provides the list of components available to the user that can be applied to nodes.

[Figure: the component drawer in the resource palette]

The entries in this drawer come directly from the UCD integration. The first time you drag a component onto the blueprint canvas, you are presented with a dialog that confirms the application you want to associate with the blueprint; once the blueprint is provisioned, you will end up with a new environment of that application in UCD. Remember that components can be included in more than one application, and this dialog therefore allows you to choose which one.

[Figure: the application selection dialog]

Here is a picture of a node after a few components have been assigned to it.

[Figure: a node with several components assigned]

Now let's switch over to the blueprint source view and take a look at the UCD information that is included. First we see the blueprint parameters. Notice the UCD values that are necessary when provisioning the blueprint.

[Figure: blueprint parameters]

Next we can look at the component information. We see two different resources defined. The WAR file deploy component defines the type of server it is being installed on. The version property is important: UCD+P defaults to the LATEST component version. This is fairly unrealistic; there are most likely a limited number of scenarios where the LATEST version of every component is installed. More likely you would want a snapshot. Unfortunately, as of the writing of this post, there is no way to specify a snapshot name. This would be easy to add to the "choose an application" dialog, since snapshots are associated with applications. This will be my first UCD+P enhancement request. I encourage anyone reading this to also chime in on this RFE.

A few other details to note are found in the component configuration resource. Here we see the name of the component and the component process to use. Note that the process name defaults to "deploy"; I believe it assumes you will have a process named "deploy." You can change it if you want to use another defined process. This again would be easy to add to the user interface when dragging the component onto the blueprint canvas, but it isn't as critical, as users typically define a default deployment process named "deploy." Also note the input property that is required (JKE_DB_HOST). This is defined in UCD as a component environment property and is picked up via the UCD integration. It is not clear whether, for example, a required process property or some other property would be picked up as well. Note that this property can be associated with a UCD+P property that is then gathered at provision time.

[Figure: the component deploy and configuration resources in the source view]

The UCD+P integration with UCD is very powerful, and these HEAT extensions allow us to define storage, compute, and networking information along with UCD component info in the same blueprint, making this a true full stack artifact.
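
To give a feel for what that source view contains, here is a hedged sketch of the two resources described above, embedded as a string. The IBM::UrbanCode resource type and property names are approximations from memory, not a verified schema; compare them against the source view of your own blueprint.

```python
# What the source view of a blueprint with UCD components roughly looks like.
# Type and property names below are approximate, not a verified schema.
blueprint_source = """
resources:
  jke_war_deploy:
    type: IBM::UrbanCode::SoftwareDeploy::UCD    # approximate type name
    properties:
      apply_config: { get_resource: jke_war_config }
      server: { get_resource: web_node }         # the node the WAR lands on
      version: LATEST                            # the default discussed above
  jke_war_config:
    type: IBM::UrbanCode::SoftwareConfig::UCD    # approximate type name
    properties:
      name: JKE.war                              # the UCD component
      process: deploy                            # assumes a process named "deploy"
      inputs:
        JKE_DB_HOST: { get_param: jke_db_host }  # required component env property
"""
```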

Now I am sure you can imagine some things that are missing in this picture. I will give you some hints. First, it looks to me, in the example I have shown, that the middleware installation (and non-application configuration) is being performed by UCD. I am not sure this is the best place to perform this type of task, and maybe there is a better way to do it. Also, it won't take long talking with an operations team to identify additional activities that are necessary in a production "Go Live" scenario. Stay tuned for additional posts on things coming down the pike to fill in some of the gaps in the full stack picture.

RTC Build Traceability with UrbanCode Deploy

There have been great developments in the past 12 months or so to improve the traceability between builds managed by RTC and UrbanCode Deploy.

  1. RTC build definitions now have built-in support for post-build deployment steps. If you use the Jazz Build Engine as your build engine, you can specify connection information and component information in the build definition itself, allowing the build process to push the build result into a new component version. This is great and makes getting your builds to the deployment engine simple and easy. See Freddy's blog entry from about a year ago on this very topic for more details. There are of course some limitations to this feature. It assumes you want to push your build result into a single UrbanCode Deploy component. This is fairly unrealistic, in that most applications of even modest complexity will have more than one component. If you build each component individually, then you are all set. However, if you have a single build file for the entire application (or for more than one component), this feature won't cut it. To solve this, you can always use the old tried-and-true method of adding ANT tasks to your build.xml to create a new component version and push the contents into that version.

    From a forward traceability perspective, RTC build results records also have links that you can add. A common practice is to create a link to the UrbanCode Deploy component that gets created.
  2. UrbanCode Deploy also has a feature that allows you to create links associated with component versions. These links can be URLs to anything you want, but the best use of this feature is to provide a link back to the build entry that produced the component version. In the case of RTC, a simple link to the build result record gives you that backward traceability. If you use another build engine, Jenkins for example, you can also create a link that points back to the Jenkins job that produced the build.

     [Figure: component version links in UrbanCode Deploy]

    And there are REST and command line API calls that allow you to GET and PUT links. This makes creating the link something you can do as part of the build step that pushes a new component version to UrbanCode Deploy (see the sketch after this list).
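
As a concrete, though hedged, example, here is that push-and-link sequence driven from Python via the udclient command line rather than ANT. The subcommand names (createVersion, addVersionFiles, addVersionLink) are from my memory of the CLI of this vintage; verify them against your server's CLI reference, and the server URL and credentials are obviously placeholders.

```python
import subprocess

def push_and_link(component, version, files_dir, build_result_url):
    base = ["udclient", "-weburl", "https://ucd.example.com:8443",
            "-username", "admin", "-password", "passwd"]   # placeholder creds

    # Create the new component version...
    subprocess.run(base + ["createVersion",
                           "-component", component, "-name", version], check=True)
    # ...upload the build output into it...
    subprocess.run(base + ["addVersionFiles",
                           "-component", component, "-version", version,
                           "-base", files_dir], check=True)
    # ...and add the backward-traceability link to the RTC build result.
    subprocess.run(base + ["addVersionLink",
                           "-component", component, "-version", version,
                           "-linkName", "RTC build result",
                           "-link", build_result_url], check=True)

push_and_link("jke.war", "20141120-1203", "build/output",
              "https://rtc.example.com:9443/ccm/web/projects/JKE%20Banking"
              "#action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw")
```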

So both RTC and UrbanCode Deploy have the ability to create and maintain links to and from build result records and component versions. But let's examine a bit more closely how these links can be used to enhance the visibility of code as it walks down the release pipeline.

But first, a quick aside. RTC has the concept of baselines, and UrbanCode Deploy has the concept of snapshots. At some point in time, someone needs to lay out some strategies for how these two concepts can be used together. A baseline in RTC collects all of the code versions of all the files in all the selected components (RTC components here) into a single entity. This makes sense to do at build time. UrbanCode Deploy's snapshot essentially does the same thing, but at the build-output level. There may be additional components in a snapshot that don't have a corresponding build equivalent, but marrying the two concepts is something I hope the Rational development teams consider. Being able to build all components of an app, take an RTC snapshot of the code that corresponds to that build, push all the components into UrbanCode Deploy as new component versions, and create a snapshot of the application with the new component versions seems like a logical thing to do. I am sure there are many scenarios where this simple case breaks down, but the concept is solid. Adding links to more element types in UrbanCode Deploy will help make this a reality.

OK, now back to our story. We have bi-directional traceability between RTC build results and component versions via links on either side. But as the components get deployed to new environments on their way down the release pipeline, there is no indication of that movement in the RTC build record. How can we make this happen? Let's build a plug-in.

The concept of the plug-in is that each time a component version gets deployed to a new environment, we put some type of indication in the build result record that it has been deployed to environment X. This way, anyone observing the build result record will know how far into the release pipeline, if at all, the build output has made it. The easiest way to do this is to use the tag field of the build result record. Tags can be anything, so let's add tags to indicate deployment to environments.

In order to do this programmatically, we need to take advantage of the RTC Java API. For any release of RTC, you can get a zip file of the Java API jars that can be used to write Java programs that manipulate RTC elements. There were a few jazz.net articles here and here that I used to help come up with the code I wrote. I also made the plug-in step that adds the tag a two-step Java affair (the plug-in calls a Java program that collects properties and then spawns another Java process to do the work). This was for a specific reason: the code that does the work needs the RTC Java API jar files on its classpath. For the latest release of RTC, there are about 100 jar files in that classpath list, and I didn't want to include that entire set in the plug-in. Instead, I ask the user to provide a path to the directory that holds these jar files and spawn a second Java process that includes this directory in its classpath. This also keeps the plug-in RTC version-independent (as long as the Java API for builds doesn't change much) and puts the onus on the user to obtain those jars.
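
Sketched in Python (the real step would be whatever scripting your plug-in framework uses), the spawning trick looks something like this. The jar name and main class are hypothetical stand-ins for what your plug-in ships.

```python
import subprocess

def run_tagger(java_api_dir, rtc_url, user, password, build_uuid, tag):
    # The '*' classpath wildcard (Java 6+) pulls in all ~100 API jars from
    # the user-supplied directory; use ';' instead of ':' on Windows.
    cmd = [
        "java", "-cp", f"tagger.jar:{java_api_dir}/*",
        "com.example.rtc.BuildResultTagger",   # hypothetical main class
        rtc_url, user, password, build_uuid, tag,
    ]
    subprocess.run(cmd, check=True)            # fail the step on non-zero exit
```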

So I created a plug-in step that adds a tag to the list of tags on an RTC build result record. The information needed for this step is relatively small: the RTC URL, a username and password, the UUID of the build result record, and the tag to add. The Java for this took a bit of time, but again, using existing examples made things much easier.

The other step in the plug-in gets the RTC build result ID from the component version link. The URL to an RTC build result looks something like this:

hostname:9443/ccm/web/projects/JKE%20Banking%20(Change%20Management)
#action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw

The id fragment at the end is the UUID of the build result record. So the plug-in step needs to get the link from the component version, parse it to get the build record UUID, and then create an output property to hold that value so the previously mentioned Java step can use it. There is a REST API call that can be used to get the link from a component version. Groovy can then easily parse the full URL and retrieve the id.
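
The actual plug-in step did this parse in Groovy; here is the same idea sketched in Python for illustration, using the example URL above.

```python
from urllib.parse import urlparse, parse_qs

def build_result_uuid(link):
    # The UUID lives in the URL fragment:
    # "action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw"
    fragment = urlparse(link).fragment
    return parse_qs(fragment)["id"][0]

link = ("https://hostname:9443/ccm/web/projects/JKE%20Banking%20(Change%20Management)"
        "#action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw")
print(build_result_uuid(link))   # _PYarAGq4EeS_krMxQ9smpw
```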

So here is a working example. I envision that at the end of a component deployment process you would add these two steps to update the build result record.
[Figure: the two plug-in steps at the end of a component deployment process]

Here are the details for the first step, GetBuildId. It takes the parameters mentioned above. Using component name and version name properties makes this step dynamic. The link name comes from the process of creating the link through an ANT task when building the component. And the output property is simply whatever you want to call it.

[Figure: properties for the GetBuildId step]

Here are the properties for the second step. The RTC parameters are self-explanatory. The build result record UUID is the property set by the previous step. The Java API location is where the user has unzipped the RTC Java API files on the agent server. The tag value uses the current environment name property so that this step works for any environment.

[Figure: properties for the tagging step]

Both of these steps work in my environment without a hitch. If you are interested in trying it out, you can get the plug-in source code here in this IBM DevOps Services project. Let me know how it goes.

UrbanCode Deploy and SmartCloud Orchestrator (Part 3)

In Part 1 we connected to the cloud provider and created a resource template from a virtual system pattern. In Part 2 we created an application blueprint from the resource template, mapped the application components to the blueprint, and then created a new application environment from the blueprint, which spun up the new virtual system. Once the environment is created and the agents come online, you can execute your normal application deployment process onto the newly provisioned environment.

In Part 3, we will go one step further and explore what it would take to satisfy the ultimate goal of providing a true self-service environment creation mechanism for developers. Exploring this a bit further, let’s take a closer look at the use case.

The promise of cloud is that you can have readily available systems at the drop of a hat, or at least much, much faster than ever before. As a developer, I have some new code that I want to test in an isolated environment (we will explore some of the subtle but challenging details behind this idea at the end). It would be awesome if I could go to some portal and request that a new environment be provisioned and my application deployed to it, without any need for knowledge of SCO or UrbanCode Deploy. Well, this capability exists today.

To begin with, SmartCloud Orchestrator has a robust business process engine that allows you to create self-service capabilities with no need to understand what is under the covers. I have no experience in this but have seen the results. You can create processes and human tasks that can be executed from the SCO website. You then categorize your various self-serve processes.

[Figure: SCO self-service processes]

The good part about this is that you have at your disposal a full development environment and run-time that can utilize existing programming concepts. Of course, we will have to take advantage of the UrbanCode Deploy REST API or command line to drive a deployment process.

Before going on, I want to confess that I have not had the opportunity to get this entire flow working from A to Z. I have not had access to a SCO environment and enough free time to make it all work. However, I am putting it out there because I believe it is doable.

In order to satisfy our desire to have a fully provisioned environment with an application deployed, we need to set up a process that can do the job. We can use a generic process to get our work done. There is a REST API call that can kick off a generic process, and our SCO self-service process can use it to drive everything. In principle, our generic process can look something like this:

[Figure: the generic provisioning process]

The first step is to provision the environment. This step requires the environment name, the application name, and the blueprint name. These must be passed into the process, so you need to create process properties that can be referenced by this step. NOTE: when we provisioned an environment using the UrbanCode Deploy GUI, it asked us for information that the cloud provider needs. I am not sure how that info is passed here. There is a new command line option called getBlueprintNodePropertiesTemplate, and its description says that it "returns a JSON template of the properties required to provision a blueprint." This would need to be used and populated to ensure that all of the information is passed to the environment creation process. You might need to extend the create environment step to interrogate the blueprint, get the necessary properties, and ensure they are all populated. If anyone out there has tried this, let me know what you find.

The other challenge we have here is that we need to ensure the environment name is unique. There is an option in this step to do exactly that; the plug-in step simply appends a random string to the end of the environment name. This poses a problem for the next step.

Step two is to wait for the environment to be provisioned. We need to wait for the resources (agents) that come online once the provisioned nodes are spun up. If you remember, the agent names follow a pattern. However, if we allow the previous step to make the environment name unique, we will not be able to predict the agent names. Therefore, our self-service call to this process needs to specify the environment name and ensure it is unique.

Secondly, we need to somehow determine how many agents to wait for and their exact names. This is a challenge, and as of right now I am not sure how I would solve it. It would most likely require a new plug-in that can interrogate a blueprint, get the names of the agent prototypes, and then construct the list of agents to wait for. Again, some plug-in development is required here.
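
Once you can enumerate the agent prototypes, the waiting logic itself is straightforward. Here is a sketch; the agent_is_online callback is a stand-in for whatever check you implement (a REST call, udclient, or an existing plug-in step), and only the naming convention comes from part 2 of this series.

```python
import time

def wait_for_agents(app, env, prototypes, agent_is_online, timeout_s=1800):
    # Expected names follow the <application>_<environment>_<agent prototype>
    # convention covered in part 2.
    pending = {f"{app}_{env}_{proto}" for proto in prototypes}
    deadline = time.time() + timeout_s
    while pending and time.time() < deadline:
        pending = {name for name in pending if not agent_is_online(name)}
        time.sleep(30)                     # poll every 30 seconds
    if pending:
        raise TimeoutError(f"agents never came online: {sorted(pending)}")
```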

Once the agents have come up, we can deploy our application. This step is easy enough; we can call a well-known deploy process for the application to do the installation. But there is another challenge here. Deployments can either be done using a snapshot, or you have to specify the versions of each component. Snapshots are easy, but if we remember back to our original idea, a developer has some new code that he/she wants to test. Typically, snapshots are not created until a group of components has been tested together. So we have a choice: we can have the environment provisioned from an existing snapshot and then manually add our own updates to test, or we have to provide some mechanism to update/copy an existing snapshot to include a baseline plus the new stuff. This could take on lots of forms, but a thorough examination of the use case would be required to develop a solution. This also may not be the use case we care about.

One additional option would be to go a much more custom route and write an application that does this instead of relying on existing plug-in steps. The REST API and command line API are very rich, and we can ultimately get to and produce all of the information we need. But it is nice to rely on existing capabilities, and processes are much more robust and flexible than a custom solution. Still, as we have seen above, there are enough nuances in this effort requiring plug-in extensions or new plug-ins that it might make sense to go the fully custom application route.

Happy self-service!!! Let me know if anyone takes on this challenge.

UrbanCode Deploy and SmartCloud Orchestrator (Part 2)

Back to our story. In Part 1, we connected to our cloud provider (SCO) and created a resource template from a virtual system pattern. We then organized our resource template and created some resource properties that will no doubt become useful in component processes that will deploy to this pattern.

The next step is to switch perspectives and put our application hat on. Applications are deployed to environments, and UrbanCode Deploy allows us to create environments from resource templates. But first we have to map our application to the resource template. This is done via an application blueprint, a one-time exercise for any given application/resource template combination. Many environments can be created from a single application blueprint.

Creating a blueprint from a resource template is very easy. On the blueprints tab of an application, you simply click the Create New Blueprint entry, give the blueprint a name and description, and choose the resource template you want to map to. The blueprint is created.

You now need to map your application to the blueprint. This involves simply assigning your application components to the agent prototypes. In our example, we have a simple 3-tiered application that maps easily to our 3-tiered pattern, so we map each of our 3 components to the 3 tiers. After mapping, the blueprint looks like this.

[Figure: the blueprint after mapping the 3 components to the 3 tiers]

The database component maps to the database tier node, the web service component maps to the application server tier node, and the web component maps to the web tier node. This makes sense. But what if we had different-sized virtual system patterns and corresponding resource templates?

For example, what if we had a larger pattern with a 3-node app server tier? Our blueprint mapping would look like this: we simply map the web services component to all 3 of the nodes in that tier.

[Figure: mapping the web services component to all 3 app server nodes]

Or maybe, at the other end of the spectrum, we have a small pattern with only a single node. In this case we could create a blueprint that maps all 3 components to the same node.

[Figure: mapping all 3 components to a single node]

Mapping our application to a resource template using a blueprint now gives us the final piece we need to create a new application environment.

[Figure: creating a new environment from the blueprint]

The process of creating an environment from a blueprint causes a new virtual system to be created from the virtual system pattern. You may have to select a location in your cloud provider for the nodes. There may also be some properties that the pattern requires for each node (if you have done any SCO work, you know there are typically many property values needed for the various script packages you include in a node pattern).

[Figure: cloud and pattern properties gathered when creating the environment]

Once you click Save, UrbanCode Deploy kicks off the provisioning of the new nodes that make up the new environment in SCO. We can now go check the resources and see what agents we are waiting for.

[Figure: resources waiting for agents to come online]

As you can see, we have 3 agents that are waiting to come online. The agent names are important. You can see that they follow a pattern:  <application>_<environment>_<agent prototype>.

This is valuable information to have as you will most likely have to get agent properties as part of your deployment process. You can use the Get Agent Property component process step to get an agent’s IP address, for example. When specifying the agent, you would use something like the pattern:

${p:application.name}_${p:environment.name}_${p:resource/ws.node}

Remember in part 1 when I mentioned that I like to capture the agent prototype names as properties in the top-level folder of the resource template? This is where they come in handy.
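
To illustrate, here is how that pattern resolves at deployment time, mimicked with plain string substitution in Python. The property values are made up for the example; ws.node is the resource property from part 1 that holds the app server tier's agent prototype name.

```python
# Mimic UCD property substitution with plain string joining; the values in
# this dictionary are illustrative stand-ins for the real runtime properties.
props = {
    "application.name": "JKE",
    "environment.name": "SIT1",
    "resource/ws.node": "WebTier",   # agent prototype name from the template
}
agent_name = "_".join([props["application.name"],
                       props["environment.name"],
                       props["resource/ws.node"]])
print(agent_name)   # JKE_SIT1_WebTier
```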

Once the agents come online (if you put the agent script package at the end of each node in the pattern), then you can be sure that the environment is ready for deployment. You can now execute your normal application deployment process to your newly created environment.

Pretty simple and pretty efficient. In part 3, we will explore how you can create a developer self-serve portal where developers can request environments with no interaction with UrbanCode Deploy.

UrbanCode Deploy and SmartCloud Orchestrator (part 1)

The promise of self-serve environments is closer than ever, and by combining UrbanCode Deploy and SmartCloud Orchestrator, you can pretty much achieve utopia. So let’s examine the integration.

The job of SmartCloud Orchestrator is to build virtual system patterns that are used to spawn virtual systems. The beauty of this solution is that these patterns can represent the established deployment platforms for various technologies. For example, you can build a pattern that represents the standard web application topology for a given enterprise. This makes it easy for application teams to understand what they will be provided and what they must target for their applications. It also makes the self-serve environment story achievable: I simply spin up a pattern and I have an environment ready to go.

But let's examine in a bit more detail the application stack and the responsibilities of the cloud solution versus the application deployment solution.

[Figure: the application stack]

The job of the cloud solution is to utilize its virtualization environments to create nodes with specified storage, memory, and processors; install the OS on top of the raw node; and install the various middleware solutions that are necessary. The interesting part is the middleware configuration. It is my contention that the cloud solution should provide a configured middleware platform ready to accept an application. Using a WebSphere example, that means WAS is installed on all of the nodes, the necessary profiles are created, the node agents are installed across all nodes and connected back to the deployment manager, and clusters are created (if necessary). This is the value of the orchestration part of SCO. The WAS configuration can do nothing in this state, but it is ready to accept an application. It is now UrbanCode Deploy's job to install and configure the application onto the resulting virtual system.

There are a few things to keep in mind when creating virtual system patterns. Work with your pattern architect to ensure that these items are accomplished. First, you need to have the agent installed on each node. Included in the UrbanCode Deploy plugins zip file is an agent package that can be added to SCO and then added to each node in your pattern. The agent name will be specified later, but the UrbanCode Deploy server and port need to be hard-coded into the pattern. Also, ensure that the agent installation is done as close to the end of the node stack as possible: once the agent is installed, running, and online in the UrbanCode Deploy server, you can assume your node is ready to go. Finally, work with your pattern architect to ensure he/she understands the details that are important to you. For example, you may need to know the installation location of your middleware solution; that location is important to the installation process, and the pattern engineer should know it can't be changed without some notification.

So now let's assume our pattern is ready to go and meets our needs. We can now integrate UrbanCode Deploy with SCO to quickly and easily create environments. Step one is to make a connection to our cloud provider. This is easily done via the Resources tab in UrbanCode Deploy. Get the credentials from your cloud administrator; you need a user with permissions to view and instantiate virtual system patterns.

[Figure: creating a cloud connection]

Once you have your cloud connection, you can create a resource template. Resource templates are the integration point between SCO and UCD: a resource template represents the virtual system pattern in UrbanCode Deploy. The resource template is where you spend your time thinking about how this pattern will be used by applications in UrbanCode Deploy.

[Figure: creating a resource template]

Creating the initial resource template from SCO is easy. On the resource template page you have the option to import a resource template from the cloud. Using your existing cloud connection, you can interrogate SCO for its list of virtual system patterns and pick the one you are interested in. Once completed, you get a simple template with each node of the pattern represented by an agent prototype.

Notice the agent prototype names in the resulting template below. These names may mean something to a pattern engineer, but they don't mean much to an application architect. This is another area where you can work with your pattern engineer to provide meaningful node names.

[Figure: the imported resource template]

Now is when you need to think about the deployment processes that will use this pattern and flesh out the template so that it is usable by applications. The first step is to better organize your template. Application teams will need to map their application components to the resource template via an application blueprint, so make it easy on them and create folders to hold the agent prototypes. These folders serve two purposes. First, they help categorize the nodes and make the purpose of each node clear. In our example above there are two nodes that look exactly alike, but one actually has a middleware solution installed on it and the other has a database; there is no way to tell from their names. Use folders to organize your template, something like this.

[Figure: the resource template organized with folders]

I also added a top-level folder in my example here. I should have named it something better: when an environment is created from this template, this folder becomes the top-level folder in your resource tree, so name it something valuable.

The other reason for adding a folder structure to your resource template is to be able to include properties as part of the template. This is where you can make your template valuable to applications. Remember that this single template can be used by many applications; after all, a standard topology should be the default for all applications of a particular technology type. Put your deployment process designer hat on and think of the properties that a deployment process will need. For example, you may want to expose an install location for a middleware solution, a manager URL for Tomcat, or a port number. Put those properties for each node in the folder that holds the agent prototype. The resource properties are then readily available to any component process that uses this template.

Also, in the top-level folder (Top in my example), I typically choose to include properties that identify the agent prototype names for all nodes in the pattern. In a multi-node situation you will typically need, for example, the IP address of the database node as part of the deployment process to the app server node. By having the agent name readily available as a property, you can easily interrogate that agent's properties in a single step to find its IP address.

In the next installment of this series, we will map an application to this resource template using an application blueprint. Happy deploying!!


Continuous Deployment and Databases

Dealing with databases within a continuous deployment strategy can be challenging. Databases do not subscribe to the same build/deploy concepts that applications do.

  1. It is recommended that builds produce the complete, deployable application each and every time. Databases are not re-created or re-deployed each time; only changes are applied.
  2. Rollbacks are easy with applications: simply re-deploy the previous version. Rollbacks are hard with databases and must be taken into account at all times.
  3. Application file names typically stay the same from build to build. Database update file names (SQL scripts) do not. Each update may produce a unique set of SQL scripts whose names are typically not relevant to any other version.
  4. The initial installation of an application should use the same process as an update. Initial installations of a database are usually not scripted and are usually handled outside of the normal deployment process (i.e., the DBAs will handle it!!).


And there are no silver bullets or magic dust when it comes to automating database deployments and integrating them into your continuous delivery strategy. Here are some things to think about.

Use an industry solution – This is not a new problem, and others have created solutions to help. Liquibase is one such solution. Solutions like this make the database version-aware: typically a table is inserted into the database to keep track of each update version, the scripts that got it to that version, and rollback scripts to get it back to the previous version. Using a solution like this makes continuous deployment for databases more concrete and easier to build into a strategy.
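
For instance, driving Liquibase from a deployment step can be as simple as the sketch below. Flag spelling varies across Liquibase versions, so treat the exact arguments as an assumption to check against your version's documentation; the changelog path and JDBC details are placeholders.

```python
import subprocess

def liquibase_update(changelog, jdbc_url, user, password):
    subprocess.run([
        "liquibase",
        f"--changeLogFile={changelog}",   # ordered changesets plus rollbacks
        f"--url={jdbc_url}",
        f"--username={user}",
        f"--password={password}",
        "update",                         # applies only changesets not yet recorded
    ], check=True)

liquibase_update("db/changelog.xml", "jdbc:db2://dbhost:50000/JKE",
                 "dbuser", "secret")      # placeholder connection details
```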

Focused SQL scripts – If you don't go the route of something like Liquibase, then a more structured and focused effort needs to go into your SQL scripts to ensure automation success. Here is a typical situation I run into: a DBA is brought into an UrbanCode Deploy proof-of-concept and hands over a set of SQL scripts that coincide with an application version (which is sometimes a cause for celebration in itself). However, these SQL scripts are typically run by humans. A human watches each script and its results and proceeds to the next script in the sequence if the previous one was successful. First of all, a human is in charge of determining whether a script succeeded, and you cannot rely on the script erroring out to indicate failure. You may need to run a query to get a value from a table and then create something new based on the results, or count the rows in a table to determine how many of something needs to be created. Second, a human determines what script to run next. There may be no indication of execution order in the script file names; the only one who knows what comes next is the human.

So in order to make database updates automatable, you have to put some discipline into your script writing. A script needs to make all of its decisions programmatically with no human intervention. It should error out if a problem occurs, and a successful return status should mean that all is good. There should also be some mechanism for programmatically determining the order in which scripts run: either use a file-name pattern or supply a separate file that lists the order of execution (see the sketch below). Rollback scripts also need to be provided.
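
One minimal way to encode that discipline, sketched below: a manifest file lists the scripts in execution order, the runner stops at the first non-zero exit, and a mirrored rollback manifest would follow the same pattern. The db2 invocation is just an example client; substitute your database's command line, and the manifest path is made up.

```python
import subprocess, sys
from pathlib import Path

def run_scripts(manifest):
    # The manifest lists SQL scripts one per line, in execution order.
    for line in Path(manifest).read_text().splitlines():
        script = line.strip()
        if not script or script.startswith("#"):
            continue                      # allow blank lines and comments
        print(f"executing {script}")
        result = subprocess.run(["db2", "-tvf", script])  # example client
        if result.returncode != 0:
            # Non-zero exit is the only failure signal we trust; stop here
            # and let the rollback manifest take over.
            sys.exit(f"{script} failed with status {result.returncode}")

run_scripts("update-1.4/apply.manifest")  # hypothetical manifest path
```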

UrbanCode Deploy can handle either approach very well. It has a plug-in for an open source solution similar to Liquibase (the DBUpgrader plug-in). Or you can follow the disciplined approach and have UrbanCode Deploy execute SQL scripts in a prescribed order. Rollback scripts should follow the same pattern, and UrbanCode Deploy can have a rollback process for the database component as well.

Including database updates in a continuous deployment strategy is a good thing, but is easier said than done. It requires some forethought and a strategy. Getting the DBAs to the table may be the biggest challenge to overcome 🙂