RTC Build Traceability with UrbanCode Deploy

There have been great developments in the past 12 months or so to improve the traceability between builds managed by RTC and UrbanCode Deploy.

  1. RTC build definitions now have built-in support for post-build deployment steps. If you use the Jazz Build Engine as your build engine, you can specify connection and component information in the build definition itself so that the build process pushes the build result into a new component version. This makes it simple to get your builds to the deployment engine. See Freddy's blog entry from about a year ago on this very topic for more details. There are, of course, some limitations to this feature. It assumes you want to push your build result into a single UrbanCode Deploy component, which is fairly unrealistic: most applications of even modest complexity have more than one component. If you build each component individually, then you are all set. However, if you have a single build file for the entire application (or for more than one component), this feature won't cut it. To solve this, you can always use the tried-and-true method of adding ANT tasks to your build.xml that create a new component version and push the contents into that version (a command-line equivalent is sketched just after this list).

    From a forward traceability perspective, RTC build result records also have links that you can add. A common practice is to create a link to the UrbanCode Deploy component version that gets created.
  2. UrbanCode Deploy also has a feature that allows you to create links associated with component versions. These links can be URLs to anything you want, but the best use of this feature is to provide a link back to the build entry that produced the component version. In the case of RTC, a simple link to the build result record gives you that backward traceability. If you use another build engine, Jenkins for example, you can also create a link that points back to the Jenkins job that produced the build.
    And there are REST and command line API calls that allow you to GET and PUT links. This makes the creation of the link something you can do as part of the build step that pushes a new component version to UrbanCode Deploy (see the sketch just below).
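
To make that concrete, here is a minimal sketch of the build-side push, driving the udclient command line from Groovy (the ANT tasks and REST API expose the same operations). It assumes udclient is on the PATH and DS_WEB_URL / DS_AUTH_TOKEN are set in the environment; the component name, version, and flag spellings are illustrative, so verify the commands against 'udclient help' for your release:

    // Run a udclient command and fail the build if it fails
    def run = { List<String> cmd ->
        def proc = cmd.execute()
        proc.waitForProcessOutput(System.out, System.err)
        if (proc.exitValue() != 0) throw new RuntimeException("Failed: ${cmd.join(' ')}")
    }

    def component = 'web-tier'        // hypothetical component name
    def version   = 'I20141120-1200'  // e.g. the RTC build label
    def buildUrl  = 'https://hostname:9443/ccm/web/projects/JKE%20Banking%20(Change%20Management)#action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw'

    // Create the version, upload the build output, and link back to the RTC build result
    run(['udclient', 'createVersion',   '-component', component, '-name', version])
    run(['udclient', 'addVersionFiles', '-component', component, '-version', version, '-base', 'build/output/web-tier'])
    run(['udclient', 'addVersionLink',  '-component', component, '-version', version,
         '-linkName', 'RTC Build Result', '-link', buildUrl])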

So both RTC and UrbanCode Deploy have the ability to create and maintain links to and from build result records and component versions. But let's examine a bit more closely how these links can be used to enhance the visibility of code as it walks down the release pipeline.

But first a quick aside. RTC has the concept of baselines and UrbanCode Deploy has the concept of snapshots. At some point someone needs to lay out some strategies on how these two concepts can be used together. A baseline in RTC collects all of the versions of all the files in the selected components (RTC components here) into a single entity. This makes sense to do at build time. At the same time, UrbanCode Deploy has the concept of a snapshot that essentially does the same thing but at a build output level. There may be additional components in a snapshot that don't have a corresponding build equivalent, but marrying the two concepts is something I hope the Rational development teams consider. Being able to build all components of an app, take an RTC snapshot of the code that corresponds to that build, push all the components into UrbanCode Deploy as new component versions, and create a snapshot of the application with the new component versions seems like a logical thing to do. I am sure there are many scenarios where this simple case breaks down, but the concept is solid. Adding links to more element types in UrbanCode Deploy will help make this a reality.

OK, now back to our story. We have the bi-directional traceability between RTC build results and component versions via links on either side. But as the components get deployed to new environments as they move down the release pipeline, there is no indication of that movement in the RTC build record. How can we make this happen? Let’s build a plug-in.

The concept of the plug-in is this: each time a component version gets deployed to a new environment, put some type of indication in the build result record that it has been deployed to environment X. This way, anyone observing the build result record will know how far into the release pipeline, if at all, the build output has made it. The easiest way to do this is to use the tag field of the build result record. Tags can be anything, so let's add tags that indicate deployment to environments.

In order to do this programmatically, we need to take advantage of the RTC Java API. For any release of RTC, you can get a zip file of the Java API jars that can be used to write Java programs that manipulate RTC elements. A few jazz.net articles (here and here) helped me come up with the code I wrote. I also made the plug-in step that adds the tag a two-stage Java affair: the plug-in calls a Java program that collects properties and then spawns another Java process to do the work. This was for a specific reason. The code that does the work needs the RTC Java API jar files on its classpath, and for the latest release of RTC there are about 100 jar files in that list. I didn't want to include that entire set in the plug-in. Instead, I ask the user to provide a path to the directory that holds these jars, and I spawn a second Java process that includes this directory in its classpath. This also keeps the plug-in RTC version-independent (as long as the Java API for builds doesn't change much), and it puts the onus on the user to supply those jars.
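
Here is a minimal sketch of that first stage, assuming the step has already read its input properties into a props map. The worker class and jar names are hypothetical:

    // Build a classpath from every jar in the user-supplied RTC Java API directory,
    // then spawn the worker JVM. This keeps the API jars out of the plug-in itself.
    def apiDir = new File(props['javaApiLocation'])          // e.g. /opt/rtc-java-api
    def jars   = apiDir.listFiles().findAll { it.name.endsWith('.jar') }
    def classpath = (jars*.absolutePath + ['worker.jar']).join(File.pathSeparator)

    def cmd = ['java', '-cp', classpath, 'com.example.rtc.AddBuildResultTag',  // hypothetical worker class
               props['repositoryUrl'], props['userId'], props['password'],
               props['buildResultUUID'], props['tag']]
    def proc = cmd.execute()
    proc.waitForProcessOutput(System.out, System.err)
    System.exit(proc.exitValue())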

So I created a plug-in step that adds a tag to the list of tags on an RTC build result record. The information needed for this step is relatively small: the RTC URL, a username and password, the UUID of the build result record, and the tag to add. The Java code for this took a bit of time, but again, using existing examples made things much easier.
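
Here is a rough sketch of the worker's logic, shown in Groovy for brevity (the real worker is a plain Java program). I am reconstructing this from memory of the jazz.net examples, so treat the class and method names as assumptions to verify against your RTC release:

    import com.ibm.team.build.client.ClientFactory
    import com.ibm.team.build.common.model.IBuildResult
    import com.ibm.team.repository.client.IItemManager
    import com.ibm.team.repository.client.ITeamRepository
    import com.ibm.team.repository.client.TeamPlatform
    import com.ibm.team.repository.common.UUID
    import org.eclipse.core.runtime.NullProgressMonitor

    def (repoUrl, userId, password, buildResultUUID, tag) = args as List
    def monitor = new NullProgressMonitor()

    TeamPlatform.startup()
    try {
        ITeamRepository repo = TeamPlatform.teamRepositoryService.getTeamRepository(repoUrl)
        repo.registerLoginHandler({ r ->
            [getUserId: { userId }, getPassword: { password }] as ITeamRepository.ILoginHandler.ILoginInfo
        } as ITeamRepository.ILoginHandler)
        repo.login(monitor)

        // Fetch the build result by its UUID and work on a mutable copy
        def handle = IBuildResult.ITEM_TYPE.createItemHandle(UUID.valueOf(buildResultUUID), null)
        IBuildResult result = repo.itemManager().fetchCompleteItem(handle, IItemManager.DEFAULT, monitor)
        IBuildResult copy = result.workingCopy

        // Tags live in a single space-separated string on the build result (assumption)
        copy.tags = "${copy.tags ?: ''} ${tag}".trim()
        ClientFactory.getTeamBuildClient(repo).save(copy, monitor)
    } finally {
        TeamPlatform.shutdown()
    }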

The other step in the plug-in that I created was getting the RTC build result ID from the component version link. The URL to an RTC build result looks something like this:

hostname:9443/ccm/web/projects/JKE%20Banking%20(Change%20Management)
#action=com.ibm.team.build.viewDefinition&id=_PYarAGq4EeS_krMxQ9smpw

The id fragment at the end is the UUID of the build result record. So the plug-in step needs to get the link from the component version, parse that link to get the build record UUID, and then create an output property to hold that value so that the previously mentioned Java step can use it. There is a REST API call that can be used to get the link from a component version, and Groovy makes it easy to parse the full URL and retrieve the id, as in the sketch below.
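
A sketch of what the GetBuildId step body could look like. AirPluginTool is the standard helper that UCD hands to Groovy plug-in steps for reading inputs and writing outputs; the REST path used to fetch the links is illustrative, so capture the real call from the UI's network traffic:

    import com.urbancode.air.AirPluginTool
    import groovy.json.JsonSlurper

    // UCD passes the input and output property file paths as the step's arguments
    def apTool = new AirPluginTool(this.args[0], this.args[1])
    def props  = apTool.getStepProperties()

    // Hypothetical endpoint for fetching a version's links
    def conn = new URL("${props['serverUrl']}/rest/deploy/version/${props['versionId']}/links").openConnection()
    conn.setRequestProperty('Authorization',
            "Basic ${"${props['user']}:${props['password']}".bytes.encodeBase64()}")
    def links = new JsonSlurper().parse(conn.inputStream)

    // Find our link by name and pull the id fragment off the end of its URL
    def linkUrl = links.find { it.name == props['linkName'] }?.value
    def m = (linkUrl =~ /id=([^&]+)/)
    def buildResultUUID = m.find() ? m.group(1) : ''

    apTool.setOutputProperty(props['outputProperty'], buildResultUUID)
    apTool.storeOutputProperties()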

So here is a working example. I envision that at the end of a component deployment process you would add these two steps to update the build result record.
[Screenshot: the component deployment process with the two new steps]

Here are the details for the first step, GetBuildId. It takes the parameters mentioned above. Using component name and version name properties makes this step dynamic. The link name comes from creating the link through an ANT task when building the component. And the output property is simply whatever you want to call it.

[Screenshot: GetBuildId step properties]

Here are the properties for the second step. The RTC parameters are self-explanatory. The build result record UUID is the property set by the previous step. And the Java API location is where the user has unzipped the RTC Java API files on the agent server. The tag value also uses the current environment name property so that this step works for any environment.

[Screenshot: tag step properties]

Both of these steps work in my environment without a hitch. If you are interested in trying it out, you can get the plug-in source code here in this IBM DevOps Services project. Let me know how it goes.


UrbanCode Deploy and SmartCloud Orchestrator (Part 3)

In Part 1 we connected to the cloud provider and created a resource template from a virtual system pattern. In Part 2 we created an application blueprint from the resource template and mapped the application components to the blueprint. We then created a new application environment from the blueprint and the new virtual system was created. Once the environment is created and the agents come online, you can now execute your normal application deployment process onto the newly provisioned environment.

In Part 3, we will go one step further and explore what it would take to satisfy the ultimate goal of providing a true self-service environment creation mechanism for developers. Exploring this a bit further, let’s take a closer look at the use case.

The promise of cloud is that you can have readily available systems at the drop of a hat, or at least much, much faster than ever before. As a developer, I have some new code that I want to test in an isolated environment (we will explore some of the subtle but challenging details behind this idea at the end). It would be awesome if I could go to some portal somewhere, request that a new environment be provisioned and my application deployed to it, and never need any knowledge of SCO or UrbanCode Deploy. Well, this capability exists today.

To begin with, SmartCloud Orchestrator has a robust business process engine that allows you to create self-service capabilities with no need to understand what is under the covers. I have no experience with this myself but have seen the results. You can create processes and human tasks that can be executed from the SCO website. You then categorize your various self-serve processes.

The good part about this is that you have at your disposal a full development environment and run-time that can utilize existing programming concepts. Of course we will have to take advantage of the UrbanCode REST API or command line to be able to drive a deployment process.

Before going on, I want to confess that I have not had the opportunity to get this entire flow working from A to Z. I haven’t had access to a SCO environment and enough free time to make this all work. However, I am putting it out there because I believe this is doable.

In order to satisfy our desire to have a fully provisioned environment with an application deployed, we need to set up a process that can do the job. We can use a generic process to get our work done. There is a REST API call that can kick off a generic process, so our SCO self-service process can use it to drive the deployment. In principle, our generic process can look something like this:

[Screenshot: the generic process design]


The first step is to provision the environment. This step requires the environment name, the application name, and the blueprint name. These must be passed into the process, so you need to create process properties that can be referenced by this step. NOTE: When we provisioned an environment using the UrbanCode Deploy GUI, it asked us for information that the cloud provider needs. I am not sure how that info is passed here. There is a new command line option called getBlueprintNodePropertiesTemplate, and its description says that it "returns a JSON template of the properties required to provision a blueprint." This would need to be used and populated to ensure that all of the information is passed to the environment creation process. You might need to extend the create environment step to interrogate the blueprint, get the necessary properties, and ensure they are all populated. A rough sketch of this idea follows. If anyone out there has tried this, let me know what you find.
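
To make that idea concrete, here is a hedged sketch of driving the provisioning from a script (udclient authentication via DS_WEB_URL / DS_AUTH_TOKEN assumed). getBlueprintNodePropertiesTemplate is the documented command; the createEnvironment flags and the fillInCloudProperties helper are assumptions and placeholders:

    // Hypothetical values our SCO self-service process would pass in
    def appName = 'JKE Banking'
    def envName = 'dev-env-042'
    def blueprintName = '3-tier-web'

    // 1. Ask UCD which properties the blueprint needs (returns a JSON template)
    def template = ['udclient', 'getBlueprintNodePropertiesTemplate',
                    '-application', appName, '-blueprint', blueprintName].execute().text

    // 2. Populate the cloud values (image, flavor, key pair, ...). How these values
    //    get sourced is exactly the open question raised above.
    def filled = fillInCloudProperties(template)   // hypothetical helper
    new File('nodeProps.json').text = filled

    // 3. Request the environment. Assumption: this release's createEnvironment
    //    accepts a blueprint and a node-properties file.
    ['udclient', 'createEnvironment', '-application', appName, '-name', envName,
     '-blueprint', blueprintName, '-nodePropertiesFile', 'nodeProps.json']
            .execute().waitForProcessOutput(System.out, System.err)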

The other challenge is that we need to ensure the environment name is unique. There is an option in this step to do exactly that, and the plug-in step simply appends a random string to the end of the environment name. But this poses a problem for the next step.

Step two is to wait for the environment to be provisioned. We need to wait for the resources (agents) that will come online once the provisioned nodes are spun up. If you remember, the agent names will follow a pattern. However, if we allow the previous step to make the environment name unique, we will not be able to predict the agent names. Therefore, our self-service call to this process needs to specify the environment name and ensure it is unique.

Secondly, we need to somehow determine how many agents to wait for and their exact names. This will be a challenge, and as of right now I am not sure how I would solve it. It would most likely require a new plug-in that can interrogate a blueprint, get the names of the agent prototypes, and then construct the list of agents to wait for (something like the sketch below). Again, some plug-in development is required here.
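
The waiting itself is straightforward once the names are known. Here is a sketch of the polling logic such a plug-in step could implement; the /rest/agent endpoint and its JSON shape are assumptions based on what the UI appears to use, so confirm them against your server:

    import groovy.json.JsonSlurper

    // Hypothetical values: agent names constructed from the (unique) environment
    // name plus the blueprint's agent prototype names
    def serverUrl = 'https://ucd.example.com:8443'
    def auth = "Basic ${'admin:password'.bytes.encodeBase64()}"
    def expected = ['dev-env-042-WebNode', 'dev-env-042-DbNode']
    def deadline = System.currentTimeMillis() + 30 * 60 * 1000   // give up after 30 minutes

    while (System.currentTimeMillis() < deadline) {
        def conn = new URL("${serverUrl}/rest/agent").openConnection()
        conn.setRequestProperty('Authorization', auth)
        def online = new JsonSlurper().parse(conn.inputStream)
                .findAll { it.status == 'ONLINE' }*.name
        if (expected.every { online.contains(it) }) {
            println 'All agents are online; safe to deploy'
            return
        }
        sleep 30000   // poll every 30 seconds
    }
    throw new RuntimeException("Timed out waiting for agents: ${expected}")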

Once the agents have come up, we can deploy our application. This step is easy enough: we can call a well-known deploy process for the application to do the installation. But there is another challenge here. Deployments are done either from a snapshot or by specifying the version of each component. Snapshots are easy, but if we remember back to our original idea, a developer has some new code that he/she wants to test, and typically snapshots are not created until a group of components have been tested together. So we have a choice. We can have the environment provisioned from an existing snapshot and then manually add our own updates to test, or we can provide some mechanism to update/copy an existing snapshot to include a baseline plus the new stuff. This could take on lots of possibilities, but a thorough examination of the use case would be required in order to develop a solution. This also may not be the use case we care about.

One additional option would be to go a much more custom route and write an application that does this instead of relying on existing plug-in steps. The REST API and command line API are very rich, and we can ultimately get to and produce all of the information we need. It is nice to rely on existing capabilities, and processes are more robust and flexible than a custom solution. But as we have seen above, there are enough nuances in this effort requiring plug-in extensions or new plug-ins that it might make sense to go the fully custom application route.

Happy self-service!!! Let me know if anyone takes on this challenge.

UrbanCode Deploy and SmartCloud Orchestrator (Part 1)

The promise of self-serve environments is closer than ever, and by combining UrbanCode Deploy and SmartCloud Orchestrator, you can pretty much achieve utopia. So let’s examine the integration.

The job of SmartCloud Orchestrator is to build virtual patterns that are used to spawn virtual systems.  The beauty of this solution is that these patterns can represent the established deployment platforms for various technologies. For example, you can build a pattern that represents the standard web application topology for a given enterprise. This makes it easy for application teams to understand what they will be provided and what they must target for their applications.  It also makes the self-serve environment story achievable.  I simply spin up a pattern and I have an environment ready to go.

But let's examine in a bit more detail the application stack and the responsibilities of the cloud solution and the application deployment solution. The job of the cloud solution is to utilize its virtualization environments to create nodes with specified storage, memory, and processors, install the OS on top of the raw node, and also install the various middleware solutions that are necessary. The interesting part is the middleware configuration. It is my contention that the cloud solution should provide a configured middleware solution ready to accept an application. Using a WebSphere example, that means WAS is installed on all of the nodes, the necessary profiles are created, the node agents are installed across all nodes and connected back to the deployment manager, and clusters are created (if necessary). This is the value of the orchestration part of SCO. The WAS configuration can do nothing in this state, but it is ready to accept an application. It is now UrbanCode Deploy's job to install and configure the application onto the resulting virtual system.

There are a few things to keep in mind when creating virtual system patterns. Work with your pattern architect to ensure that these items are accomplished. First, you need to have the agent installed onto each node. Included in the UrbanCode Deploy plugins zip file is an agent package that can be added to SCO and then added to each node in your pattern. The agent name will be specified later, but the UrbanCode Deploy server and port need to be hard-coded into the pattern. Also, ensure that the agent installation is done as close to the end of the node stack as possible; once the agent is installed, running, and online in the UrbanCode Deploy server, you can assume your node is ready to go. Finally, work with your pattern architect to ensure he/she understands the details that are important to you. For example, you may need to know the installation location of your middleware solution. That location is important to the installation process, and the pattern engineer should know it can't be changed without some notification.

So now let's assume our pattern is ready to go and meets our needs. We can now integrate UrbanCode Deploy with SCO to quickly and easily create environments. Step one with UrbanCode Deploy is to make a connection to our cloud provider. This is easily done via the Resources tab in UrbanCode Deploy. Get the credentials from your cloud administrator. You need a user with permissions to view and instantiate virtual system patterns.

[Screenshot: creating a cloud connection]

Once you have your cloud connection, you can then create a resource template. Resource templates are the integration point between SCO and UCD. A resource template represents the virtual system pattern in UrbanCode Deploy, and it is where you spend your time thinking about how this pattern will be used by applications. Creating the initial resource template from SCO is easy. On the resource template page you have the option to import a resource template from the cloud. Using your existing cloud connection, you can interrogate SCO for its list of virtual system patterns. Pick the one you are interested in. Once completed, you get a simple template with each node of the pattern represented by an agent prototype.

Notice the agent prototype names in the resulting template below. These names may mean something to a pattern engineer, but they don't mean much to an application architect. This is another area where you can work with your pattern engineer to get meaningful node names.

[Screenshot: the imported resource template]


Now is when you need to think about the deployment processes that will use this pattern and flesh out the template so that it is usable by applications. The first step is to better organize your template. Application teams will need to map their application components to the resource template via an application blueprint, so make it easy on them and create folders to hold the agent prototypes. These folders serve two purposes. The first is to categorize the nodes and make each node's purpose clear. In our example above there are two nodes that look exactly alike, but in fact one has a middleware solution installed on it and the other has a database installed; there is no way to tell from their names. Use folders as a way to organize your template, something like this.

[Screenshot: the resource template organized with folders]

I also added a top-level folder in my example here. I should have named it something better, because when an environment is created from this template, it becomes the top-level folder in your resource tree. Name it something meaningful.

The other reason for adding a folder structure to your resource template is to be able to include properties as part of your template. This is where you can make your template valuable to applications. Remember that this single template can be used by many applications; after all, a standard topology should be the default used by all applications of a particular technology type. Put your deployment process designer hat on and think of the properties that will be needed by a deployment process. For example, you may want to expose the install location of a middleware solution, a manager URL for Tomcat, or a port number. Put those properties for each node in the folder that holds the agent prototype. The resource properties are then readily available to any component process that utilizes this template.

Also, at the top-level folder (Top in my example), I typically choose to include properties that identify the agent prototype names for all nodes involved in the pattern. In a multi-node situation it is typical to need, say, the IP address of the database node as part of the deployment process on the app server node. By having the agent name readily available as a property, you can easily interrogate that agent's properties in a single step to find its IP address, as in the snippet below.
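
As a small illustration, a component process step can consume these template properties with the standard ${p:...} reference syntax; UCD resolves the references before the step body runs. The property names here are hypothetical:

    // UCD substitutes ${p:...} references in the step body before the script runs.
    def installDir = '${p:resource/tomcat.install.dir}'   // set on the node's folder
    def dbAgent    = '${p:resource/db.agent.name}'        // set on the top-level folder

    // With the database agent's name in hand, a later step can look up that
    // agent's properties (such as its IP address) rather than hard-coding it.
    println "Installing to ${installDir}; database host agent is ${dbAgent}"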

In the next installment of this series, we will map an application to this resource template using an application blueprint. Happy deploying!!


Platform as a Service – Built-in DevOps

I like to keep myself in tune with what is going on in the world with all things DevOps, so I frequent a few places (the LinkedIn DevOps group, DevOps.com, etc.).  There are lots of good discussions and topics out there, and these fast-moving sites are a must to keep up with the world.  From a technical standpoint the topics usually center around the various tools and techniques involved in automation.   There is no arguing that many shops embracing DevOps start at the low technical level and work their way up.  I call this Startup DevOps (I doubt I can take credit for this term).  Most startups have very smart people and very little bureaucracy to cut through.  Get the job done faster and everyone is happy.   Using tools like Chef, Puppet, Vagrant, Glu, Jenkins, GIT, RunDeck, Fabric, Capistrano, CFEngine, yada yada yada, you can get the job done.  You can craft a very significant and powerful set of automation at very little cost (open source) and provide the fast-moving infrastructure to handle the fast-moving pace of startups.

Being from IBM, I tend to look at things a bit differently.  Most of the customers I deal with are at the other end of the spectrum.  With IT departments having staffs in the many thousands, there is bureaucracy at every turn.  Large enterprises like this tend to spend money with IBM (and others like us) to transfer risk.  Spend umpteen million with IBM and you only have to look in one direction to point the finger.  So IBM tends to create products that cater to these types of clients.  I use the term Enterprise DevOps for this situation (again, I can't take credit for the term).

IBM is spending billions (yes, with a B) on solutions that cater to these types of customers.  Cloud solutions are where the bulk of the effort is focused these days, and IBM offers quite a bit of choice here.  If you want private cloud, IBM has Pure Application Systems and SmartCloud Orchestrator to provide the Infrastructure as a Service (IaaS) capabilities.  Managing servers, storage, and networking in an incredibly flexible way is what this is all about.  IBM also has a public cloud offering in SoftLayer.  Let IBM manage your infrastructure and you don't need a data center anymore.  Nice.

Platform as a Service (PaaS) is the next big thing.  IBM is now introducing the ability to assemble a platform dynamically and provide all of the plumbing to connect those platform pieces in an automated way.  We have even connected our DevOps in the Cloud solution (JazzHub) with the IBM PaaS solution (BlueMix) in a way that offers a true cloud-based development environment that will automatically deploy to your PaaS infrastructure, all without lifting a finger.  By the way, take a look at this short YouTube video to get a quick overview of the landscape.

Let's take a closer look at BlueMix and JazzHub and see what I mean.  First, BlueMix allows you to create an infrastructure by assembling services.  You can start with some boilerplate templates that have already wired together infrastructure and services.  For example, the Java + DB Web Starter gives you a WAS Liberty Profile server and a DB2 database, all installed and ready to go.  This boilerplate gives you a sample application that runs as soon as your server starts.  You get a zip of the source code (we will visit this again later).

[Screenshot: BlueMix boilerplates]

Or you can build up your own infrastructure.  First, choose from a list of runtimes.

[Screenshot: choosing a runtime]

And then add services to your infrastructure.

[Screenshot: adding services]

In my case, after a few clicks and less than a minute, I had a server with WAS Liberty and DB2 deployed and running the sample application.  I didn't need a sysadmin to build me a server.  I didn't need a DB administrator to install DB2 and create a database for me.  I didn't need accounts created or ports opened.  All done seamlessly under the covers.  Point-and-click infrastructure assembly.  DevOps to the max.

But we need to develop our application (or enhance the boilerplate app), so we need a development environment. IBM offers JazzHub, a cloud-based development infrastructure.  JazzHub allows you to create a project with change management and source configuration management already set up and ready to go.

First, pick your source code management solution, Jazz or GIT.

[Screenshot: choosing source control in JazzHub]

Next, add some additional services, like auto-deploy to a BlueMix infrastructure.

And we have a project all set to go.  I can invite others to join my project and we can develop in the cloud as a team.  Here I have loaded the sample application source code into my JazzHub project.  I can modify the code right here if I want and push that code into my GIT master branch.

[Screenshot: the sample application code in JazzHub]

Or better yet, I can use Eclipse to develop my application using an IDE.  I have connected to my GIT repository and pulled the code down into my workspace.  I can use the GIT plugin to commit changes I have made to the GIT repository.

[Screenshot: the project in Eclipse]


And to tidy things up nicely, by turning on auto-deploy in my JazzHub project, every new push to my GIT repository by my team causes an immediate deployment to my BlueMix infrastructure.

[Screenshot: auto-deploy in JazzHub]

Holy continuous delivery.  There is an awful lot going on under the covers here.  But like I said above, you are offloading risk to your PaaS solution.  The interesting thing is that the price is relatively modest.  With subscription-type pricing you get this solution relatively cheaply.  (Note: I am not in sales, so don't ask me for a pricing quote.)   Customers now have a choice in pursuing their DevOps goals.  You can build from within by hiring smart people who have experience in the myriad of ever-changing open source DevOps tools, automate as much of the infrastructure creation and platform connectivity on your own, and hope that your smart people don't get hit by a bus.  Or you can subscribe to a PaaS solution like this one (or others out there) and, to steal a Greyhound slogan, "leave the driving to us."

I made this sound very simple and we know that there are lots of factors involved in determining the direction you go.  Some industries have a hard time with anything located outside of their walls due to regulatory issues or simply a fear of lack of control.  Some of the PaaS solutions will have on-premises options to allow you to bring the solution into your data center but your users won’t know the difference.  We all know that simple projects like this are not always the case.  The complex project portfolio of a large IT organization may require complex infrastructure that a PaaS solution cannot support.  But we are getting closer and closer to PaaS being a reality and I find it hard to believe that this isn’t a viable solution for a good portion of any typical IT application portfolio.

Everything is a Resource – Resource Templates

UrbanCode Deploy 6.0 introduced the concept of a resource tree.  It takes some getting used to, but overall it gives a nice Ops-centric view of the landscape of things.   But buried in the resource topic is the little-known yet powerful concept of resource templates.  Let's walk through the process of creating and using one.

Note:  My examples below use UrbanCode Deploy 6.0.  I am hoping things look a bit better in 6.0.1 from a user interface standpoint.

First, let’s create a new template.  On the Resources main tab, click on the Resource Templates sub-tab.  Click the Create New Sample link.  You will also notice that you can create a new resource template by connecting to a Cloud provider.  This I believe was the original reason for this feature.  Cloud patterns essentially define resource templates.  So by connecting to a Cloud provider, UrbanCode Deploy creates a resource template from the Cloud pattern.  Luckily for us, they also generalized that feature and let us create resource templates from scratch.

[Screenshot: creating a resource template]

In this case, we are going to create a new 3-tier topology resource template that can be used to deploy a 3-tiered application (OK, it's just made up, but it's good for an example).  Once I click Save, I get to define my template.  Using the Action menu on the base resource that gets created, we can create a series of sub-resources to represent the tiers.

[Screenshot: the template's sub-resources]

And finally, we can add Agent Prototypes to each sub-resource as placeholders for real agents.

Note:  You will notice that you can add a component to an agent prototype.  Why in the world would you want to do that?  In the rare case where you may have some generic component that should be applied to every instance of this template, you can define it here.

[Screenshot: agent prototypes added to the template]

We now have our completed template, available as a basis for an application environment.  But first we need to create an Application Blueprint, which inserts this resource template into a location in an application's resource tree.  Moving over to the Application main tab, selecting our application (JPetStore), and finally the Blueprints sub-tab, we can create our new blueprint.  During the creation process, we select the resource template that we want to use, which is the one we just created.

Once the blueprint is created, we can again use the Action menu for each agent prototype and assign the component from our application to the agent prototype.  This process is now mapping our application to an existing resource template, as shown below.

[Screenshot: mapping components to the blueprint]

Now that we have our application mapped to the resource template in the blueprint, we can create a new application environment from the blueprint.  Back to the JPetStore application environments page, we can create a new environment.

[Screenshot: creating an environment from the blueprint]

We give the environment a name, choose the blueprint we just created, and select the base resource where we want to insert this resource template.  This base resource is key and depends on how you have organized your resource tree.  If you organized things well, you can insert this new resource alongside the other resource nodes that define other environments for this application.

When you click Save, you get an error (with a horribly unhelpful message), but it lets you know that you still have a step to perform.  We have to assign real agents where we had agent prototypes.  Click on the newly created environment and we see its resource tree.  We can assign a real agent to each node in our tree using the Action menu.

[Screenshot: assigning real agents]

We now have a new environment with real agents assigned to our components, ready for deploy.  Well, almost.  Can you think of what might yet have to be defined?  How about environment properties?  Those will need to be defined if needed.  But once you do that, deploy away.

The Broad Brush of DevOps

Having been in the software development business for 25 years now, it is truly amazing to see how many old ideas are brushed off, polished up, and re-branded as something new.   Or taking a "new" concept that begins to take off and broadly applying it to other things.  DevOps is a great example.

DevOps, at its foundation, is a set of guiding principles that you can apply to the collective SDLC (software development lifecycle) to achieve some real benefits.  At its roots, DevOps is all about breaking down the silos between development and operations (see Systems Thinking in The Phoenix Project).  All good.  Not rocket science, but something worth striving for.  But how new are the concepts, really?  One of the websites I frequent (DevOps.com) uses the phrase "helping finish what agile development started" as a subtitle.  I love that sentiment.  Agile development provided new ways of looking at software development to achieve better results. Contrary to the opinion of some managers I have had, agile is not rocket science.  The biggest hurdle for some to get over was simply that it was different.  DevOps is again doing the same thing.

In today’s DevOps “market”, you can most likely find tools and processes across the SDLC that have been re-branded as part of a DevOps solution.  IBM is a master of this and rightly so.  Re-branding is a tried and true way of re-inventing offerings to make them appealing to a new audience.  But let’s not get caught up in the hype.   Implementing a change management solution is still a good thing to do, regardless of whether it falls under the DevOps umbrella or not.  Automating builds has always been and will continue to be a good thing, even when the next buzz-word craze comes along.

The great thing about this DevOps wave, to me, is that it has refocused the spotlight on some areas of the SDLC that have traditionally been under the radar and under-appreciated.  There were always the guys that could make it happen, that wrote and maintained the magic scripts, and that utilized special skills that no one else had.  The Phoenix Project highlighted the fact that these guys can be heroes, and also the bad guys, when it comes to mission-critical deployments. How to properly utilize "Brent" to take advantage of his knowledge without making him a bottleneck is an important lesson to learn.  Every organization has a few Brents, and the DevOps wave (thanks in part to the book) has helped elevate the need to capture Brent's knowledge in repeatable automation.  A continuous delivery solution like IBM UrbanCode Deploy can give you many Brents, available 24/7.

So be prepared for the onslaught of marketing campaigns that now re-brand every software development tool and process to be “Your DevOps Solution”.  But speaking as a former “Brent”, I am glad to see the Brents of the world being in the spotlight.

The uDeploy REST API

If you use IBM UrbanCode Deploy (as uDeploy is now called) at all, you will notice its simplicity.  And rightly so, as a deployment automation tool should not re-invent the way deployment automation is done.  UrbanCode Deploy simply organizes and collects deployment processes and steps.  Once a deployment is successful, any good deployment automation solution should be able to repeat that deployment over and over again with no trouble.

On the other hand, the integrations with other tools, both as a consumer and a producer, provide the real value.  And a valuable part of UrbanCode Deploy's integration capabilities is its REST API.  There are three ways to get information about the API.

1) The documentation – this is unfortunately not your best bet.  It is out of date and lacks a lot of necessary detail.

2) The Application WADL file – it does exist in the uDeploy server folder structure, but it is hard to decipher and also leaves out the JSON details.

3)  Browser development tools – this is the method that has been the most successful for me.  I use Chrome, and its developer tools allow you to see the network traffic occurring as you navigate the uDeploy web pages.  The uDeploy user interface heavily utilizes the REST API.  By capturing the network traffic as you navigate, you can see the specific REST calls that occur and examine the JSON payloads both in and out.

[Screenshot: Chrome developer tools showing uDeploy REST traffic]

I sought to build something that exercises the uDeploy REST API, and what I came up with is an example of how you can solve a common uDeploy requirement.  The process of onboarding an application to uDeploy involves many mouse clicks and an understanding of how to navigate the user interface.  As I said above, the concepts are easy once you understand them, but onboarding thousands of applications through the user interface does not scale, and you must train people on how to get their applications into uDeploy.

So I built a small website that captures some basic information about an application, its components, and its environments, and does the bulk of the setup work in uDeploy via the REST API, along the lines of the sketch below.   I wanted to capture the information in a way that a development team would understand without needing to know anything about uDeploy and its structure.
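
For a flavor of what the site does behind the form, here is a hedged sketch of driving the setup through the /cli REST endpoints from Groovy. The paths and payload fields are from memory of the version I used, so capture the exact calls from the UI as described above:

    import groovy.json.JsonOutput

    def serverUrl = 'https://ucd.example.com:8443'   // hypothetical server
    def auth = "Basic ${'admin:password'.bytes.encodeBase64()}"

    // Small helper that PUTs a JSON body to a /cli endpoint
    def put = { String path, Map body ->
        def conn = new URL(serverUrl + path).openConnection()
        conn.requestMethod = 'PUT'
        conn.doOutput = true
        conn.setRequestProperty('Authorization', auth)
        conn.setRequestProperty('Content-Type', 'application/json')
        conn.outputStream << JsonOutput.toJson(body)
        println "${path} -> HTTP ${conn.responseCode}"
    }

    // One call per element captured on the form; components, environments, and
    // process creation follow the same pattern with their own endpoints.
    put('/cli/application/create', [name: 'JPetStore', description: 'Onboarded via REST'])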

You can see a demo of this sample application at this link:  http://www.youtube.com/watch?v=qr3bdCJykEk.

[Screenshot: the onboarding web application]

I would welcome any feedback.

Thanks.