Innovate Here We Come

I have been so busy lately, and with Innovate next week I have had even less time to post. But if you are lucky enough to be going to Innovate, here are a few items you should check out.

1.  Plugin development Birds-of-a-Feather – Come to my plugin development BoF on Monday and let’s talk plugins.  I am heading up the effort to get the word out on plugin development for UrbanCode Deploy.

2.  Check out the new announcements in the DevOps space.  Some pretty cool stuff coming out.

Look me up if you are there.

See you in Orlando.


Continuous Deployment and Databases

Dealing with databases within a continuous deployment strategy can be challenging. Databases do not subscribe to the same build/deploy concepts that applications do.

  1. It is recommended that builds produce the complete, deployable application each and every time. Databases, on the other hand, are not re-created or re-deployed each time; only changes are applied.
  2. Rollbacks are easy with applications: simply re-deploy the previous version. Rollbacks are hard with databases and must be taken into account at all times.
  3. Application file names typically stay the same from build to build. Database update file names (SQL scripts) do not necessarily stay the same at all; each update may produce a unique set of SQL scripts whose names have no relevance to any other version.
  4. Initial installations of an application should be the same installation process as an update. Initial installations of a database are usually not scripted and are usually handled outside of the normal deployment process (i.e. the DBAs will handle it!!).


And there are no silver bullets or magic dust when it comes to automating database deployments and integrating them into your continuous delivery strategy. Here are some things to think about.

Use an industry solution – This is not a new problem, and others have created solutions to help. Liquibase is one such solution. Solutions like this make the database version-aware. Typically a tracking table is added to the database to record each update version, the scripts that got the database to that version, and the rollback scripts to get it back to the previous version. Using a solution like this makes continuous deployment to databases much more concrete and easier to build into a strategy.
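
To make that concrete, here is a minimal sketch of a Liquibase XML changelog (the table, column, and author names are made up). Liquibase runs each changeSet exactly once, recording it in the tracking table it manages, and the rollback element tells it how to back the change out:

<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <!-- One changeSet per database update; Liquibase runs each exactly once
         and records it in its tracking table -->
    <changeSet id="add-status-column" author="jdoe">
        <addColumn tableName="orders">
            <column name="status" type="varchar(20)" defaultValue="NEW"/>
        </addColumn>
        <!-- Tells Liquibase how to undo this change on rollback -->
        <rollback>
            <dropColumn tableName="orders" columnName="status"/>
        </rollback>
    </changeSet>
</databaseChangeLog>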

Focused SQL scripts – If you choose not to go the route of something like Liquibase, then a more structured and focused effort needs to be put into your SQL scripts to ensure automation success. Here is a typical situation I run into. A DBA is brought into an UrbanCode Deploy proof-of-concept. They hand over a set of SQL scripts that coincide with an application version (which is sometimes a cause for celebration in itself). However, these SQL scripts are typically run by humans. A human watches each script and its results and proceeds with the next script in the sequence if the first one is successful.

First of all, that means a human is in charge of determining whether a script succeeded, and you cannot rely on a script erroring out as the only indicator of failure. You may need to run a query to get a value from a table and then create something new based on the query results. Or you may need to count the rows in a table to determine how many of something needs to be created. Second, a human determines what script to run next. There may be no indication of execution order in the script file names, and the only one who knows what comes next is the human.

So in order to make database updates automatable, you have to put some discipline into your script writing. A script needs to be able to make all of its decisions programmatically, with no human intervention. It should error out if a problem occurs, and a successful return status from a script should indicate that all is good. Also, there should be some mechanism set up to programmatically determine the order in which scripts run: either use a file name pattern or supply a separate file that lists the order of execution. Rollback scripts also need to be provided.
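
As a sketch of that discipline (the table names and numbering convention are made up for illustration), a script might look like this:

-- 003_add_status_column.sql
-- The numeric prefix encodes execution order, so automation can sort the
-- scripts and no human is needed to decide what runs next.

-- Guard: this statement fails (and the SQL client returns non-zero) if the
-- prerequisite table does not exist, instead of relying on a human to
-- eyeball the output.
SELECT 1 FROM orders WHERE 1 = 0;

-- The actual change.
ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'NEW';

-- A matching rollback script (e.g. rollback/003_add_status_column.sql)
-- would contain: ALTER TABLE orders DROP COLUMN status;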

UrbanCode Deploy can handle either approach very well. It has a plug-in for an open source solution similar to Liquibase (the DBUpgrader plug-in). Or you can follow the disciplined approach above, and UrbanCode Deploy will execute your SQL scripts in the prescribed order. Rollback scripts should follow the same pattern, and UrbanCode Deploy can have a rollback process for the database component as well.

Including database updates in a continuous deployment strategy is a good thing, but is easier said than done. It requires some forethought and a strategy. Getting the DBAs to the table may be the biggest challenge to overcome 🙂

New UrbanCode Deploy Plug-in Capabilities – Auto-Discovery

A few new plug-in capabilities were added in UrbanCode Deploy 6.0.1 that have gone relatively unnoticed.  I would like to take the opportunity to highlight them in a few posts.

First, the ability for an agent to auto-discover things is a new plug-in capability. Let’s explore how it works. The purpose of this feature is to give you a jump start on configuring your resources and agent/machine properties by having an agent proactively look for the existence of things on the machine it is running on. For example, if you are a WebSphere shop, you most likely make extensive use of the WebSphere plug-in, and part of configuring it involves capturing the location of WebSphere on a given server. The WebSphere plug-in has that capability today.

If you look at the WebSphere plug-in, you will see two unique steps in the plug-in. One is called WebSphere Discovery. You will notice if you open the plugin.xml file that the step has a new element in its definition:

<server:type>AUTO_DISCOVERY</server:type>

This type of step causes special behavior to occur when a new agent is added to a resource in the resource tree.  Every auto-discovery step in every plug-in gets run by the agent when this occurs.  For the WebSphere plug-in, this step looks for the existence of WebSphere by searching the standard WebSphere installation locations.  If it finds one, it creates a sub-resource to represent the WebSphere cell and sets a role property on that resource defining the path to the WebSphere install found on that machine.
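
For context, here is a trimmed-down sketch of the shape of such a step definition (the step name mirrors the WebSphere plug-in, but the description and script name are illustrative, not copied from the shipped plugin.xml):

<step-type name="WebSphere Discovery">
    <description>Search standard locations for a WebSphere install</description>
    <!-- This element is what marks the step as an auto-discovery step;
         the agent runs it when it is first added to the resource tree -->
    <server:type>AUTO_DISCOVERY</server:type>
    <command program="${GROOVY_HOME}/bin/groovy">
        <arg file="discovery.groovy"/>
    </command>
</step-type>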

Like I said earlier, auto-discovery steps are automatically run when a new agent resource is defined in the resource tree.  Auto-configure steps, which we will get to shortly, are manually executed on specific resources that have auto-configure roles applied to them.

At a minimum this saves you some typing.  But imagine if you have hundreds or thousands of WebSphere servers. This helps to ensure that you always have the right WebSphere information for every server.

But there is more you could automatically learn about a WebSphere box.  There are WebSphere nodes, servers, etc. that could also be discovered.  Or maybe you want to create some things in WebSphere once you know WebSphere is there.  That is where the other feature, auto-configure, comes in.

Again, there is an auto-configure step in the WebSphere plug-in.

<server:role>WebSphereCell</server:role>
<server:type>AUTO_CONFIGURE</server:type>

The additional step element indicates what type of resource role this auto-configure step can be run on.  In the WebSphere case, the previous auto-discovery step identified WebSphere and created a sub-resource identifying the WebSphere cell.  You can now run auto-configure on this WebSphere cell resource.  However, before you do, you must provide some additional information on the WebSphereCell resource, namely the WebSphere username and password.  The auto-configure step then goes out, discovers the full WebSphere architecture, and creates the necessary sub-resources to capture that architecture.

The last thing present in the plugin.xml file is the set of properties associated with any of the resource roles that your auto-discovery or auto-configure steps will create.  These property groups define the properties that will automatically get applied to resource roles when they are created.
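
As a purely hypothetical sketch of the shape of such a definition (the element and property names below are illustrative; check the plugin.xml of a shipped plug-in like WebSphere for the real ones):

<!-- Hypothetical: defines the properties stamped onto any resource
     that gets the WebSphereCell role -->
<server:role name="WebSphereCell">
    <server:property name="websphere.cell"
                     description="The name of the WebSphere cell"/>
    <server:property name="websphere.profilePath"
                     description="Path to the WebSphere profile"/>
</server:role>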

In my next post we will talk about the ability to include processes and templates with your plug-in.

Platform as a Service – Built-in DevOps

I like to keep myself in tune with what is going on in the world of all things DevOps, so I frequent a few places (the LinkedIn DevOps group, DevOps.com, etc.).  There are lots of good discussions and topics out there.  These types of fast-moving sites are a must to keep up with the world.  From a technical standpoint, the topics usually center around the various tools and techniques involved in automation.   There is no arguing the fact that many shops out there embracing DevOps start at the low technical level and work their way up.  I call this Startup DevOps (I doubt I can take credit for the term).  Most startups have very smart people and very little bureaucracy to cut through.  Get the job done faster and everyone is happy.   Using tools like Chef, Puppet, Vagrant, Glu, Jenkins, GIT, RunDeck, Fabric, Capistrano, CFEngine, yada yada yada, you can get the job done.  You can craft a very significant and powerful set of automation at very little cost (open source) and provide the fast-moving infrastructure to handle the fast-moving pace of startups.

Being from IBM, I tend to look at things a bit differently.  Most of the customers I deal with are at the other end of the spectrum.  With IT departments having staffs in the many thousands, there is bureaucracy at every turn.  Large enterprises like this tend to spend money with IBM (and others like us) to transfer risk.  Spend umpteen million with IBM and you only have to look in one direction to point the finger.  So IBM tends to create products that cater to these types of clients.  I use the term Enterprise DevOps for this situation (again, I can’t take credit for the term).

IBM is spending billions (yes, with a B) on solutions that cater to these types of customers.  Cloud solutions are where the bulk of the effort is focused these days, and IBM offers quite a bit of choice here.  If you want private cloud, IBM has PureApplication System and SmartCloud Orchestrator, which provide the Infrastructure as a Service (IaaS) capabilities.  Managing servers, storage, and networking in an incredibly flexible way is what this is all about.  IBM also has a public cloud offering in SoftLayer.  Let IBM manage your infrastructure and you don’t need a data center anymore.  Nice.

Platform as a Service (PaaS) is the next big thing.  IBM is now introducing the ability to assemble a platform dynamically and provide all of the plumbing to connect those platform pieces in an automated way.  We have even connected our DevOps in the Cloud solution (JazzHub) with the IBM PaaS solution (BlueMix) in a way that offers a true cloud-based development environment that will automatically deploy to your PaaS infrastructure, all without lifting a finger.  By the way, take a look at this short YouTube video to get a quick overview of the landscape.

Let’s take a closer look at BlueMix and JazzHub to see what I mean.  First, BlueMix allows you to create an infrastructure by assembling services.  You can start with some boilerplate templates that have already wired together infrastructure and services.  For example, the Java + DB Web Starter gives you a WAS Liberty Profile server and a DB2 database, all installed and ready to go.  This boilerplate gives you a sample application that runs as soon as your server starts.  You also get a zip of the source code (we will visit this again later).


Or you can build up your own infrastructure.  First, choose from a list of runtimes.


And then add services to your infrastructure.


In my case, after a few clicks and less than a minute later, I had a server with WAS Liberty and DB2 deployed and running the sample application.  I didn’t need a sysadmin to build me a server.  I didn’t need a DB administrator to install DB2 and create a database for me.  I didn’t need accounts created or ports opened.  It was all done seamlessly under the covers.  Point-and-click infrastructure assembly.  DevOps to the max.

But we need to develop our application (or enhance the boilerplate app), so we need a development environment. IBM offers JazzHub, a cloud-based development infrastructure.  JazzHub allows you to create a project that provides change management and source configuration management already set up and ready to go.

First, pick your source code management solution, Jazz or GIT.


Next, add some additional services, like auto-deploy to a BlueMix infrastructure.

And we have a project all set to go.  I can invite others to join my project and we can develop in the cloud as a team.  Here I have loaded the sample application source code into my JazzHub project.  I can modify the code right here if I want and push that code into my GIT master branch.


Or, better yet, I can use Eclipse and develop my application in a full IDE.  I have connected to my GIT repository and pulled the code down into my workspace.  I can use the GIT plugin to commit the changes I make back to the GIT repository.



And to tidy things up nicely, by turning on auto-deploy in my JazzHub project, every new push to my GIT repository by my team causes an immediate deployment to my BlueMix infrastructure.
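
From a plain command line, that flow looks roughly like this (the repository URL shape is illustrative; use the one your JazzHub project shows you):

git clone https://hub.jazz.net/git/myuser/myproject
cd myproject
# ...edit the code, then...
git commit -am "Tweak the sample application"
git push origin master   # this push is what triggers the BlueMix deployment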


Holy continuous delivery.  There is an awful lot going on under the covers here.  But like I said above, you are offloading risk to your PaaS provider.  The interesting thing is that the price is relatively modest; with subscription-type pricing you get this solution relatively cheaply.  (Note: I am not in sales, so don’t ask me for a pricing quote.)   Customers now have a choice in pursuing their DevOps goals.  You can build from within by hiring smart people who have experience in the myriad of ever-changing open source DevOps tools, automate as much of the infrastructure creation and platform connectivity on your own as you can, and hope that your smart people don’t get hit by a bus.  Or you can subscribe to a PaaS solution like this one (or others out there) and, to steal a Greyhound slogan, “leave the driving to us.”

I made this sound very simple, and we know that there are lots of factors involved in determining the direction you go.  Some industries have a hard time with anything located outside their walls, due to regulatory issues or simply a fear of losing control.  Some of the PaaS solutions will have on-premises options that let you bring the solution into your data center without your users knowing the difference.  We also know that simple projects like this one are not always the case; the complex project portfolio of a large IT organization may require complex infrastructure that a PaaS solution cannot support.  But we are getting closer and closer to PaaS being a reality, and I find it hard to believe that this isn’t a viable solution for a good portion of any typical IT application portfolio.

UrbanCode Deploy and WebSphere – an Ever Growing Relationship

My experience with WebSphere Application Server started when IBM acquired Rational Software some 11(?) years ago.  WebSphere Studio Application Developer fell under the Rational umbrella, and I was one of those who tried to tackle its capabilities.  And for the longest time, dealing with WebSphere Application Server meant no more than “right-mouse-click -> Run on Server.”

I will admit that I am not a WAS admin.  It is a career unto itself, and those who are good at it are worth keeping.  But over the years, Rational continued to challenge me in the WebSphere arena with the release of Rational Automation Framework for WebSphere (RAFW).  RAFW codified hundreds of common WebSphere “gestures” that could be rolled together into hard-working jobs that did much of the heavy lifting in managing and maintaining WebSphere Application Servers.  It also had the ability to auto-discover WebSphere configurations and push those configurations to other servers, even ones running different versions of WAS.

Then along comes UrbanCode Deploy, and we now have a deployment engine that could sure use some, if not all, of the capabilities of RAFW.  We are now beginning to see some of them rolled into UrbanCode Deploy.  This post is the first in a series examining the Middleware Configuration for WebSphere Application Server (MCWAS) plugin for UrbanCode Deploy.

To begin, there are actually two plugins that come into play.  The MCWAS plugin has commands that are all about capturing, comparing, and using WebSphere configurations.  Some of the RAFW capabilities are captured in the MCWAS plugin (but not all; stay tuned to future UCD releases for improvements).  A PDF file in the plugin zip has details about how to accomplish what we are doing in this post.  The Application Deployment for WebSphere plugin contains 64 commands (as of version 71) that do the bulk of the work in updating and tweaking WebSphere configuration items.  That number is not even close to what RAFW supported, but it is getting there. By the way, you can always get up-to-date plugins from IBM here.  Both of these plugins need to be installed into your UCD server.

Next, let’s examine my test environment.  First, I have a UCD server and agent running on my laptop.  And like I said before, I am not a WAS admin, so I simply have WAS 8.5 running on my laptop with the Plants By WebSphere sample application installed and running.  I can follow directions so I got that app up and running pretty quickly.  Our use case is that I want to capture the configuration of that WAS instance and re-use it to deploy to another server.  We will use the MCWAS capabilities in UCD to do that.

1.  First we need to do some setup in UCD.  We need to set up our agent so that it can auto-discover WebSphere.  If WebSphere is installed in its default location, we don’t have to do anything.  If not, then we need to make sure UCD knows how to find WebSphere: we add a wsadmin.path property to the agent whose value is the path to the wsadmin.bat (or wsadmin.sh) file, as shown below.  Also, we need a MCWAS component that will be used to hold the configuration.  I created a new component and based it on the Middleware Configuration for WebSphere template.  This component is the center of the deployment process.
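
For example, the agent property might look like this (the install path is illustrative; point it at your own wsadmin script):

wsadmin.path=/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh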

2.  Next we create our resources.  You are free to create your own resource tree to represent whatever environment you have, but mine is simple.


3.  Next we add the agent to the resource.  If we wait a few seconds and refresh, we see that our agent has auto-discovered WebSphere and we get an additional resource below that agent that represents our WebSphere cell.  Magic!!!  We didn’t have to do anything to tell UCD; it just went out and found it for us.  Cool.


4.  The cell resource also has a bunch of role properties defined.  We need to fill those in to help UCD auto-discover what is underneath that cell.  If we look at the properties of the WebSphereCell resource, we can see the full list.  Actually, in my case all I need to fill in is the WebSphere profile path.  This tells UCD which profile I am interested in (there is a way to auto-discover multiple cells, but we will keep it simple here).  We can leave the rest blank, especially the cell name; we will have auto-discovery get that for us.


5.  Back to the resource tree, I can now tell UCD to auto-configure the cell.  In the Actions menu of the cell resource, select Auto Configure.


6.  A dialog pops up asking us to specify the auto-configure steps.  We don’t have any steps configured yet, so click on the “No auto configure steps for resource” link (yes, it is a link; UCD needs a bit of user interface improvement here).


7.  A new dialog is presented that lets us pick an auto-configure step.  Lo and behold, there is currently only one choice.  Select WebSphere Topology Discovery and click OK.


8.  Back to the Auto Configure dialog, we now have 1 step defined.  Click Save.


9.  If we wait a few seconds (in my case it was about 15; I am sure in more complicated environments it might take a minute or so) and refresh our resources, more magic happens!!!  UCD has now interrogated our WAS cell and found the node name and server name below it.  You can also go back into the cell properties and see that the cell name property has been filled in.


This structure should represent your true WAS configuration.  For those of you out there that are real WAS admins and have some real servers running with real configurations, try this and see if UCD gets it right.

Our next post in this series will now take advantage of what we have auto-discovered and use it in a deployment.  Stay tuned.

Quick Post – UrbanCode Deploy REST API and the pesky SSL certificate

Just a quick post for those who are playing with the UrbanCode Deploy REST API.  If you are building a Java solution that utilizes the UrbanCode Deploy REST API, you have no doubt run into the SSL certificate issue.   The default installation of UrbanCode Deploy uses a secure socket layer protocol, and therefore you have to deal with SSL and its certificate.  Most browsers deal with this by presenting a warning and allowing you to pass at your own risk.


We are all used to that Chrome warning.  You would be surprised at how many internal RTC and UCD servers within IBM don’t bother installing a real certificate 🙂

However, when you are writing Java code that accesses the REST API, the Java JVM is not as forgiving.  You have to deal with it.  Well, after going to the source of all worldly knowledge, Google, I found an easy solution.  I use Rational Software Architect as my Eclipse IDE, but obviously this will work with any Eclipse-based IDE (any JVM-based IDE, for that matter).  Go and get this file.  Create a Java project and add this Java file to it.

In my version, I commented out the section that determines the keystore location (File file = new File…) and replaced it with a single line that points to the JRE that is part of RSA.

File file = new File("C:/Program Files (x86)/IBM/SDP/jdk/jre/lib/security/cacerts");

You then simply run the Java program, passing it the URL (https://<hostname>:8443) of your UrbanCode Deploy server.  Apparently the code attempts to connect and fails, but as part of the SSL handshake process it grabs the server certificate and puts it into the keystore for your JRE.  I am not sure exactly how it works, but it worked for me, so I didn’t dig deeper.
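
For the curious, here is a minimal sketch of what a utility like that does, assuming the default cacerts password of changeit and write access to the JRE’s security directory (the class name and details are mine, not the original file):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class GrabCert {
    public static void main(String[] args) throws Exception {
        String host = args[0];  // e.g. myucdserver.example.com
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8443;
        char[] storePass = "changeit".toCharArray();  // default cacerts password

        // Load the JRE's existing trust store
        String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(cacerts)) {
            ks.load(in, storePass);
        }

        // A trust manager that accepts any server but remembers its chain
        final X509Certificate[][] seen = new X509Certificate[1][];
        TrustManager tm = new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] chain, String auth) { }
            public void checkServerTrusted(X509Certificate[] chain, String auth) {
                seen[0] = chain;  // capture the certificate chain mid-handshake
            }
            public X509Certificate[] getAcceptedIssuers() {
                return new X509Certificate[0];
            }
        };

        // Handshake once just to capture the certificate
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { tm }, null);
        try (SSLSocket socket = (SSLSocket) ctx.getSocketFactory().createSocket(host, port)) {
            socket.startHandshake();  // triggers checkServerTrusted above
        }

        // Add the server's certificate to cacerts so future connections succeed
        ks.setCertificateEntry(host, seen[0][0]);
        try (FileOutputStream out = new FileOutputStream(cacerts)) {
            ks.store(out, storePass);
        }
        System.out.println("Added certificate for " + host);
    }
}

Run it once against your server, using the same JRE you develop with, and subsequent REST calls from that JRE should trust the certificate.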

The next time you attempt to run some Java code that accesses an UrbanCode Deploy REST API from your IDE, you should not have any more SSL certificate issues getting in your way.   Of course, you still have to deal with the certificate “for real” when it comes time for production, but this should get you through the debugging stage.

Everything is a Resource – Resource Templates

UrbanCode Deploy 6.0 introduced the concept of a resource tree.  It takes some getting used to, but overall it gives a nice Ops-centric view of the landscape of things.   But buried in the resource topic is the little-known yet powerful concept of resource templates.  Let’s walk through the process of creating and using one.

Note:  My examples below use UrbanCode Deploy 6.0.  I am hoping things look a bit better in 6.0.1 from a user interface standpoint.

First, let’s create a new template.  On the Resources main tab, click on the Resource Templates sub-tab, then click the Create New Sample link.  You will also notice that you can create a new resource template by connecting to a cloud provider.  This, I believe, was the original reason for the feature: cloud patterns essentially define resource templates, so by connecting to a cloud provider, UrbanCode Deploy creates a resource template from the cloud pattern.  Luckily for us, they also generalized the feature and let us create resource templates from scratch.


In this case, we are going to create a new 3-tier topology resource template that can be used to deploy a 3-tiered application (OK, it’s just made up, but it’s good for an example).  Once I click Save, I get to define my template.  Using the Action menu on the base resource that gets created, we can create a series of sub-resources to represent the tiers.


And finally, we can add Agent Prototypes to each sub-resource as placeholders for real agents.

Note:  You will notice that you can add a component to an agent prototype.  Why in the world would you want to do that?  In the rare case where you have some generic component that should be applied to every instance of this template, you can define it here.


We now have our completed template, and it is available as a basis for an application environment.  But first we need to create an Application Blueprint, which inserts this resource template into a location in an application’s resource tree.  Moving over to the Application main tab, selecting our application (JPetStore), and finally the Blueprints sub-tab, we can create our new blueprint.  During the creation process, we select the resource template that we want to use, which is the one we just created.

Once the blueprint is created, we can again use the Action menu for each agent prototype and assign the component from our application to the agent prototype.  This process is now mapping our application to an existing resource template, as shown below.


Now that we have our application mapped to the resource template in the blueprint, we can create a new application environment from the blueprint.  Back to the JPetStore application environments page, we can create a new environment.


We give the environment a name, choose the blueprint we just created, and select the base resource where we want to insert this resource template.  This base resource is key and depends on how you have organized your resource tree.  If you organized things well, you can insert this new resource alongside the other resource nodes that define the other environments for this application.

When you click Save, you get an error (one that is horribly worded), but it does let you know that you still have a step to perform.  We have to assign real agents where we had agent prototypes.  Click on the newly created environment and we see its resource tree.  We can assign a real agent to each node in our tree using the Action menu.


We now have a new environment with real agents assigned to our components, ready to deploy.  Well, almost.  Can you think of what might still have to be defined?  How about environment properties?  Those will need to be defined if needed.  But once you do that, deploy away.