Continuous Deployment and Databases

Dealing with databases within a continuous deployment strategy can be challenging. Databases do not subscribe to the same build/deploy concepts that applications do.

  1. It is recommended that builds produce the complete, deployable application every time. Databases are not re-created or re-deployed each time; only changes are applied.
  2. Rollbacks are easy with applications: simply re-deploy the previous version. Rollbacks are hard with databases and must be taken into account at all times.
  3. Application file names typically stay the same from build to build. Database update file names (SQL scripts) rarely do. Each update may produce a unique set of SQL scripts whose names are typically not relevant to any other version.
  4. The initial installation of an application should use the same process as an update. Initial installations of a database are usually not scripted and are usually handled outside the normal deployment process (i.e., the DBAs will handle it!).


There are no silver bullets or magic dust when it comes to automating database deployments and integrating them into your continuous delivery strategy. Here are some things to think about.

Use an industry solution – This is not a new problem, and others have created solutions to help. Liquibase is one such solution. Solutions like this make the database version-aware. Typically, a table is added to the database to keep track of each update version, the scripts that got it to that version, and the rollback scripts to get it back to the previous version. Using a solution like this makes continuous deployment for databases much more concrete and easier to build into a strategy.
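As a rough sketch of that tracking idea, the script below uses a plain text file in place of the real database table, and the migration script names are hypothetical examples created just to make the sketch runnable:

```shell
#!/bin/sh
# Minimal sketch of Liquibase-style version tracking: a plain text file
# stands in for the tracking table. Migration names are hypothetical.
mkdir -p migrations
printf 'CREATE TABLE widget (id INT);\n' > migrations/001_create_widget.sql
printf 'ALTER TABLE widget ADD name VARCHAR(50);\n' > migrations/002_add_name.sql

APPLIED=applied_versions.txt
touch "$APPLIED"

for script in migrations/*.sql; do
  version=$(basename "$script")
  # Skip anything already recorded as applied, so re-runs are idempotent.
  if grep -qx "$version" "$APPLIED"; then
    echo "skipping $version (already applied)"
    continue
  fi
  echo "applying $version"
  # Real use: run the script against the database here, e.g.:
  #   psql "$DB_URL" -f "$script" || exit 1
  echo "$version" >> "$APPLIED"
done
```

Running it a second time applies nothing, which is exactly the property the tracking table gives you: the database "knows" what version it is at.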

Focused SQL scripts – If you choose not to go the route of something like Liquibase, then a more structured and focused effort needs to be put into the SQL scripts to ensure automation success. Here is a typical situation I run into. A DBA is brought into an UrbanCode Deploy proof-of-concept. They hand over a set of SQL scripts that coincide with an application version (which sometimes is a cause for celebration in itself). However, these SQL scripts are typically run by humans. A human watches each script and its results and proceeds to the next script in the sequence if the first one is successful. First of all, a human is in charge of determining whether a script succeeded, and you cannot rely on a script erroring out to indicate failure. You may need to run a query to get a value from a table and then create something new based on the query results. Or you may need to count the rows in a table to determine how many of something needs to be created. Second, a human determines what script to run next. There may be no indication of execution order in the script file names, and the only one who knows what comes next is the human.

So in order to make database updates automatable, you have to put some discipline into your script writing. A script needs to make all of its decisions programmatically, with no human intervention. It should error out if a problem occurs, and a successful return status from a script should indicate that all is good. There should also be some mechanism set up to programmatically determine the order in which scripts run: either use a file name pattern or supply a separate file that lists the order of execution. Rollback scripts also need to be provided.
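That discipline can be sketched in a few lines of shell. Here the execution order comes from a separate manifest file, and any non-zero exit status aborts the whole sequence; all file names are hypothetical examples:

```shell
#!/bin/sh
# Sketch of a disciplined, human-free update run: a manifest file fixes
# the execution order, and any failing command aborts the sequence.
set -e  # stop at the first failing command

# The manifest lists the scripts in the exact order they must run.
cat > execution-order.txt <<'EOF'
010_schema.sql
020_data.sql
EOF
printf 'CREATE TABLE t (id INT);\n' > 010_schema.sql
printf 'INSERT INTO t VALUES (1);\n' > 020_data.sql

while read -r script; do
  echo "running $script" | tee -a run.log
  # Real use: invoke the database client here and let its exit status be
  # the only success signal, e.g.:
  #   sqlplus -S "$CONN" @"$script"
done < execution-order.txt
echo "all scripts completed"
```

The key design point is that the script's return code is the single source of truth for success, so a deployment tool can chain these runs without a human watching.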

UrbanCode Deploy can handle either approach very well. It has a plug-in (the DBUpgrader plug-in) for an open source solution similar to Liquibase. Or you can follow the disciplined approach and have UrbanCode Deploy execute SQL scripts in a prescribed order. Rollback scripts should follow the same pattern, and UrbanCode Deploy can have a rollback process for the database component as well.
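A rollback process following the same pattern simply replays the ordering manifest in reverse, running the paired rollback script for each update. This is a sketch under assumed conventions (hypothetical file names, rollback scripts paired by the suffix `.rollback.sql`):

```shell
#!/bin/sh
# Sketch of the matching rollback process: the same ordering manifest is
# replayed in reverse. Names and the pairing convention are hypothetical.
cat > execution-order.txt <<'EOF'
010_schema.sql
020_data.sql
EOF
printf 'DELETE FROM t WHERE id = 1;\n' > 020_data.rollback.sql
printf 'DROP TABLE t;\n' > 010_schema.rollback.sql

# sed reverses the line order portably (tac is not available everywhere).
sed '1!G;h;$!d' execution-order.txt | while read -r script; do
  rollback="${script%.sql}.rollback.sql"
  echo "rolling back with $rollback" | tee -a rollback.log
  # Real use: run $rollback against the database and abort on failure.
done
```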

Including database updates in a continuous deployment strategy is a good thing, but is easier said than done. It requires some forethought and a strategy. Getting the DBAs to the table may be the biggest challenge to overcome 🙂

New UrbanCode Deploy Plug-in Capabilities – Auto-Discovery

A few new plug-in capabilities were added in UrbanCode Deploy 6.0.1 that have gone relatively unnoticed.  I would like to take the opportunity to highlight them in a few posts.

First, the ability for an agent to auto-discover things is a new plug-in capability. Let’s explore how it works. The purpose of this feature is to get a jump start on configuring your resources and agent/machine properties by having an agent proactively look for the existence of things on the machine it is running on. For example, if you are a WebSphere shop you most likely make extensive use of the WebSphere plug-in. And part of that process involves capturing the location of WebSphere on a given server. The WebSphere plug-in has that capability today.

If you look at the WebSphere plug-in, you will see two unique steps in it. One is called WebSphere Discovery. If you open the plugin.xml file, you will notice that the step has a new element in its definition:

<server:type>AUTO_DISCOVERY</server:type>

This type of step causes special behavior to occur when a new agent is added to a resource in the resource tree.  Every auto-discovery step in every plug-in gets run by the agent when this occurs.  For the WebSphere plug-in, this step looks for the existence of WebSphere by searching for standard WebSphere installation locations.  If it finds it, it creates a sub-resource to represent the WebSphere cell and sets a role property on that resource defining the path to WebSphere found on that machine.
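Conceptually, a discovery step just probes a list of conventional install locations and reports the first one that exists. The sketch below is not the actual plug-in step; the candidate paths are typical WebSphere defaults, but your environment may need a different list:

```shell
#!/bin/sh
# Conceptual sketch of an auto-discovery probe: check conventional
# install locations and report the first one found. Paths are typical
# WebSphere defaults, not an exhaustive or authoritative list.
discover_websphere() {
  for dir in /opt/IBM/WebSphere/AppServer \
             /usr/IBM/WebSphere/AppServer \
             "$HOME/IBM/WebSphere/AppServer"; do
    if [ -d "$dir" ]; then
      echo "$dir"
      return 0
    fi
  done
  return 1  # nothing found; no sub-resource would be created
}

if ws_home=$(discover_websphere); then
  # The real step creates a sub-resource for the cell and sets a role
  # property holding the install path; here we just record it.
  echo "websphere.home=$ws_home" > discovered.properties
else
  echo "no WebSphere installation found" > discovered.properties
fi
```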

Like I said earlier, auto-discovery steps are automatically run when a new agent resource is defined in the resource tree. Auto-configure steps, by contrast, are manually executed on specific resources that have auto-configure roles applied to them.

At a minimum this saves you some typing. But imagine if you have hundreds or thousands of WebSphere servers. This helps ensure that you always have the right WebSphere information for each server.

But there is more you could automatically learn about a WebSphere box. There are WebSphere nodes, servers, etc. that could also be discovered. Or maybe you want to create some things in WebSphere once you know it is there. That is where the other feature, auto-configure, comes in.

Again, there is an auto-configure step in the WebSphere plug-in.

<server:role>WebSphereCell</server:role>
<server:type>AUTO_CONFIGURE</server:type>

The additional step element indicates what type of resource role this auto-configure step can be run on. In the WebSphere case, the previous auto-discovery step identified WebSphere and created a sub-resource identifying the WebSphere cell. You can now run auto-configure on this WebSphere cell resource. However, before you do, you must provide some additional information on the WebSphereCell resource, namely the WebSphere username and password. The auto-configure step then goes out, discovers the full WebSphere architecture, and creates the necessary sub-resources to capture that architecture.
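The flow can be sketched conceptually: the step reads the credentials you placed on the cell resource, queries the topology, and records a sub-resource per server. Everything below is a stand-in (the property names, the sample topology, and the logging are hypothetical); a real step would shell out to WebSphere's wsadmin tool instead:

```shell
#!/bin/sh
# Hypothetical sketch of an auto-configure pass. The cell resource's
# manually supplied properties are modeled here as a file.
cat > cell.properties <<'EOF'
websphere.user=wasadmin
websphere.password=secret
EOF
user=$(sed -n 's/^websphere.user=//p' cell.properties)
echo "authenticating as $user (sketch only; no real connection is made)"

# A real step would query WebSphere with these credentials, e.g.:
#   wsadmin.sh -lang jython -user "$user" -password "$password" \
#     -c "print AdminTask.listServers()"
# Here a canned topology stands in for the query result.
cat > topology.txt <<'EOF'
node01/server1
node01/server2
EOF

while IFS=/ read -r node server; do
  echo "create sub-resource: node=$node server=$server" >> resources.log
done < topology.txt
```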

The last thing present in the plugin.xml file is the set of properties associated with any of the resource roles that your auto-discovery or auto-configure steps will create. These property groups define the properties that will automatically be applied to resource roles when they are created.

In my next post we will talk about the ability to include processes and templates with your plug-in.