Part 1 – Node.js application on the IBM Cloud – Cloud Foundry

I am writing blog entries to coincide with my new YouTube videos. I have started a series of videos exploring many of the capabilities of the IBM Cloud. Part 1 focuses on deploying a simple Node.js application to the IBM Cloud using Cloud Foundry. Please watch the video and give me your feedback. CF is still a viable deployment platform, and IBM will continue to provide it as a first-class citizen.

One thing I didn’t point out in the video: the full Cloud Foundry command line is available as part of the IBM Cloud CLI. IBM does not force you into a custom flavor of Cloud Foundry in any way.
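To make that concrete, here is a minimal sketch of what a scripted push could look like. It assumes the IBM Cloud CLI is installed and an API key is available; the app name, region, memory size, and the environment variable name are hypothetical placeholders, and you should check ibmcloud cf help for the exact commands your CLI version supports.

```python
import os
import subprocess

def run(cmd):
    # Echo and run a CLI command, stopping if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Log in with an API key (read here from an environment variable chosen for this
# sketch) and target a Cloud Foundry org/space.
run(["ibmcloud", "login", "--apikey", os.environ["IBMCLOUD_API_KEY"], "-r", "us-south"])
run(["ibmcloud", "target", "--cf"])

# The bundled Cloud Foundry CLI is exposed as "ibmcloud cf", so any cf command works here.
run(["ibmcloud", "cf", "push", "my-node-app", "-m", "256M"])
```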

CF was the foundation of the original IBM Cloud. Everything was built on top of the CF ideas of orgs, spaces, and the Cloud Foundry service broker. IBM ran, and still runs, the single largest Cloud Foundry implementation.

But things have changed, starting with the emergence of Kubernetes and the realization that deploying new CF services around the world is not nearly as quick and nimble as necessary. Deploying new cloud services in new regions of the world requires manual work. With the IBM Cloud growing to new regions and data centers every day, having to manually run procedures and configure things is simply not fast enough.

So IBM made the choice to change the underpinnings of its cloud. IBM went all in on Kubernetes, and Kube is now the foundation of the IBM Cloud. IBM still supports CF, but there is now a Kube layer in the middle that allows IBM to create and offer new services in new regions in a fully automated way. This has drastically improved IBM’s ability to propagate new services around the globe to new regions quickly.

The other challenge that IBM needed to tackle was providing a dedicated Cloud Foundry offering. There is a large set of customers that don’t like the multi-tenant idea when it comes to their cloud platform. They don’t want noisy neighbors, and they want some assurance that they are running their workloads on an isolated platform. This posed a problem. Setting up a dedicated Cloud Foundry instance was a very labor-intensive, manual process. Dedicated hardware needed to be acquired and then a custom implementation of Cloud Foundry deployed on it. IBM did it, and it didn’t scale.

So IBM ventured out to try and containerize Cloud Foundry. This was done a bit ahead of the Cloud Foundry Foundation but has since been embraced as a first-class project within the foundation. Read about it here and here. What does this do for me? It now allows one to spin up a new instance of the Cloud Foundry Application Runtime on top of a Kubernetes cluster by simply deploying a Helm chart. Voilà. We now have a way to offer on-demand Cloud Foundry instances using Kubernetes as the underlying platform. Since we can control all types of networking things with Kube clusters, IBM can now offer dedicated and siloed private implementations of Cloud Foundry running in the public cloud. Go get yourself one here.
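For a sense of what “simply deploying a Helm chart” looks like in practice, here is a minimal sketch. The chart repository URL, chart name, and system domain value are purely hypothetical placeholders; the real chart location depends on which containerized Cloud Foundry distribution you use, and it assumes helm and kubectl are already pointed at your cluster.

```python
import subprocess

# The repo URL, chart name, and system domain below are hypothetical placeholders.
subprocess.run(["helm", "repo", "add", "cf-runtime", "https://example.com/charts"], check=True)
subprocess.run(["helm", "repo", "update"], check=True)

# One "helm install" stands up a complete Cloud Foundry Application Runtime
# inside the targeted Kubernetes cluster (Helm 3 syntax shown).
subprocess.run(["helm", "install", "my-cf", "cf-runtime/cf-application-runtime",
                "--namespace", "cf", "--create-namespace",
                "--set", "system_domain=cf.example.com"], check=True)
```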

On-demand Cloud Foundry has been a game changer. My opinion is that this will become the standard way to get Cloud Foundry in the IBM Cloud sometime in the future.

By the way, this has also allowed IBM to provide Cloud Foundry as an offering on top of the IBM Cloud Private solution. Another key advantage of containerized Cloud Foundry.

Cloud Foundry continues to remain popular with developers. And why not? It is simple and easy, and it removes from developers any reason to think about plumbing. But there is no arguing with the fact that Kubernetes is currently the thing. This explains the Cloud Foundry Foundation’s Eirini project, which allows you to run Kubernetes instead of Diego as the Cloud Foundry orchestration engine.


What is Your Path to the Cloud?

I think it is fair to say that the data center is dead. It was never even alive for newer organizations or startups, as they began in the cloud. But I will wager that every organization, large or small, has a “plan” to get to a cloud-based application runtime model. Let’s take a look at how you might get there.

First, let’s start with the end in mind and look at what we all should be shooting for. As we speak (if you read this in the not too distant future, this may no longer be the case), I do not believe anyone can argue that the North Star of application architectures is a microservices container architecture managed by Kubernetes orchestration. If you haven’t jumped on the container bandwagon, you are too late. The same can be said for Kubernetes as the orchestration choice. It is the de facto winner, and we shouldn’t even waste time with other orchestration options. Kube has won. (That is, until the next big thing; maybe I will have to write another blog entry on serverless sooner than anyone thinks.) And to take advantage of all of the cloud services out there (object storage, DBaaS, streams, AI, blockchain, …), we need to deploy our Kube clusters in the cloud. So, if we know what the target looks like, how do we get there? Let’s take a look at a few paths.

Containers and Kubernetes Inside the Firewall

So you want to build containerized applications and deploy them to a Kube cluster, but you are not comfortable with building your own Kube cluster and integrating all of the open source capabilities you need for a first-class enterprise application platform. Why not start with one already built for you that you can run in-house? IBM Cloud Private (ICP) is your answer. ICP is a pre-built Kubernetes platform with many, many open source capabilities already built in (Helm, Terraform, Prometheus, Grafana, …). ICP has also become the target platform for new deployment models of IBM middleware. WAS, MQ, and DB2 all have versions certified and ready to deploy on ICP. Many other open source components are also available (MongoDB, RabbitMQ, Redis, …). You can deploy on almost any platform (bare metal, VSIs, OpenStack, VMware, OpenShift, AWS, Google, …). You can even deploy Cloud Foundry on top of ICP for your own private CF environment. ICP also comes with Cloud Automation Manager, a Terraform-based deployment capability built in. It includes Transformation Advisor, a tool that helps you analyze existing IBM-stack apps and understand the work needed to “containerize” them. And the newest member of the family, Multi-Cloud Manager, helps you manage deployments across numerous ICP installations. Lots to absorb and lots to consume. But it is all integrated and configured for you. This is a great way to get started.

Private Cloud off-prem?

Maybe you are ready to go to cloud but are not excited about the “public” part of cloud. IBM leads the industry in “private” access to its public cloud. The first step is to create a dedicated connection to the IBM Cloud. This might be a VPN solution or, more likely, a direct-connect implementation. Then, many IBM Cloud services offer the concept of a private endpoint. In AWS, when you create an S3 bucket, you are given a public URL. With IBM’s Cloud Object Storage, you can get a private endpoint that is not accessible via the public internet. You get all of the benefits of on-demand public cloud services but the security of direct access only via your private connection. On the IBM Cloud, you can create your own on-demand Cloud Foundry instance accessible only by your organization. With the IBM Cloud you can keep the “public” out of your solutions.
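As a concrete illustration of the private-endpoint idea, here is a minimal sketch using the IBM COS SDK for Python (ibm-cos-sdk). The credentials and bucket are placeholders, and the private endpoint hostname shown is only an example; in practice you would take the exact values from your own Cloud Object Storage service credentials.

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholder values; take the real ones from your COS service credentials.
COS_API_KEY = "<api-key>"
COS_INSTANCE_CRN = "<service-instance-crn>"

# A private endpoint resolves only over the IBM Cloud private network (for example,
# across your dedicated connection or VPN), so traffic never touches the public internet.
PRIVATE_ENDPOINT = "https://s3.private.us-south.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=COS_API_KEY,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=PRIVATE_ENDPOINT,
)

# Same S3-style API you would use against a public endpoint, different network path.
for bucket in cos.list_buckets().get("Buckets", []):
    print(bucket["Name"])
```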

Lift-and-Shift / Extend to the Cloud

If your primary driver for going to cloud is to get out of the data center business, you have many options to get you there. But let’s address the elephant in the room first. This is not necessarily a cost-savings strategy. Simply lifting and shifting your existing application portfolio to be cloud hosted is not a recipe for cost savings. All of the benefits of containers and Kube are based on agility and speed, and applications that are static and monolithic do not take advantage of the benefits of cloud. But there may be many reasons why getting out of the data center is a driver to the cloud. Hardware refreshes, aging data center facilities, wanting out of an outsourcing contract, etc., are all legitimate reasons.

Lifting and shifting is all about utilizing what you already know. Going the virtual server instance route can be relatively easy if you already have experience with virtual servers, which is almost a given these days. But creating virtual servers and re-hosting an application is time-consuming and error prone, as it involves lots of testing. Cloud VSIs are usually not the same as the ones running your app today. Better yet, if you as an organization use VMware, you can extend your VMware infrastructure into the cloud. The IBM VMware cloud solution allows you to create a VMware environment in the IBM Cloud and simply extend the tools and skills that you already have. No new tools to learn. No new skills to obtain. Continue to do what you do today. And you can use VMware capabilities like HCX to quickly move VMs out of your data center and onto the cloud.

Testing / Development in the Cloud

Maybe you have started your journey to containers, but the projected timeframe to get there is daunting. And you don’t want to continue to invest in internal infrastructure to “support” the existing applications you have that aren’t going away any time soon. IBM Cloud for Skytap might be just what you need. Skytap is an environment-as-a-service offering. You take snapshots of existing running applications. These snapshots include everything, including network configurations. You can then create new instances of these environments on demand. This allows developers to create full environments in the cloud for development and testing without any need for additional resources in your data center. This also gives your cloud-native developers legacy application environments to be able to do their transformation work.

The IBM Focus

The journey to the cloud is more than half the battle. IBM takes a leadership position in helping you get to your ultimate cloud architecture. IBM also takes a very firm stance that multi-cloud will be the norm. Multi-Cloud Manager is the first entry in helping you manage a multi-cloud strategy. Stay tuned for more.

Next time, I will look at the IBM Kubernetes Service, a fully managed Kubernetes platform in the IBM Cloud.


What is the IBM Cloud?

Right off the bat, let me stress that while I do work for IBM, the thoughts and opinions in this blog post, and in my blog in general, are mine and mine alone.

When I think about the IBM Cloud, and that is “big C” Cloud, the entire IBM Cloud portfolio, I think of Shemp or maybe Curly Joe. Zeppo Marx and Cooper Manning also come to mind. So does Daniel Baldwin. If you haven’t figured it out yet, the IBM Cloud is the least known of the cloud siblings. There is no doubt that Shemp and Curly Joe, Zeppo and Daniel all bring unique talents to their respective families. But there is no arguing that Moe, Larry, Curly, Harpo, Groucho, Chico, Peyton, Eli, Alec, William, and Stephen (well, maybe not William and Stephen) are more well known. While Amazon, Microsoft, and Google get the lion’s share of the “off the top of your head” references, the IBM Cloud holds its own when it comes to capabilities. But why don’t you hear about it, except for some very specific reference stories and some very well-done commercials? I wanted to examine some of the realities of the IBM Cloud in this blog post.

First of all, I am not going to put features and capabilities side by side for a grand comparison chart. But what I will say is that there are some areas of the IBM Cloud that shine, and IBM has many success stories to prove it. It is no secret that AWS has a huge lead, a large market share and a large mindshare. Focusing on “catching” them is probably not a smart strategy. Also, much of the standard public cloud (see VSIs) is moving to commodity territory and is a race to the bottom from a price point perspective. If you want to compare pennies per gigabyte hour or number of seconds to spin up a virtual machine, then go ahead, more power to you. But at some point, I would argue that unless you are on a pretty large scale this effort is pointless.

But what I will talk about is focus. First, let’s look at how cloud is defined and get it out on the table to help the discussion. This is not as easy an answer as one would think. Is a cloud determined by its deployment model? Are private, hybrid, and public deployment models all considered cloud? Is a cloud determined by its physical deployment location? Can a cloud be on-premises and off-premises? Does a cloud have to be IaaS or PaaS or SaaS alone, or some combination of all three? Does a cloud have to be based on some type of virtualization, or can it be based on bare metal machines as well?

You would think the technology analyst firms could help us out here. After all, they are in the business of analyzing and ranking cloud vendors. It doesn’t take long to see their differing opinions. The famous IaaS Magic Quadrant from Gartner this year puts IBM in the niche category, yet the Forrester Wave PaaS rating puts IBM a strong second. Analysts don’t see eye to eye on what the important perspective is when looking at cloud vendors.

IBM’s focus on cloud is unique due to the breadth of its customers and the breadth of its overall solution portfolio, much of which falls outside of traditional “cloud.” Therefore, IBM focuses on cloud with a fisheye lens (ultra-wide). IBM sees cloud from all of the angles it can be looked at. I think I can argue that no other cloud vendor offers as much diversity in deployment model, location, and service model, and in combinations of them all, as IBM. You might think that a private cloud has to run on-premises, but that is not true. IBM offers private and dedicated versions of its public services. IBM also strives to provide a “one architecture” perspective. This provides a similar, if not identical, look, feel, and experience regardless of what cloud deployment model you choose. This is much easier said than done, but it is a key strategy for IBM.

Another difference with the IBM Cloud is its focus on open source. Open source is a priority to IBM, as is evident in its participation in numerous open source initiatives going way back to Linux and Eclipse. I believe the difference for IBM is that it makes a concerted effort to not vendor-lock a customer. That doesn’t sound right based on IBM’s proprietary solution success stories. But when it comes to the cloud, IBM wants its customers to know that an investment in targeting IBM’s cloud should not prevent them from moving that solution off of the IBM Cloud if they so choose. Let’s look at a few examples. IBM’s Bluemix was launched in June of 2014 as a managed platform-as-a-service offering. But IBM did not invent its own platform; it instead offered a managed Cloud Foundry implementation, choosing to compete with other ways of deploying Cloud Foundry and betting that it could build a business offering a public managed version. A slightly different approach was taken with serverless computing. IBM open-sourced its OpenWhisk serverless platform so that customers could deploy their own on-premises version if they were not happy with the IBM experience. The same approach was taken with blockchain and the Hyperledger project.

Another way to look at the IBM strategy is via the continuum of a customer’s journey to cloud. IBM approaches the cloud discussion with not just a target to shoot for but a path to get there. Not everyone is ready to move to a cloud-native, container-based application architecture. Many of IBM’s customers strive to be there someday but need to get there in a way that does not upset their existing application base. A “typical” driving force to move to cloud is the desire to get out of the data center business. But this does not have to involve a huge IT transformation. By deploying VMware in the IBM Cloud, customers can operate their data center in the exact same way they do today; the VMware environment is simply hosted in the IBM Cloud instead. Customers may be ready for their first Kubernetes-based containerized app deployment, but they aren’t ready for it to be off-prem. Customers can get the full Kubernetes experience with IBM Cloud Private (ICP) on-prem and then move to a cloud-based Kubernetes cluster offering when ready. Customers may even be ready to host their first public cloud application but are not comfortable with moving their corporate data off-prem. This is one aspect of the hybrid cloud model, and IBM fully embraces this paradigm, offering many ways to enable hybrid applications (cloud to/from data center, cloud to cloud, etc.). IBM also readily admits that most enterprises will not standardize on a single cloud vendor but will instead take a multi-cloud approach. IBM’s Cloud Automation Manager and Multi-Cloud Manager, as part of its ICP capabilities, are beginning to offer solutions to govern and manage this strategy using a single pane of glass.

One could argue that casting such a wide net creates weaknesses in some aspects of its cloud offerings where other vendors focus all their efforts. Such is the life of IBM. IBM has a truly unique perspective to this problem and an even more unique customer base that it must keep happy throughout this “journey to the cloud.” But what I do know is that when given the opportunity IBM usually is able to compete and many times win. Stay tuned to this blog as I examine more aspects of the IBM Cloud.

AI, Machine Learning, Data, Cloud, oh my

I am picking up this blog again and have lots to say. The world has changed so much, both career-wise and technology-wise. Things change so fast, and IBM is all in on cloud, data, and AI. Yes, there is a need to preserve the legacy on-prem investment that customers have made, but no one can argue that cloud is the future. Getting there is the fun part.

I hope to share my observations, thoughts, rants, and challenges with this blog and hope to start good conversations.

Stay tuned.

UCD, UCD+P, ICO, and PureApp

As I mentioned in my last post, lots of changes are happening. I have continued to change my role and have now landed as a cloud advisor. I am excited, but enough about me.

What my new role has afforded me is the chance to continue to explore and understand the various IBM cloud software solutions out there. It is an interesting landscape, and it is changing faster than ever. This post delves into IBM cloud provisioning and deployment solutions but leaves the on-premises/off-premises concept OFF the table. For the most part, this discussion is concerned with the ability to automatically stand up an application environment regardless of its physical proximity to the developer. So before I even dive into specifics, we can talk about some general capabilities. The following list is not exhaustive; the products I talk about are capable of much more. These are just the capabilities that I am interested in for this post.

For the purpose of this post, the definition of the deployment stack, shown here, is the sum of all the layers of technology that need to be created in order for a full application to execute.

(Diagram: the deployment stack, from infrastructure up to the application.)

Provisioning – As I said before, in this post I am interested only in the ability to automatically stand up new environments. This concept also assumes some type of pattern or infrastructure-as-code concept to enable the automation.

Deployment – For the sake of this post, deployment refers to the automated deployment of an application including its specific application configuration applied to a provisioned environment.

GO LIVE tasks – Those tasks that must occur for an application to GO LIVE that are above and beyond simply provisioning an environment and deploying the application. Tasks such as obtaining the appropriate approvals, ensuring the app is properly monitored, making sure backups are in place, applying the proper security and endpoint compliance policies, setting up notifications for when things go wrong, etc. These important tasks are part of every operations team’s set of responsibilities and have a large impact on production deployments.

Pattern – The ability to capture some part or all of the stack definition in a reusable artifact. There are two pattern capabilities that we will talk about: vSYS (virtual system) patterns and HEAT.

Let’s now take a look at the IBM tools currently in this space. Big disclaimer here: there are many, many additional solutions in the IBM portfolio, many of which are highly customized and include a services component. These types of solutions are highly desirable in a hybrid cloud scenario where you need brokerage services not only to serve the line of business but also to manage the provisioned environments across your hybrid landscape and to manage cost and chargeback across that same landscape. There are outsourced solutions from IBM that target unique cloud platforms. For the purposes of our conversation here today, we assume we have an IaaS cloud solution that we want to take advantage of. In lieu of a big description of each product, I will simply list the capabilities that it provides (there are many more, but these are the ones of interest for this post).

IBM Cloud Orchestrator – provisioning, patterns, deployment, go live tasks, BPM

UrbanCode Deploy – deployment

UrbanCode Deploy with Patterns – provisioning, patterns, deployment

PureApplication – provisioning, patterns, deployment

So that doesn’t help. Lots of overlap and obviously lots of details under the covers. Each product does a thing or two very well, so let’s look at the list again and I will expand on each capability highlighting its strength(s).

IBM Cloud Orchestrator

provisioning – ICO currently can provision nodes based on both types of patterns. ICO can provision to many virtual environments, both OpenStack-based and not.
patterns – ICO currently supports two pattern technologies: HEAT and vSYS (virtual system). The vSYS patterns are the legacy pattern type. HEAT patterns are based on OpenStack and therefore require an OpenStack implementation. ICO has full editing capabilities for vSYS patterns; however, ICO does not provide an editor for HEAT patterns.
deployment – while ICO doesn’t have a separate deployment capability, you are able to build application components and Chef scripts into your vSYS patterns that can ultimately deploy applications as part of the provisioning process. However, this capability is not very scalable, which is precisely why deployment tools like UrbanCode Deploy were created. HEAT patterns, as defined by OpenStack, do not contain deployment-specific capabilities (more details below).
GO LIVE tasks – ICO has a large list of pre-configured integrations to common operations tools to manage your go-live tasks.
BPM – ICO has a robust BPM engine allowing you to craft a detailed process that can be initiated through a self-serve portal. This allows you to string together your provisioning, deployment, and GO LIVE tasks into a single user-driven process.

UrbanCode Deploy

deployment – UCD’s strength is application deployment including environment inventory and dashboarding.

UrbanCode Deploy with Patterns

deployment – UCD+P includes UrbanCode Deploy and relies on it to deploy application components.
patterns – UCD+P is a full HEAT pattern visual/syntax editor. UCD+P has also incorporated HEAT engine extensions that allow the HEAT pattern and the engine not only to provision nodes but also to execute Chef recipes from a Chef server and to deploy applications using UrbanCode Deploy. The resulting HEAT pattern is truly a full-stack representation, as depicted in the stack diagram above, captured in a single pattern artifact.

PureApplication

provisioning – PureApplication software has the ability to provision nodes (I will leave it at that without going into all the different flavors. For the purpose of this discussion we are interested in PureApplication software that manages PureApplication systems.)
patterns – PureApp has numerous vSYS patterns that help to automate the installation and configuration of IBM middleware. The provisioning engine is robust and can orchestrate the configuration of multiple nodes that would make up an IBM middleware topology.
deployment – in the same sense as ICO, you can add application deployment information into your patterns but the same limitations apply.

So if we cherry pick the best capabilities out of each tool, we would grab the go-live tasks and BPM from ICO, the app deployment from UCD, the HEAT pattern editing and HEAT engine extensions from UCD+P, and the IBM middleware patterns from PureApp. There, we are done. Maybe at some point in the future this will be a single PID that you can buy. But until then, is it possible to string these together in some usable way?

Again, a big disclaimer here. I am not an expert on this entire stack, but I hope to drive conversation.

OK, to begin, let’s take advantage of the BPM capabilities of ICO and drive everything from there. The BPM capabilities allow us to construct a process that executes the provisioning tasks and the go-live tasks with the appropriate logic. You can envision a self-serve portal with a web page that asks for specific information for a full-stack deployment: things like the app, the type of topology to deploy the app on top of, the name of the new environment, etc. ICO then needs the appropriate pattern to provision from. Here is where ICO can “grab” the HEAT pattern from UCD+P via an integration. It will then execute the provisioning via the OpenStack HEAT engine. This HEAT engine must have the UCD+P HEAT engine extensions applied to it. Since these extensions contain application deployment capabilities, the provisioning process will also utilize UrbanCode Deploy to deploy the application components to the appropriate provisioned nodes based on the HEAT pattern. The BPM process can also call the appropriate operations products to execute the go-live tasks either before or after the provisioning step in the process. Whew!!

So what is missing? It would be great to take advantage of the PureApp IBM middleware patterns, which are bar none the best and most robust installation/configuration patterns available. A general solution here would be to include the appropriate Chef recipes as part of your HEAT pattern to get the middleware installed and configured, and for non-IBM middleware solutions this is your best bet. But there is a lot of orchestration involved in setting up WebSphere clusters, for example, that is not easily accomplished using Chef or Puppet. PureApp has this capability in spades. The best practice in using PureApp today is to use UrbanCode Deploy to deploy the app to a PureApp-provisioned environment, as the patterns in PureApp are not HEAT-based. A lot has been invested in the PureApp patterns, and what the future holds for other pattern technologies here is uncertain today. It is important to know that PureApp is the preferred solution when it comes to provisioning systems that will utilize IBM middleware.

This is the story so far and I am sure I have holes in my description above. I welcome any feedback and experiences.

UrbanCode Deploy and SmartCloud Orchestrator (extended addition)

It has been a while since I posted on the UrbanCode Deploy and SmartCloud Orchestrator integration to provide a self-service portal for one-click environment provisioning and application deployment. Part 3 finished with the idea that a simple generic process can be called to drive this entire effort. You can use the Self Service Portal of SmartCloud Orchestrator to provide the user interface for all of this.

I also brought up in Part 3 that there are some details that need to be worked out in order to make it a reality. At the time I did not have access to a system to work through those details. Since then I was able to meet with a team that has made it happen. I want to share some of those details for those that are interested.

If we look back at the generic process that was proposed in Part 3, there were three steps. Step 1 creates the environment from the blueprint. A few issues exist with the default “create environment” step. First, you may have more than one cloud connection. There needs to be a way to specify a cloud connection in this step. By all means you can specify a default connection, but there needs to be a way to distinguish between cloud connections. The easiest way to do this is to find the UUID of the cloud connection and use that when specifying the connection.

The other area that is not covered is node properties. Each node in your pattern may require property values. This is totally dependent on the pattern designer and how much flexibility is provided by the pattern. There are those who might argue that you should limit the variability of patterns, but that requires a unique pattern for each variable combination. Either way, there is most likely a need to specify node properties.

The easiest way to do this is to use a JSON formatted string to specify the node properties. There is a CLI entry called “getBlueprintNodePropertiesTemplate” that will return a JSON template that can be used to specify the required node properties for a given blueprint. Use this template as the basis for passing node parameters to the provisioning process.

To make all this happen, there is a REST API PUT call that provisions environments. Its URL looks like this: “/cli/environment/provisionEnvironment”. It takes a JSON input that specifies the application name, the blueprint name, the new environment name, the connection, and the node properties, among other things. It makes sense to me to create a new plug-in step that looks similar to the existing “create environment” step but adds two additional properties: the cloud connection and the node properties (in JSON format). You may need to get creative in how you populate the node properties JSON string. Since you can potentially have different node properties for different blueprints, you may need to work with your pattern designer to make things consistent. This is again where good cooperation between deployment and provisioning SMEs makes sense. If you want to expose any of these variables to the end user, you will have to make that part of the Self Service portal on the SCO end and pass those choices into the generic process as process properties.
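To make that concrete, here is a rough sketch of what such a plug-in step (or a standalone script) might send, using Python and the requests library. The URL is the one named above; the JSON field names, node-properties structure, server URL, and credentials are assumptions for illustration only. In practice you would start from the output of getBlueprintNodePropertiesTemplate and the UCD REST documentation for your version.

```python
import json
import requests

UCD_URL = "https://ucd.example.com:8443"   # placeholder server URL
AUTH = ("admin", "password")               # placeholder credentials; prefer a token

# Field names below are assumptions for illustration; the node properties would
# normally be filled in from the getBlueprintNodePropertiesTemplate output.
payload = {
    "application": "MyApp",
    "blueprint": "three-tier-blueprint",
    "environment": "dev-env-001",
    "connection": "9f2b1c3a-0000-0000-0000-cloudconnuuid",
    "nodeProperties": {
        "WebNode": {"flavor": "m1.small", "image": "RHEL7"},
        "DbNode": {"flavor": "m1.medium", "image": "RHEL7"},
    },
}

resp = requests.put(
    UCD_URL + "/cli/environment/provisionEnvironment",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
    verify=False,  # UCD servers often run with self-signed certificates
)
resp.raise_for_status()
print("Provisioning request submitted:", resp.text)
```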

The next step in our generic process is to Wait for Resources. While this step is easy in principle, it breaks down big time when we get to reality. Each environment can have more than one node. Even if you use the agent prototype name pattern, you still will have trouble determining the agent names. The existing “wait for resources” plug-in step requires you to type in the resources to wait for. This does not lend itself to a dynamic and generic provisioning process.

The best approach here is to write a custom plug-in step to wait for the resources of the specific new environment that you are provisioning. And you will probably have to extend the REST client to add some additional methods. The first step is to get a list of all the resources that you need to wait for. You can use the “/rest/resource/resource” REST API GET method to get a JSON of all resources. You will have to parse this using the environment name (and potentially the base resource) to get all of the resources that are part of the environment. Once you get that list, you can use the “cli/resource/info?resource=” REST API GET call to retrieve a JSON of the resource status. If the “status” field shows “ONLINE”, then your resource is up and running. Another property you may want to create for this plug-in step is a wait timeout value. Waiting forever doesn’t make much sense, and you would like to know if things go wrong. Building in timeout logic will ensure you get some notification back whether things go well or not.
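Here is a rough sketch of what the core of such a custom step could look like in Python. The two endpoints are the ones named above; how the environment name shows up in the resource records (I match on the resource path here), along with the server URL and credentials, are assumptions that would need to be adjusted to your actual resource tree.

```python
import time
import requests

UCD_URL = "https://ucd.example.com:8443"   # placeholder server URL
AUTH = ("admin", "password")               # placeholder credentials
VERIFY_TLS = False                         # UCD servers often use self-signed certificates

def wait_for_environment(env_name, timeout_secs=1800, poll_secs=30):
    """Wait until every resource belonging to env_name reports ONLINE, or time out."""
    deadline = time.time() + timeout_secs

    # Get all resources and keep the ones that appear to belong to the new environment.
    # Matching on the resource path is an assumption; adjust to your resource tree.
    all_resources = requests.get(UCD_URL + "/rest/resource/resource",
                                 auth=AUTH, verify=VERIFY_TLS).json()
    targets = [r["path"] for r in all_resources if env_name in r.get("path", "")]

    while time.time() < deadline:
        statuses = []
        for path in targets:
            info = requests.get(UCD_URL + "/cli/resource/info",
                                params={"resource": path},
                                auth=AUTH, verify=VERIFY_TLS).json()
            statuses.append(info.get("status"))
        if targets and all(s == "ONLINE" for s in statuses):
            return True
        time.sleep(poll_secs)

    # Failing loudly is the whole point of the timeout property.
    raise TimeoutError("Resources for %s did not come ONLINE within %s seconds"
                       % (env_name, timeout_secs))
```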

The final step is the Run Application Process step. We should be able to use the default one here.
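If you ever want to drive this last step from outside the canned plug-in, the equivalent application process request over REST looks roughly like the sketch below. The endpoint and field names follow the UCD CLI/REST conventions as I remember them and should be treated as assumptions to verify against your server’s documentation; the application, process, environment, and version values are placeholders.

```python
import json
import requests

UCD_URL = "https://ucd.example.com:8443"   # placeholder server URL
AUTH = ("admin", "password")               # placeholder credentials

# Endpoint and payload shape are assumptions to verify against your UCD version.
payload = {
    "application": "MyApp",
    "applicationProcess": "Deploy",
    "environment": "dev-env-001",
    # Either name specific component versions, or reference a snapshot instead.
    "versions": [{"component": "web", "version": "1.0.42"}],
}

resp = requests.put(
    UCD_URL + "/cli/applicationProcessRequest/request",
    auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
    verify=False,
)
resp.raise_for_status()
print("Deployment requested:", resp.text)
```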

I am hoping to post all the code at some point for this solution, but until I get approval you will have to be happy with what is here. I hope this provides some additional ammo for making that self service portal a reality.

UrbanCode Deploy and SmartCloud Orchestrator (Part 3)

In Part 1 we connected to the cloud provider and created a resource template from a virtual system pattern. In Part 2 we created an application blueprint from the resource template and mapped the application components to the blueprint. We then created a new application environment from the blueprint and the new virtual system was created. Once the environment is created and the agents come online, you can now execute your normal application deployment process onto the newly provisioned environment.

In Part 3, we will go one step further and explore what it would take to satisfy the ultimate goal of providing a true self-service environment creation mechanism for developers. Let’s take a closer look at the use case.

The promise of cloud is that you can have readily available systems at the drop of a hat, or at least much, much faster than ever before. As a developer, I have some new code that I want to test in an isolated environment (we will explore some of the subtle but challenging details behind this idea at the end). It would be awesome if I could go to some portal somewhere and request that a new environment be provisioned and my application deployed to that environment, without any need for knowledge of SCO or UrbanCode Deploy. Well, this capability exists today.

To begin with, SmartCloud Orchestrator has a robust business process engine that allows you to create self-service capabilities with no need to understand what is under the covers. I have no experience in this but have seen the results. You can create processes and human tasks that can be executed from the SCO website. You then categorize your various self-serve processes.

The good part about this is that you have at your disposal a full development environment and run-time that can utilize existing programming concepts. Of course we will have to take advantage of the UrbanCode REST API or command line to be able to drive a deployment process.

Before going on, I want to confess that I have not had the opportunity to get this entire flow working from A to Z. I haven’t had access to an SCO environment or enough free time to make this all work. However, I am putting it out there because I believe this is doable.

In order to satisfy our desire to have a fully provisioned environment with an application deployed, we need to set up a process that can do the job. We can use a generic process to get our work done. There is a REST API call that can kick off a generic process, and therefore our SCO self-service process can use it to drive everything. In principle, our generic process can look something like this:

(Diagram of the generic process: create the environment from the blueprint, wait for resources, then run the application process.)


The first step is to provision the environment. This step requires the environment name, the application name, and the blueprint name. These must be passed into this process, and therefore you need to make process properties that can be referenced by this step. NOTE: When we provisioned an environment using the UrbanCode Deploy GUI, it asked us for information that the cloud provider needs. I am not sure how that info is passed here. There is a new command line option called getBlueprintNodePropertiesTemplate, and its description says that it “returns a JSON template of the properties required to provision a blueprint.” This would need to be used and populated to ensure that all of the information is passed to the environment creation process. You might need to extend the create environment step to interrogate the blueprint, get the necessary properties, and ensure they are all populated. If anyone out there has tried this, let me know what you find.

The other challenge we have here is that we need to ensure the environment name is unique. There is an option in this step to ensure that the environment name is unique; the plug-in step simply appends a random string to the end of the environment name. This poses a problem for the next step.

Step two is to wait for the environment to be provisioned. We need to wait for the resources (agents) that will come online once the provisioned nodes are spun up. If you remember, the agent names will follow a pattern. However, if we allow the previous step to make the environment name unique, we will not be able to predict the agent names. Therefore, our self-service call to this process needs to specify the environment name and ensure it is unique.

Secondly, we need to somehow determine how many agents to wait for and their exact names. This will be a challenge, and as of right now I am not sure how I would solve it. This would most likely require a new plug-in to be able to interrogate a blueprint, get the names of the agent prototypes, and then construct the list of agents to wait for. Again, some plug-in development is required here.

Once the agents have come up, we can deploy our application. This step is easy enough, and we can call a well-known deploy process for the application to do the installation. But there is another challenge here. Deployments can either be done using a snapshot, or you have to specify the versions of each component. Snapshots are easy, but if we remember back to our original idea, a developer has some new code that he/she wants to test. Typically, snapshots are not created until a group of components have been tested together. So we have a choice. We can either have the environment provisioned from an existing snapshot and then manually add our own updates to test, or we have to provide some mechanism to update/copy an existing snapshot to include a baseline plus the new stuff. This could take on lots of possibilities, but a thorough examination of the use case would be required in order to develop a solution. This also may not be the use case we care about.

One additional solution would be to go a much more custom route and write an application that does this instead of relying on existing plug-in steps. The REST API and command line API are very rich, and we can ultimately get to and produce all of the information we need. But it is nice to rely on existing capabilities, and UCD processes are much more robust and flexible than a custom solution. Still, as we have seen above, there are enough nuances in this effort requiring plug-in extensions or new plug-ins that it might make sense to go the fully custom application route.

Happy self-service!!! Let me know if anyone takes on this challenge.