Release Management and Software Automation with Jenkins-CI

30 January 2015

I went on a journey to bring in elements of Continuous Integration, Continuous Delivery and Continuous Deployment using Jenkins-CI as the backbone.

Firstly, an introduction to some terminology and my over-simplifications.

  • Continuous Integration (CI) - Elements of extreme programming to continuously integrate branches.
  • Continuous Delivery (CDy) - Automate and improve the process of software delivery.
  • Continuous Deployment (CDt) - Deploy any Build, or any Release, at any time.

Note, CDy and CDt are disambiguation abbreviations I have made up for the purpose of this article.

I have been using Jenkins-CI for many years and have dabbled with other CI solutions such as Go, TFS, AppVeyor and Travis-CI. They are all good at building source code and scale well to do so. Throughout my development experience I have observed many different build conventions (PowerShell, Ant, NAnt, MSBuild), so I never know which method is correct. Despite my contributions, I have never found an adequate answer to: how do organisations scale their applications and automate the delivery of complex web applications?

I work on a monolithic project where the source code is contained in one git repository. This monolithic project encompasses many components such as a database, web applications, utilities, installer scripts and automation tools. One Jenkins-CI job builds everything via an uber Ant script composed of executions of other Ant scripts to build the Java and .NET code, producing a plethora of components.

I am going to describe my journey into the beginnings of decomposing this monolithic project.

Current CI implementation pitfalls

Below are some of the pitfalls I have experienced with the current Jenkins-CI configuration. It's not all negative news since I have learned from my mistakes.

Maintenance and specialism

The maintenance of the delivery platform is a specialist task, and I found that build scripts were mutilated over time to satisfy different build requirements. The result was convoluted Ant scripts calling further Ant and MSBuild scripts, entwined with various adaptations to produce artefacts and reports.

It came to the point where developers no longer knew how to run the build scripts on their local environments.

The Jenkins-CI jobs would be created manually by the team and would not re-use conventions from other job definitions.

Browser Automation

Web applications would require a browser and environment matrix to deploy and execute automation tests against the variety of supported browsers. The current pipeline is coupled to the project and not reusable by new components.

Release Management not automated

At the end of a rigorous test cycle, both automated and manual, a release would be created by a person running a script on their machine to produce release artefacts, which they would place on a network share. These artefacts were also a different set from the ones the build produced.

Virtual machines stagnating

Our Hyper-V server hosts about 12 virtual machines acting as Jenkins-CI slaves that are manually configured and maintained. This is the most expensive pitfall, as it has led to virtual machines running old browsers. We cannot scale to support new framework releases, such as .NET and Java, or newer browsers.

Defining a Common Interface and Experience

A common interface and experience was required for both the people and the CDy system. The tasks available to the developer would be understood and consumed by the CDy infrastructure, powered by Jenkins-CI.

The interface has to have the basic tasks:

  • build: Compile and produce release artefacts
  • test: Run Unit Tests (TestNG, JUnit, NodeUnit, NUnit, XUnit etc)
  • publish: Publishing task which could be platform-specific; NuGet for .NET, sinopia for Node.js.
  • activate: A task that activates the component within the context of the environment and package. For example, a .NET component would configure IIS to host a specific directory; a database would be mounted; SSL certificates would be created and applied.

Continuous Concept

The diagram above shows how I think the defined tasks fit within the various Continuous facets.

People can use these scripts to deploy any release and/or a mix of source code using the activate task. This is not strictly Continuous Deployment, but it starts to provide the means to accomplish that aspiration.

Grunt: A scripted task runner

Throughout my usage of Ant I found that in-line scripts were often created to satisfy bespoke elements of the delivery that could not be achieved using the Ant tasks or library extensions.
I chose Grunt as the task runner that every component should consume, following the task conventions set out above. Being a .NET developer I was quite reluctant to introduce Node.js, since it's not native to .NET; on reflection, however, Grunt has satisfied my aspirations nicely.
Node.js has a very powerful package manager called npm, which allowed me to supplement Grunt with our own and public libraries offering more tasks.

Introducing a Common Automation Pipeline

Jenkins-CI Architecture

For me, automation is more than just running some browser tests. It's about deployment and testing web and desktop applications, allowing you to be confident about the robustness of the application in specific environments.
I created an abstraction that defines the basics of the automation stages of a pipeline.

The concept was to allow individual components to have their own pipeline and release management jobs following a convention I defined.
The automation steps would operate on a package produced in the earlier 'build' pipeline task.
The package has a Gruntfile.js which adheres to some automation task conventions.

I introduced the ability to supply ancillary task arguments to determine the flavour of test being run.

  • test:[flavour]:[group or feature set]. Examples: test:automation, test:automation:admin

This allowed me to define behaviours in the Gruntfile to operate within an automation context and start to introduce the beginnings of a matrix.
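A sketch of how such a task string might be decomposed inside a Gruntfile. Defaulting the flavour to "unit" is my assumption, not a documented convention:

```javascript
// Sketch: splitting a "task:flavour:group" argument string, following
// the ancillary-argument convention above. The "unit" default is assumed.
function parseTaskSpec(spec) {
  const [task, flavour, group] = spec.split(':');
  return {
    task,
    flavour: flavour || 'unit', // "test" alone means plain unit tests
    group: group || null        // optional group or feature set
  };
}
```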

Enabling Developers to manage their jobs

I believe that the core principle of devops is to provide a service to the team. To serve the team, I want to create a set of tools that bridge the gap in CI specialism and empower the team to become self-sufficient. Software automation should be part of the day-to-day business and a breeze to set up.

In an attempt to reduce the specialism required to maintain Jenkins-CI jobs, a new devops package was created to allow developers to create and destroy jobs based on conventional configuration stored within a component's package.json file.

The beauty of the package.json file is that it already contains the project name, a semantic version and repository information. I extended package.json and added traits to support CI and CDy.

{
    "name":"Cols Web App",
    "version":"0.0.1-beta",
    "ci": {
        ".net": {
            "traits":[
                "tests",
                "publish-artefacts"   
            ]
        }
    }
}

These traits are interpreted by the devops package to produce a set of template Jenkins-CI job definitions resulting in a Build Pipeline.
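The devops package's internals aren't shown here, but conceptually it expands the traits into job definitions along these lines. The job-naming scheme below is purely illustrative:

```javascript
// Illustrative sketch: expanding "ci" traits from package.json into
// Jenkins-CI job names. The naming scheme is an assumption.
function jobsFromPackage(pkg) {
  const jobs = [];
  for (const [template, config] of Object.entries(pkg.ci)) {
    for (const trait of config.traits) {
      jobs.push(pkg.name.replace(/ /g, '-') + '-' + template + '-' + trait);
    }
  }
  return jobs;
}

const pkg = {
  name: 'Cols Web App',
  version: '0.0.1-beta',
  ci: { '.net': { traits: ['tests', 'publish-artefacts'] } }
};
```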

Release Management

I support these three release destinations at the moment:

  1. NuGet: An internal NuGet repository for .NET packages.
  2. NPM: An internal NPM repository for Node.js packages.
  3. Artifactory: A repository for our components, products and projects.

By extending the devops package I implemented transient and release artefact strategies. Transient artefacts are short-lived (7 days) and are intended to be available to satisfy downstream jobs. For example, I have an initial job that produces a web archive which is uploaded to the transient location. Downstream jobs retrieve this archive, deploy it using the package's scripts, and execute browser automation tests.
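The transient strategy amounts to a simple retention rule; as a sketch, assuming expiry is measured from upload time:

```javascript
// Sketch of the 7-day transient retention rule described above.
// Measuring age from upload time is an assumption.
const TRANSIENT_TTL_DAYS = 7;

function isTransientExpired(uploadedAt, now) {
  const ageInDays = (now.getTime() - uploadedAt.getTime()) / (1000 * 60 * 60 * 24);
  return ageInDays > TRANSIENT_TTL_DAYS;
}
```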

Software Automation

I find Software Automation quite exciting, diverse and complex. It can range from executing simple unit tests (code behaviour testing) to complex integration tests (Release Deployment and Feature execution against a matrix of environments).

SaltStack configures the Virtual Machines

We use SaltStack to create our flavours of virtual machines, which use the Swarm plug-in to join the Jenkins-CI master.
A virtual machine is created from a base image (e.g. vanilla Windows with service packs) with a Salt minion that configures the machine according to a set of states.
After the Salt configuration, the machine is cloned N times and each clone joins the Jenkins-CI master.

Automation Matrix

To begin supporting a complex matrix, I extended the devops package to introduce an automation trait.

"ci": {
  "template": ".net",
  "traits": [
    {
      "automation": {
        "testCases": [
          { "id": "admin", "name": "Administration", "archive": "Auto.zip", "browsers": ["FireFox", "IE", "Chrome"] },
          { "id": "profileManagement", "name": "User Profile Management", "archive": "Auto.zip", "browsers": ["FireFox", "IE", "Chrome"] }
        ]
      }
    }
  ]
}

The intention is to map testCases to a set of job parameters. The definition above would create jobs called Administration and User-Profile-Management that pull and deploy an archive called Auto.zip from a conventional location based on build environment variables. Auto.zip contains a Gruntfile.js where calling grunt activate deploys the solution; grunt test:automation then executes automation tests in FireFox, IE and Chrome.
In the near future, I anticipate that grunt tasks like grunt test:automation:admin and grunt test:automation:profileManagement will be executed by the respective jobs.
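A sketch of how the testCases definition could expand into the per-browser run matrix. The job and task naming below are my assumptions, matching the conventions already described:

```javascript
// Sketch: expanding "automation" testCases into per-browser runs.
// Job and task name formats are assumptions.
function expandAutomationMatrix(testCases) {
  const runs = [];
  for (const testCase of testCases) {
    for (const browser of testCase.browsers) {
      runs.push({
        job: testCase.name.replace(/ /g, '-'),   // e.g. "User-Profile-Management"
        archive: testCase.archive,
        browser,
        task: 'test:automation:' + testCase.id   // anticipated grunt invocation
      });
    }
  }
  return runs;
}
```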

The Browser Automation Virtual Machines currently run a Standalone Selenium Server. My plan is to extend this to connect to a Selenium Grid and/or connect to cloud services such as BrowserStack and Sauce Labs. The rest is theory for now....

New components are independently managed

Any new components should be source-controlled in an independent repository with a separate development life-cycle, where possible, to avoid growing the monolithic repository. Instead, the monolithic application will reference these new components. This opened up a new challenge: I needed a package-management solution, like Ivy, npm or NuGet, that would assemble our platform components. I will document this solution in due course.

Decomposing the monolithic application

Now that a convention exists for building and testing various components independently, the future looks exciting for decomposing the monolithic application.
While slowly extracting components, I hope we can reduce the monolithic application's overall build time (currently 12 hours). Eventually, this monolithic application will be just a composition of components that are independently tested and released, leading to faster and cheaper maintenance.

Well, that's the plan....
