A Practice for Building Your DevOps Delivery Pipeline
DevOps has become increasingly popular. It can be seen as a combination of agile and lean practices applied to improve software delivery. Searching for DevOps does not lead to a single, definitive answer: many different practices for building a DevOps pipeline can be found. In this article, we present the one we use at MeU Solutions.
A delivery pipeline in DevOps is the sequence of operations, and the tools that perform them, between our source code and our deployment system. No two pipelines look the same; a pipeline can mix open source and commercial products. However, you can still build your own entirely with free tools, as this practice at our company shows.
The following picture illustrates our pipeline in practice.
The pipeline starts with a core component: a version control system. There are many tools you can use to build and manage your software artifacts; at MeU Solutions, we use Git, delivered through GitHub. The next component is a build system, which covers the practices that matter for Continuous Integration (CI). We put this into practice using Jenkins, a popular open source tool. Next, we move into handling artifacts, using Nexus as our artifact repository. Our software then goes through a testing process comprising several variants, including unit tests, integration tests, and end-to-end (E2E) tests. We will not go into the details of each testing type here; instead, we concentrate on how to test our application from the inside and the outside. When the application is tested and ready to release, it is deployed via Chef. The flow above is not always linear: each stage is a feedback loop, giving us the ability to apply changes to our software as it adapts to new requirements. The following CI flow diagram illustrates how all the teams in a project interact through this process.
Version Control in Practice: The use of version control by both operations staff and the development team is one of the key differences in a DevOps environment. Version control contains all of the source code and manages the current and historical state of your code base. In past years, version control was a tool used only by the development team; for operations organizations, it simply was not a common practice. Now it is a must for that team too. We use version control for all project assets, even build scripts and cron jobs, with descriptive commit messages living alongside them that help us understand what is happening to the code base. One good version control practice in a DevOps environment is to never commit broken code (code that fails tests or won't build) into the code base. You can set up hooks to do things like run tests locally before a commit reaches version control. For example, a good practice our team uses is to create a pre-commit hook with a script that runs the unit tests and then specific formatters and linters; any issue found prevents Git from committing. Forcing some amount of pre-commit testing helps you catch mistakes early and avoid breaking the build.
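As a concrete illustration, such a hook can be a short Python script installed as `.git/hooks/pre-commit`. This is a minimal sketch, not our actual hook; the command names below (pytest, flake8) are placeholders for whatever test runner and linter your project uses.

```python
#!/usr/bin/env python3
"""Sketch of a Git pre-commit hook: run checks, block the commit on failure.

To use it, save an executable copy as .git/hooks/pre-commit and end the file
with `sys.exit(run_checks(CHECKS))`. The commands below are placeholders.
"""
import subprocess

# Each entry is a command that must succeed before a commit is allowed.
CHECKS = [
    ["pytest", "--quiet"],  # unit tests
    ["flake8", "."],        # linter / formatter checks
]

def run_checks(commands):
    """Run each command in order; return 0 if all pass, 1 on the first failure."""
    for cmd in commands:
        try:
            result = subprocess.run(cmd)
        except FileNotFoundError:
            print(f"pre-commit: command not found: {cmd[0]}")
            return 1
        if result.returncode != 0:
            print(f"pre-commit: '{' '.join(cmd)}' failed -- commit blocked")
            return 1
    return 0
```

Because Git aborts the commit when the hook exits non-zero, a failing test or lint error never reaches the shared repository.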
Continuous Integration (CI): There are many options for building your own CI system. You can use an open source tool such as Jenkins, a commercial product, or even CI as a service such as CircleCI or Travis CI. CI does not refer to a tool; it is a practice that comes with a set of principles. At MeU Solutions, we run Jenkins in a Docker container for ease of use.
Packaging & Artifact Management: Building and packaging our application and source code brings many benefits for reliability, composability, security, and shareability. Arguably the most important reason to have packaging and artifact management in place is to ensure that what we have tested is exactly what goes to production or is distributed to the client. Packaging also provides dependency management, both between our code and its dependencies and between it and other pieces of software; formats such as RPM or deb support dependency definitions. For example, at MeU we package our JAR files, Puppet code, rule files, and infrastructure definitions into debs, so we can manage them all in the same way. You can also create multiple layers of artifacts: we put JARs into debs, then build VMware images from those debs. These VMware images are new artifacts that we deliver to the client. One beneficial aspect of packaging management is security: we don't want to expose everything in our source code repository to the production servers. A good practice is to let production deployments come only from the artifact repository, and to allow only your CI system to write to the artifact repository. At MeU, we use Nexus to publish the software out of Jenkins.
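One simple way to enforce "deploy exactly what you tested" is to record a checksum of each artifact when CI publishes it, and verify that checksum again at deploy time. The sketch below is illustrative only and assumes the digest recorded by CI is passed to the deploy step out of band (for example, alongside the artifact in the repository).

```python
import hashlib

def artifact_digest(path):
    """Return the SHA-256 hex digest of an artifact file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """Refuse to deploy an artifact whose digest differs from what CI recorded."""
    actual = artifact_digest(path)
    if actual != expected_digest:
        raise ValueError(f"artifact {path} changed after CI: {actual}")
    return True
```

Repository managers such as Nexus already store checksums for published artifacts; the point of the sketch is simply that the deploy step should compare against them rather than trust whatever file it finds.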
Testing in DevOps Practice: When we talk about testing, we usually think of it as finding bugs. Actually, it is more subtle than that: testing is about making informed decisions and giving feedback about quality to the other stakeholders in the project. In a DevOps environment, we want our testing to be fast, reliable, and able to isolate failures, so we can shorten and amplify the feedback loop. Leaking bugs is always an issue, but sometimes (depending on the kind of application) it matters less than cycle time. Many testing types can be applied for different purposes. At MeU Solutions, we rely on unit testing and on exploratory testing for E2E. Sometimes we use FindBugs, Fortify, and Gauntlt to identify flaws in the source code.
Unit testing is the cheapest and easiest practice to implement. We try to cover as much of the code base as possible with unit tests, with 100% coverage as the target and 70% as the minimum. We also make sure unit tests are kept up to date and not neglected.
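A unit test in this setting is just a small, fast check on one function. The example below is generic (the function under test is invented for illustration); a coverage threshold like the 70% minimum above can then be enforced in CI with a tool such as coverage.py (`coverage run -m pytest` followed by `coverage report --fail-under=70`).

```python
import unittest

def normalize_version(tag):
    """Strip a leading 'v' from a release tag, e.g. 'v1.2.0' -> '1.2.0'."""
    return tag[1:] if tag.startswith("v") else tag

class NormalizeVersionTest(unittest.TestCase):
    def test_strips_leading_v(self):
        self.assertEqual(normalize_version("v1.2.0"), "1.2.0")

    def test_leaves_plain_versions_untouched(self):
        self.assertEqual(normalize_version("1.2.0"), "1.2.0")
```

Because such tests run in milliseconds and need no environment, they are the natural first gate in the pipeline, both in the pre-commit hook and in the CI build.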
Using Behavior-Driven Development (BDD) is a good testing practice in DevOps. The benefit of BDD is that tests are written in a simple, end-user-behavior-centric language. In another article, we will present how we use Cucumber as a BDD tool in the DevOps manner at MeU Solutions.
In the DevOps manner, we try to automate as much as possible of the most common workflows and free up our testers to dive into exploratory testing. Using mind maps to visualize the workflow and the deployment strategy helps testers identify and prioritize the key viable functions required for the product. Unlike traditional testing, where testers just run predefined scripts generated from given requirements, exploratory testers in DevOps must use all the skills and tools at their disposal. They combine testing techniques, heuristics, test models, lateral and critical thinking, and tools ranging from simple ones (Excel, Python scripts, the console) to complex ones (AI tools, Burp Suite, ZAP, ...) in their test sessions. For more about exploratory testing practices, you can refer to our series of "Effective Exploratory Testing" articles.
Deployment System in Practice: With a system that automatically deploys our artifacts, we can make delivery continuous and hook one stage to the next. You might deploy to a local build server for integration tests, then to a testing environment for further integration, end-to-end, and acceptance testing, before deploying to production. Make sure you deploy exactly what you built and tested; don't build it again, use a different artifact, or use one mechanism for test and another for production. At work, we use one build that gets targeted to many platforms: we create AWS images, VMware images, and so on. We bring our software to the build server to run the tests, and when everything passes, we deploy it to all those platforms and retest there. At the end of every deployment, we run a smoke test to verify that the application is up and working. We use One2Automate (our test framework) or Robot Framework to run UI tests, and we ask our developers to tag tests with the tag `smoke` if they are safe to run in production without messing up our test data.
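The post-deployment smoke check can be as simple as hitting a health endpoint and checking the status code before declaring the deployment done. A minimal sketch, assuming the application exposes a `/health` path (an illustrative choice, not a detail of our actual services):

```python
import urllib.error
import urllib.request

def smoke_check(base_url, path="/health", timeout=5):
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, or non-2xx status.
        return False
```

A deployment script would call `smoke_check("https://myapp.example.com")` right after rollout and trigger a rollback (or page someone) when it returns False.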
There are many systems you can use for deployment. We use Chef as an efficient tool in our own process. Chef is an open source cloud configuration management and deployment application that helps us orchestrate servers in a cloud or simply in a departmental data center. Chef brings our developers and our system administrators together: we don't have to wait for operations employees to figure out how to deploy our software when the development team makes a release. Instead, Chef moves the process into a continuous delivery model by enabling an effective, automated workflow. For instance, we deploy our system on AWS; when a request for enhancement is made, we simply clone the existing platform into a test environment, shortening the time we would otherwise spend setting up servers or clusters by hand. We can also deploy new updates to our clients and get immediate feedback. If an update doesn't work, we don't need to roll back manually; instead, we use Chef to automatically roll all our clients back to the old production version.
Moving to DevOps requires a change in both culture and mindset. With DevOps, all teams are brought together in one process, strongly supported by a series of tools and utilities, with the ultimate goal of optimizing both the productivity of the development team and the reliability of operations. Teams strive to communicate frequently, increase efficiency, and improve the quality of the services they provide to customers. All teams in DevOps share the same goal and have to view the entire development and infrastructure lifecycle as part of their responsibilities.
In the next articles, we will show you how to test an application in DevOps, and walk through a practical example with Cucumber, XRay, Jenkins, PhantomJS, and Ruby integrated together to deploy in your own DevOps pipeline.