Developing and maintaining software has come a long way since the days of punch cards on mainframes. Where getting feedback on new code could once take days or weeks, we’re now able to incrementally deliver value each day, whether in traditional software development or in newer areas such as machine learning implementations and data products. However, not all organizations are there yet, and many still face recurring challenges.

A common software delivery cycle

Collaborating on the same code base has become much easier than maintaining punch cards in the 1960s. Source control tools have become commonplace and are the de facto standard in modern technology stacks. Even legacy systems such as mainframes are being brought into the fold as part of modernization efforts. Multiple developers can work simultaneously on different features, selectively including code into specific versions of the application. Often, this new development is already being tested as part of this process, either manually or, if you’re lucky, through some automation as part of the first steps towards continuous testing. Through this ‘shift left’ in the testing cycle, issues are detected earlier and at a lower cost, but the occasional incident still needs to be reactively resolved when a ticket is logged. Depending on the organization, these steps can involve actions by one or multiple teams.

Sounds familiar? Maybe the next part does too…

Pushing the limits

Agility has become the norm these days, regardless of your IT delivery model. To compete in today’s markets, businesses have an immense need for flexibility, and their IT needs to support this, no matter where they fall on the waterfall-agile scale. Change requests or new initiatives can be launched or dropped at any time and must be executed rapidly to maximize their impact.

Internally, IT also needs to be able to respond quickly and stay relevant by leveraging the newest technologies, while operating its current stack efficiently and securely.

This agility puts a large strain on the collaboration model described above. During development, multiple teams may be implementing changes as part of the scope requested to reach these goals. Some of these changes can be developed independently, but others require alignment on integration and collaboration in planning. Monolithic applications, where every release is an all-or-nothing code deployment, push this to the extreme: all developed code must be stable before anything ships, so even a small issue in one scope item can block the delivery of numerous features. Delays of multiple days or even weeks are not unheard of in this case.

Even if you’re lucky and can deliver modular releases, you may end up fighting your own flexibility. When delivering value in many small increments, as agile encourages, the activities required to prepare each delivery can consume significant time and resources, delaying actual development work. Any manual activity must be repeated over and over again for each build, and as with any human work, manual activities tend to be error-prone. Investigating and resolving the resulting issues adds further overhead without delivering any added value.

Once your delivery is ready, the next question is how and when to deploy your package. Depending on your situation, you may face time and/or resource constraints. When you’re finally able to deploy, you’re crossing your fingers over the subsequent tests: you hope they catch any new issues, while also hoping there are none to catch. It’s better to detect defects during testing than in production, but any bugs found mean the whole cycle needs to be repeated to resolve them.

A lost cause?

If none of the above sounds familiar, your organization may be very mature in its software delivery and the content of this blog post will not be news to you. However, if any of it brings up memories of recent experiences, you may be wondering whether efficient software delivery is a lost cause, or whether there is anything we can do to improve it.

The central idea for tackling these challenges is to address them continuously, as soon as they occur. The same philosophy applies to both the integration of new code and its deployment. The umbrella acronym CI/CD describes the combined practice of Continuous Integration and Continuous Delivery (or Continuous Deployment). As mentioned above, the challenges occur both during the development phase and during the operation of the software. Not surprisingly, the term DevOps was coined to reflect the context in which the practices outlined here are situated, emphasizing how Development and Operations are combined both in vocabulary and in practice.

Automation first

Reducing the human effort spent on activities that add no value to the software product is key to solving many of these challenges. This is why CI/CD places automation at its center. By setting up automated build pipelines, as many activities as possible are scripted and orchestrated, with tools such as Jenkins having become household names in this space. These pipelines integrate directly with the source control system, either running ad hoc or being triggered by an event such as a commit of new code to a certain branch. The code is prepared for deployment by the build automation tool of choice and then run through static code analysis tools such as SonarQube and test automation tools such as JUnit and Selenium, as part of a continuous testing strategy to determine whether the code to be deployed is of sufficient quality. The deployment itself is automated as much as possible, provisioning cloud-based resources on the fly and defining the process as Infrastructure-as-Code with tools such as Terraform and Helm. Manual intervention is reduced to following up on and maintaining these pipelines, a significantly smaller effort, especially once the pipelines have been properly standardized.
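To make this concrete, below is a minimal sketch of what such a pipeline could look like in Jenkins’ declarative syntax. It assumes a Maven-based Java project, a SonarQube server configured in Jenkins under the name 'sonar' (via the SonarQube plugin), and hypothetical Terraform and Helm artifacts in the repository; your own stages and tools will of course differ.

```groovy
pipeline {
    agent any
    triggers {
        // Trigger on new commits; polling is used here as a simple example,
        // webhooks from the source control system are a common alternative.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package -DskipTests'   // build automation tool of choice (Maven assumed)
            }
        }
        stage('Static analysis') {
            steps {
                withSonarQubeEnv('sonar') {             // assumes the SonarQube plugin and a server named 'sonar'
                    sh 'mvn -B sonar:sonar'
                }
            }
        }
        stage('Automated tests') {
            steps {
                sh 'mvn -B verify'                      // unit and integration tests (JUnit, Selenium, ...)
                junit 'target/surefire-reports/*.xml'   // publish test results in Jenkins
            }
        }
        stage('Deploy') {
            steps {
                // Infrastructure-as-Code: provision resources and roll out the release.
                sh 'terraform -chdir=infra apply -auto-approve'   // hypothetical Terraform configuration
                sh 'helm upgrade --install my-app ./chart'        // hypothetical Helm chart
            }
        }
    }
}
```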

Getting ahead of things

The automation principle also extends to the Operations side of DevOps. By implementing continuous monitoring with automated alerting and recovery, issues are detected earlier. Business impact is minimized through automatic infrastructure restarts and retries of failed processes, while the IT department gets more detailed information about what happened, enabling it to investigate and remediate the root cause. All of this can start before any incident ticket is logged by a business user. When things break and the root cause is being determined, automated tests are created both to support the remediation process and to ensure that similar issues are detected in the future, as part of the continuous testing philosophy. This is where the continuous collaboration between Development and Operations shines and makes a business impact.
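As a simplified illustration of automated retries and alerting, again using Jenkins as the orchestrator, a scheduled job could retry a transient failure on its own and notify the team before a business user ever logs a ticket. The job, script, and mail address below are hypothetical placeholders.

```groovy
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')                       // run the job nightly
    }
    stages {
        stage('Nightly data load') {
            options {
                retry(3)                        // automatically retry transient failures
            }
            steps {
                sh './run-data-load.sh'         // hypothetical batch process being operated
            }
        }
    }
    post {
        failure {
            // Alert the team proactively, before any incident ticket is logged.
            mail to: 'ops-team@example.com',
                 subject: "Nightly data load failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "The job failed after 3 attempts. Console log: ${env.BUILD_URL}"
        }
    }
}
```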

Pick your flavor

The set of DevOps practices can also be extended to other domains. Variations such as DevSecOps, DevTestOps, MLOps, and AIOps all apply the same core concepts, either extending them with an extra focus such as Security or Testing, or applying them to contexts such as Machine Learning and Artificial Intelligence. At the center of any of these DevXOps variants is the intense collaboration and integration between all parties involved, in contrast with the silos of old. A solid CI/CD strategy with proper continuous monitoring and testing is the foundational enabler for such a DevXOps environment to succeed in rapid, reliable, agile software delivery with a short feedback loop.

Step-by-step

Setting up all this automation of course cannot, and should not, happen overnight. Taking the time to iterate and to apply the right architecture, framework, and principles to achieve a sustainable setup helps guarantee prolonged success and added value for everyone. By starting small, for instance by identifying an appropriate source control branching strategy or setting up a barebones Jenkins pipeline, even with some manual steps still involved, you can already make a significant impact while working towards a more fully-fledged implementation. A few lessons may need to be learned along the way before you can automatically trigger container builds and deployments to your Kubernetes cluster. However, bringing people with experience along on this journey can save you time and give you a head start.
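To give an idea of how small that start can be, here is a sketch of such a barebones pipeline, with a deliberate manual approval step left in while the remaining stages are automated over time. The wrapper scripts and environment name are hypothetical placeholders for whatever you already use today.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'          // hypothetical wrapper around your existing build commands
            }
        }
        stage('Manual approval') {
            steps {
                // A manual step kept in place for now; it can be removed once
                // automated checks provide enough confidence.
                input message: 'Deploy this build to the test environment?'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh test'    // hypothetical deployment script, to be automated further later
            }
        }
    }
}
```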

Realizing value

DevOps, with its CI/CD practices, allows your IT to move faster, reducing the time-to-market for new implementations and delivering the agility the business expects. Operations run more smoothly and at a lower cost thanks to increased stability and reduced human involvement. When set up successfully, your people can focus on value-adding tasks, spending less time on repetitive overhead and more on helping you advance your business objectives. It’s a win for everyone!

Author: Mathieu Samaey