Build to last: robust software development in a culture of agility
Simple software development mistakes can have costly consequences. Recent history shows that investment in robust software development pays off. We explore the key principles of robust software development.
It is a foggy day in Kourou, French Guiana. Tension is high among the team of engineers and experts gathered for the launch of Ariane 5 Flight 501. At 12:34 on 4 June 1996, the rocket starts its slow ascent, leaving a thick bright yellow trail in its wake. Seconds pass, nothing to report. Then, all of a sudden, the launcher veers off its flight path and explodes. What happened? An Inquiry Board investigation set up just a few days later showed that "The failure of the Ariane 501 was caused by the complete loss of guidance and attitude information […] due to specification and design errors in the software of the inertial reference system."
While other public software failures have hit the headlines since, this is still one of the most infamous bugs in history, costing the Ariane program billions of dollars.
With the rise of technology within every organization, software development has become everyone’s responsibility – or problem. Moreover, while numerous methodologies, frameworks and tools have allowed for better and more secure software development, we are still observing a lack of robustness around the measures needed to prevent another explosion.
Basic, but fundamentally important, principles
When you think about software development, do you imagine rooms full of post-it notes, boards blackened by hundreds of ideas, a constant stream of spreadsheets and specification documents? How can you keep all aspects under control when business is evolving at an incredible pace, competition is becoming fiercer, and customers are demanding ever more? There is plenty of guidance and literature on robust software development, but these 10 basic principles are a good starting point:
- Put project governance in place to provide a repeatable and robust system with clear roles and responsibilities
- Define and communicate software development methodologies
- Follow design principles (e.g. defense in depth, fail secure)
- Incorporate security requirements related to privacy, confidentiality, availability and integrity in the methodology
- Perform adequate testing (unit, regression, quality, user acceptance, etc.)
- Include vulnerability scanning, including static and dynamic scans, as a requirement
- Track and document issues by risk level to ensure resolution prior to go-live
- Enforce segregation of duties to validate authorization and approval of changes
- Only use authorized and approved tools and technologies in the development process
- Apply appropriate governance and controls for supporting cloud-based solutions or hosting providers
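Several of these principles - adequate testing, vulnerability scanning, and tracking issues by risk level - lend themselves to an automated pre-release gate. The following Python sketch illustrates the idea; the severity levels, the `Issue` structure and the blocking threshold are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass

# Hypothetical severity ordering for tracked issues (principle: track by risk level).
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class Issue:
    title: str
    severity: str   # "low" | "medium" | "high" | "critical"
    resolved: bool

def release_gate(tests_passed: bool,
                 scans_clean: bool,
                 issues: list[Issue],
                 block_at: str = "high") -> tuple[bool, list[str]]:
    """Return (ok, reasons): block go-live until high-risk issues are resolved."""
    reasons = []
    if not tests_passed:
        reasons.append("unit/regression/acceptance tests failing")
    if not scans_clean:
        reasons.append("static/dynamic vulnerability scans reported findings")
    threshold = SEVERITY_RANK[block_at]
    for issue in issues:
        if not issue.resolved and SEVERITY_RANK[issue.severity] >= threshold:
            reasons.append(f"unresolved {issue.severity} issue: {issue.title}")
    return (not reasons, reasons)

# Example: one unresolved critical issue blocks the release.
ok, why = release_gate(
    tests_passed=True,
    scans_clean=True,
    issues=[Issue("SQL injection in login form", "critical", resolved=False)],
)
print(ok)   # False
print(why)  # ['unresolved critical issue: SQL injection in login form']
```

In practice a gate like this would run in a CI/CD pipeline, fed by the test runner and the scanning tools; the point is that the go/no-go criteria are explicit and repeatable rather than left to individual discretion.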
Observing these principles along the software development lifecycle increases confidence that issues arising from the process will be identified and addressed – before they become problematic.
Agile and secure are not incompatible
When the Agile approach emerged in the 1990s, it already had several predecessors. Agile gave development teams a common, more structured idea of how to respond properly and rapidly to the ever-increasing demand for software. Today, more and more organizations are embracing this approach because it gives development teams the flexibility to move at their own pace, to interact directly with end users, and to continuously innovate. According to a recent survey by GoodFirms, over 61% of companies use Agile as their primary software development methodology, with Scrum (itself an Agile framework) second at 23%.
However, the Agile approach is not always welcomed by the arm of the organization in charge of controls, whether internal controls, internal audit, or external parties such as customers, business partners or external auditors. The fundamental misunderstanding between the two sides is that controls are meant to address a risk, not to satisfy a mere check-the-box documentation exercise. As long as teams can demonstrate that they are addressing the risks inherent in software development, they can earn their approval stamp. For example, a key risk in software development is unauthorized code being pushed to production, intentionally or unintentionally. Teams need to show they have implemented appropriate measures to lower the risk of that happening, through segregation of duties, end-user testing or code migration approvals - all of which are compatible with Agile.
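The segregation-of-duties control mentioned above can be as simple as a "four-eyes" check: no change is promoted on the strength of its own author's approval. A minimal sketch, with hypothetical names:

```python
def approval_valid(author: str, approvers: set[str], required: int = 1) -> bool:
    """Segregation of duties: the author may not count as an approver
    of their own change; at least `required` independent approvals are needed."""
    independent = approvers - {author}
    return len(independent) >= required

print(approval_valid("alice", {"alice"}))           # False: self-approval only
print(approval_valid("alice", {"bob"}))             # True: independent reviewer
print(approval_valid("alice", {"alice", "bob"}, required=2))  # False: one independent
```

Most modern source-control platforms can enforce an equivalent rule through branch protection, which gives auditors exactly the kind of demonstrable, risk-addressing control the paragraph above describes.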
Privacy – on everyone’s mind but not on everyone’s radar
Privacy is also critical in software development. Most organizations are vocal about having implemented appropriate policies and procedures to safeguard privacy. Some use privacy impact assessments whenever personal information is involved. In practice, though, we are seeing a worrying number of projects or teams that do not seem to comprehend the consequences of their actions with personal data – from bypassing anonymization procedures to using live personal data for testing purposes.
You can be fairly sure that some of your personal information is stored on a developer’s laptop somewhere, without the knowledge of their employer – or you. Recent regulations like the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) compel organizations to implement more robust measures around the use of personal data. Software development is a key process that needs to have built-in procedures and controls around privacy requirements. Otherwise, regulatory and reputational risks can have a detrimental impact.
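One concrete safeguard is to pseudonymize personal fields before data ever reaches a test environment. The sketch below uses a keyed hash; the field names and key handling are assumptions for illustration, and pseudonymization alone does not make data anonymous under the GDPR - it only reduces exposure:

```python
import hashlib
import hmac

# Hypothetical key; in practice it would live in a secrets manager, never in code.
PSEUDONYM_KEY = b"test-env-only-key"
PERSONAL_FIELDS = {"name", "email", "phone"}  # assumed schema

def pseudonymize(record: dict) -> dict:
    """Replace personal fields with a stable keyed hash before test use.

    Stability matters: the same input always maps to the same token, so
    relational test data (joins, lookups) keeps working without exposing
    the underlying values.
    """
    out = {}
    for field, value in record.items():
        if field in PERSONAL_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:12]
        else:
            out[field] = value
    return out

safe = pseudonymize({"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"})
print(safe["plan"])            # non-personal field passes through unchanged
print("Ada" in safe["name"])   # False
```

Running every extract through a step like this before it leaves production is a far smaller burden than explaining to a regulator why live personal data was found on a developer's laptop.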
Is AI the answer to it all?
Recent developments in artificial intelligence (AI) are making waves in the software development world. AI-assisted software development will rely more on automated tasks (e.g. testing use cases) being performed by robots, problems (e.g. software bugs) being solved using machine learning, or even code being written by AI. Humans will be left to focus on the few tasks that need judgment, such as defining specifications and requirements. But what does this mean from a risk and controls perspective?
First, we know very well that automated does not mean secure. Consider the tasks that already rely on algorithms, such as code reviews and vulnerability scanning. Teams leverage numerous tools to take care of tedious and time-consuming tasks, but still play a critical role in the risk-driven decision process. For example, should a minor bug - identified by a scan - be addressed if it means jeopardizing the go-live date? An AI could easily make the decision, but in practice, human judgment is often needed when dilemmas arise.
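That division of labour - automation handles the clear-cut cases, humans handle the dilemmas - can be made explicit in tooling. A hypothetical triage routine, with illustrative severity levels and routing rules:

```python
def triage(findings, go_live_imminent: bool):
    """Route each scanner finding: block, auto-schedule, or escalate to a human.

    Severity levels and routing rules here are illustrative assumptions.
    """
    decisions = {}
    for name, severity in findings:
        if severity == "critical":
            decisions[name] = "block-release"     # no discretion: always blocks
        elif severity == "low" and not go_live_imminent:
            decisions[name] = "auto-schedule-fix" # clear-cut: automation decides
        else:
            decisions[name] = "human-review"      # dilemma: risk vs. deadline
    return decisions

print(triage([("minor-ui-bug", "low"), ("auth-bypass", "critical")],
             go_live_imminent=True))
# {'minor-ui-bug': 'human-review', 'auth-bypass': 'block-release'}
```

The design choice is deliberate: the tool never silently defers a low-severity finding when a release is imminent; instead it surfaces the trade-off for a person to weigh.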
Second, organizations investing in artificial intelligence need to apply a robust control framework to ensure that they keep a hold on how their AI is behaving to prevent errors or bias. And even if companies are not yet investing in artificial intelligence, they can already take the necessary actions to improve how they develop software.
In any case, a commitment to robust development principles is a sound investment if you want to prevent potentially explosive consequences.