By Ron Lear, Vice President of Models and Frameworks at ISACA
When I first read about Cybersecurity and Infrastructure Security Agency head Jen Easterly’s perspective on software quality, my primary thoughts were “yes!” and, simultaneously, “Oh, wait, it’s not quite that simple…”
For this GovCon Expert post, I thought it would be worth a few words on the reasoning behind my reaction. Let’s focus on Easterly’s key comments highlighted in the article: “Amid an epidemic of breaches, Easterly laid the blame squarely at the feet of the technology industry. ‘We don’t have a cybersecurity problem. We have a software quality problem,’ she said. ‘We have a multi-billion dollar cybersecurity industry because for decades, technology vendors have been allowed to create defective, insecure, flawed software.’”
I would argue it’s both a software quality problem and a critical capability problem. To further this point, let’s look at a commonly recognized definition of “quality” from the ISO 9001:2015 standard, which says that quality is the “degree to which a set of inherent characteristics [or distinguishing features] of an object”—where an object is anything perceivable or conceivable, such as a product, service, process, person, organization, system or resource—“fulfills requirements.” Pick nearly any other source for how “quality” is defined, and you’re going to find the same basic answer and criteria.
So, if technology vendors “have been allowed to create defective, insecure, flawed software,” then the blame must be shared equally with the customers and stakeholders who did the “allowing.” In this case, allowing meant letting those vendors operate without properly vetted requirements: clear and consistent mission, system and personnel security and cybersecurity requirements.
After all, the heart of good quality is defined and bounded by the customer and regulatory requirements representing their interests, isn’t it? Software development and technology vendors must create solutions based on the requirements their customers provide, including how those requirements are verified and validated, both internally and externally. These capabilities and processes demand requirements that are, at a minimum:
- Unambiguous
- Clearly and properly stated
- Complete
- Consistent with one another
- Uniquely identified
- Consistent with the architectural approach and quality attribute priorities
- Appropriate to implement (e.g., technically viable, affordable, maintainable)
- Testable
- Traceable to source
- Achievable
- Tied to business value
- Identified as a priority by the customer
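To make the checklist above concrete, here is a minimal sketch of how such criteria could be screened automatically. The `Requirement` fields, the vague-term list and the specific checks are my own illustrative assumptions for demonstration; they are not drawn from ISO 9001, CMMI or any particular tool:

```python
# Illustrative sketch only: a hypothetical requirements-quality screen.
# Field names and checks are assumptions, not part of any standard.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str                # uniquely identified
    text: str                  # clearly and properly stated
    source: str = ""           # traceable to source
    priority: str = ""         # identified as a priority by the customer
    acceptance_test: str = ""  # testable

# A few words that often signal ambiguity (illustrative, not exhaustive).
VAGUE_TERMS = ("fast", "user-friendly", "as appropriate", "etc.")

def quality_issues(req: Requirement) -> list[str]:
    """Return a list of quality-criteria violations for one requirement."""
    issues = []
    if not req.req_id:
        issues.append("missing unique identifier")
    if any(term in req.text.lower() for term in VAGUE_TERMS):
        issues.append("ambiguous wording")
    if not req.source:
        issues.append("not traceable to a source")
    if not req.acceptance_test:
        issues.append("no acceptance test (not verifiable)")
    if not req.priority:
        issues.append("no customer priority assigned")
    return issues

req = Requirement(req_id="SEC-001",
                  text="The system shall be fast and user-friendly")
print(quality_issues(req))  # flags ambiguity, traceability, testability gaps
```

A screen like this can only catch surface problems, of course; whether a requirement is complete, achievable and tied to business value still takes human review.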
In other words, software and technology vendors/contractors need consistently repeatable requirements development and management capabilities and processes to deliver secure software and systems. But even that isn’t enough.
We can’t forget about how those requirements are verified and validated. Vendors also need consistent, thorough and repeatable processes for verifying and validating requirements such as testing, peer reviews and quality assurance, just to name a few. They also require a governance and implementation infrastructure to help ensure their organizations put these capabilities and processes in place.
Next, they need to measure how well they take these actions, and plan for how they will create these solutions, including how much those solutions will cost (budget) and how long they will take to build (schedule). Then they need to keep track of the data, components and versions, and be able to estimate, plan and monitor all of these.
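On the measurement point, the budget and schedule questions above are commonly answered with earned-value metrics. The sketch below is illustrative only; the metric names are standard earned-value management terms, but every dollar figure is invented:

```python
# Illustrative earned-value sketch: all figures are invented for demonstration.
budget_at_completion = 500_000  # planned cost of the whole effort ($)
planned_value = 200_000         # work scheduled to date, in budget dollars
earned_value = 150_000          # work actually completed to date, in budget dollars
actual_cost = 180_000           # money actually spent to date

cpi = earned_value / actual_cost    # cost performance index (<1 = over budget)
spi = earned_value / planned_value  # schedule performance index (<1 = behind)
estimate_at_completion = budget_at_completion / cpi  # projected final cost

print(f"CPI={cpi:.2f}  SPI={spi:.2f}  EAC=${estimate_at_completion:,.0f}")
```

The point isn’t the arithmetic; it’s that without consistent estimating, planning and monitoring data, a vendor can’t even compute numbers like these, let alone act on them.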
Keeping count, we are now up to at least 14 capabilities—all either directly or indirectly related to quality:
- Requirements development and management
- Verification and validation (a.k.a. testing)
- Design and technical solutions
- Product integration
- Peer review
- Process quality assurance
- Governance (this one by itself is a challenge for most organizations)
- Infrastructure
- Performance measurement and analysis (rarely done consistently well)
- Estimating
- Planning
- Monitoring and control
- Configuration management
- Data management (fundamentally critical for AI)
But let’s not forget about operational security requirements, such as service systems operations, continuity, incident resolution and prevention, supply chain management, workforce management, process management and more.
So yes, once a vendor has these capabilities and business processes in place, then I would agree and argue that it is clearly a software quality problem.
Of course, my next thought is: “Wouldn’t it be nice if there were a set of proven industry best practices somewhere, already developed and ready to implement for any government vendor?” Oh, wait, there is. It’s called CMMI V3.0. With its origins in a federally funded research and development center, or FFRDC, our government has already spent tax dollars to address such problems.
CMMI V3.0 now covers eight primary domains and multiple core business capabilities, integrated into one easily tailorable model. It is easily adoptable by small, medium and large businesses, and has clear data showing consistent outcomes and performance results (see my last GovCon Expert article for more). With all that in mind, why wouldn’t the government require vendors and contractors to follow those best practices rather than launch another federal cybersecurity initiative requesting a voluntary “secure by design pledge”?
While this pledge may sound like a good and noble thing, how do you enforce or verify such an unfunded “suggestion”? Just continuing to “allow” businesses to take the pledge and then hoping for the best doesn’t seem like a viable business strategy, and certainly not one that would fly in the commercial software world.
Perhaps instead of continuing to reinvent the wheel, government agencies can leverage an existing tool to ensure organizations can put all of the above capabilities and best practices in place, and demonstrate clear and consistent performance results.