I recently sat through an excellent talk in which the speaker reported on a Gartner meeting in which the software industry captains said their biggest problem was finding people who know how to write secure, robust code. He then said that academia had to start producing these programmers.
In the Q&A period, I pointed out that people who practice “secure coding” (really, low-assurance programming, as opposed to no-assurance programming!) and the various forms of much higher-assurance programming require more enterprise resources and time to develop, implement, test, and document their software than most employers currently provide their programmers. My question was how to persuade corporate managers and executives to understand, and to be willing to provide (and pay for), the additional resources and time that would enable developers to produce high-quality software.
The speaker’s reaction was interesting. He agreed that this was a problem, but one that could be solved by requiring software to be certified. He said that now, if a programmer refuses to deliver software until he or she can ensure it is robust (secure), that programmer would probably be fired for not meeting deadlines. But if software industry practices were modified to require software certification, and gave the programmer the duty to certify that the software met some standard, the programmer could (and presumably would) refuse to sign the certification. Then managers could not fire the programmer without giving up the signature, and the company could not market the software without the certification. He drew an analogy with lawyers and the legal profession: lawyers are certified, and the profession disciplines miscreants. He thought the same should be done for software developers and programmers.
Let’s split his answer into two parts. The first part is that software products should be certified before being marketed; the second, that software engineers and those involved in the development of software should also be certified. Both are more complex than they seem.
First, consider the idea of certifying something as secure. What exactly does “secure” mean here? The definition of “secure” requires a security policy against which the software’s security can be defined and, ideally, measured. But security is not just a property of the software. It is a property of the software, the environment, and the installation, operation, and maintenance. If any of these cause the software not to satisfy the security policy, then the software is not secure. So any certification will need to specify not only the security policy, but also the environment and other aspects of where and how it is to be used. And even then, there will be problems.
A good example is the certification process that was used in the United States for electronic voting machines. There, both software and hardware had to be certified to meet specific requirements, usually from a set of standards promulgated by the U.S. Election Assistance Commission. But the original standards did not take into account the processes and environments in which the machines were to be used. As a consequence, the certifications would have been inadequate even if the testing labs had thoroughly tested both the hardware and software — and, unfortunately, the quality of at least one lab was so poor it was closed.
Neglecting the processes and environment leads us to another problem — who does (or should do) the assessment for the certification? Presumably, a set of independent laboratories vaguely similar to the Underwriters Laboratories for electrical equipment would be authorized to do this. Unless these labs cooperate, however, the scenario that developed with electronic voting systems may arise. In that scenario, the vendor paid for the evaluation by the lab. If the system failed the testing, the vendor would be told how it failed and could then fix it and resubmit it for certification. But the vendor could then request a different lab to recertify it; the vendor need not return to the original lab. Thus a vendor could seek out a lab that gave more frequent favorable reviews, and concentrate its certification requests there. This would give the labs financial and business incentives to find that systems meet the requirements, in order to improve their chances of gaining repeat business from the vendor. Various approaches can ameliorate this situation, but ultimately laws and regulations would control the methods chosen, and their effectiveness depends on what those laws and regulations say and how they are enforced. The result could well be like the electronic voting system certifications — a certification system that is far from robust.
Next, let’s look at the recommendation to certify programmers and software developers. To what standard or standards should these individuals be held? Should those standards be a function of what they are developing (such as an operating system, a mobile phone app, or a text editor)? A function of what tools they use to build it (such as a particular development environment like Eclipse)? A function of the environment in which the system is to be deployed? Or some combination of these factors? For example, would the certification be general (for virtually any system or set of tools, or for both) or specific (for writing programs in the programming language XYZZY for the PLUGH operating system, using the Magic Source Code Scanner)? If a program or system fails in the field, is the programmer liable? And if a programmer is certified to work in a specific environment, with specific operational and maintenance requirements, how would one ensure those requirements and that environment were maintained? How will those requirements be changed, and how will programmers ensure their certification continues to meet those changed requirements?
Complicating this is the fact that programmers rarely work alone; they usually are employed by a company, and work in teams. In particular, companies will not want to increase the cost of the systems they deliver, nor the time to delivery, because customers will not be willing to pay more, or wait longer, for the systems they want. So if liability is tied to the programmer(s), the company has no financial incentive to give the programmer(s) the resources and time they need to develop the secure systems. The only consequence to the company would be that programmers and developers would likely migrate to companies that provided the needed resources and support. However, if the software had to be certified before it could be marketed, then the company would have an incentive — it would need to have programmers who could certify the software, and presumably those programmers themselves would need to be certified.
Given that programmers work for a company and work in teams, how would one determine the programmer(s) responsible for the software, so necessary discipline could be applied? Particularly in these days of global outsourcing, when much of the hardware and software upon which we depend is created overseas, how would the U.S. ensure that those programmers or the companies that employ them (over whom the U.S. has no jurisdiction) meet certification standards? How would legacy software, which was written before certification of developers and software were instituted, be dealt with?
Certifying developers and giving them the responsibility of certifying software also ignores the question of why the developers are held responsible for what are, essentially, marketing decisions over which the company executives have control. As anyone who has worked in the software industry knows, plans for software development, including timetables and testing protocols, are often not under the control of the developers. The practice has long been to move quickly to market, and then patch software problems as they arise, rather than invest from the start in secure and robust coding practices.
All these questions and complications would need to be resolved before a credible certification system could be put into place. Otherwise, the mess we are in with respect to software will only get worse. I would love to hear someone discuss these problems in more depth, and ideally come up with a way to resolve them.
And once that happens, then I am optimistic that certification will indeed improve the quality of software and systems!
Acknowledgements: Thanks to Holly Bishop and S. Candice Hoke for their valuable comments.