Tuesday, August 9, 2016
In the Q&A period, I pointed out that people who practice “secure coding” (really, low assurance programming, as opposed to no assurance programming!) or other, much higher assurance forms of programming need more enterprise resources and time to develop, implement, test, and document their software than most employers currently provide. My question was how to persuade corporate managers and executives to understand, and be willing to provide (and pay for), the additional resources and time needed to enable developers to produce high-quality software.
The speaker’s reaction was interesting. He agreed that this was a problem, but one that could be solved by requiring software to be certified. He said that today, a programmer who refuses to deliver software until he or she can ensure it is robust (secure) would probably be fired for not meeting deadlines. But if software industry practices were modified to require certification, and the programmer had the duty to certify that the software met some standard, the programmer could (and presumably would) refuse to sign the certification. The managers then could not fire the programmer without giving up the signature, and the company could not market the software without the certification. He drew an analogy with the legal profession: lawyers are certified, and the profession disciplines miscreants. He thought the same should be done for software developers and programmers.
Let’s split his answer into two parts. The first part is that software products should be certified before being marketed; the second, that software engineers and those involved in the development of software should also be certified. Both are more complex than they seem.
First, consider the idea of certifying something as secure. What exactly does “secure” mean here? The definition of “secure” requires a security policy against which the software’s security can be defined and, ideally, measured. But security is not just a property of the software. It is a property of the software, its environment, and its installation, operation, and maintenance. If any of these cause the software not to satisfy the security policy, then the software is not secure. So any certification will need to specify not only the security policy, but also the environment and the other aspects of where and how the software is to be used. And even then, there will be problems.
A good example is the certification process that was used in the United States for electronic voting machines. There, both software and hardware had to be certified to meet specific requirements, usually from a set of standards promulgated by the U.S. Election Assistance Commission. But the original standards did not take into account the processes and environments in which the machines were to be used. As a consequence, the certifications would have been inadequate even if the testing labs had thoroughly tested both the hardware and software — and, unfortunately, the quality of at least one lab was so poor it was closed.
Neglecting the processes and environment leads us to another problem — who does (or should do) the assessment for the certification? Presumably, a set of independent laboratories vaguely similar to the Underwriters Laboratories for electrical equipment would be authorized to do this. Unless these labs cooperate, however, the scenario that developed with electronic voting systems may arise. In that scenario, the vendor paid for the evaluation by the lab. If the system failed the testing, the vendor would be told how it failed, and could then fix it and resubmit it for certification. But the vendor could request a different lab to recertify it; the vendor need not return to the original lab. Thus a vendor could seek out a lab that gave more frequent favorable reviews, and concentrate its certification requests there. This would give the labs financial and business incentives to find that systems meet the requirements, in order to improve their chances of gaining repeat business from the vendor. Various approaches can ameliorate this situation, but ultimately laws and regulations would control the methods chosen, and their effectiveness depends on what they say and how they are enforced. The result could well be like the electronic voting system certifications — a certification system that is far from robust.
Next, let’s look at the recommendation to certify programmers and software developers. To what standard or standards should these individuals be held? Should those standards be a function of what they are developing (such as an operating system, a mobile phone app, or a text editor)? A function of the tools they use to build it (such as a particular development environment like Eclipse)? A function of the environment in which the system is to be deployed? Or some combination of these factors? For example, would the certification be general (for virtually any system or set of tools, or both) or specific (for writing programs in the programming language XYZZY for the PLUGH operating system, using the Magic Source Code Scanner)? If a program or system fails in the field, is the programmer liable? And if a programmer is certified to work in a specific environment, with specific operational and maintenance requirements, how would one ensure those requirements and that environment were maintained? How will those requirements be changed, and how will programmers ensure their certification continues to meet the changed requirements?
Complicating this is the fact that programmers rarely work alone; they usually are employed by a company, and work in teams. In particular, companies will not want to increase the cost of the systems they deliver, nor the time to delivery, because customers will not be willing to pay more, or wait longer, for the systems they want. So if liability is tied to the programmer(s), the company has no financial incentive to give the programmer(s) the resources and time they need to develop the secure systems. The only consequence to the company would be that programmers and developers would likely migrate to companies that provided the needed resources and support. However, if the software had to be certified before it could be marketed, then the company would have an incentive — it would need to have programmers who could certify the software, and presumably those programmers themselves would need to be certified.
Given that programmers work for a company and work in teams, how would one determine the programmer(s) responsible for the software, so necessary discipline could be applied? Particularly in these days of global outsourcing, when much of the hardware and software upon which we depend is created overseas, how would the U.S. ensure that those programmers or the companies that employ them (over whom the U.S. has no jurisdiction) meet certification standards? How would legacy software, which was written before certification of developers and software were instituted, be dealt with?
Certifying developers and giving them the responsibility of certifying software also ignores the question of why the developers are held responsible for what are, essentially, marketing decisions over which the company executives have control. As anyone who has worked in the software industry knows, plans for software development, including timetables and testing protocols, are often not under the control of the developers. The practice has long been to move quickly to market, and then patch software problems as they arise, rather than invest from the start in secure and robust coding practices.
All these questions and complications would need to be resolved before a credible certification system could be put into place. Otherwise, the mess we are in with respect to software will only get worse. I would love to hear someone discuss these problems in more depth, and ideally come up with a way to resolve them.
And once that happens, then I am optimistic that certification will indeed improve the quality of software and systems!
Tuesday, September 1, 2015
I finally finished reading the report about giving the government access to communications. It parallels an earlier report discussing the same thing, but the revelations of Snowden and the F.B.I. director’s reaction warranted a reiteration, and a stronger one. It is an excellent report, one I encourage everyone to read.
I have a suggestion for strengthening the report, and a comment about the government’s reaction to it.
First, the suggestion. The report makes the point that Sir Arthur Conan Doyle made so many years ago (“What one man can invent another can discover”), but much more prosaically. It says that attackers who want to compromise the communication will find and exploit the backdoor added at government insistence. This is quite correct, and a strong argument against adding such backdoors.
But let’s take it a step further. Suppose an attacker has begun reading messages between two medical institutions (for example). She realizes that the F.B.I. will undoubtedly be interested in what she is doing, and wants to find out how much they know. The law requires all communications equipment to have backdoors built in, and she has already found the backdoor (in order to monitor the electronic medical records in the messages going between the two medical institutions). So she exploits that knowledge and monitors the F.B.I.’s communications. She finds out she is indeed being tracked, and now can see how far the F.B.I. has gone in discovering what she is doing and who she is.
Lest this seem fanciful, it has happened. In “The Athens Affair”, Prevelakis and Spinellis write that “the hackers broke into a telephone network and subverted its built-in wiretapping features for their own purposes.” The lawful (under Greek law) wiretapping capability enabled the attackers to tap the cell phones of the Greek prime minister, the defense and foreign affairs ministers, and — note this — law enforcement officials.
The obvious solution is to give government agencies communications equipment that does not have backdoors while requiring everyone else to use equipment with backdoors. But this raises two immediate problems.
First, if such equipment is manufactured, it will become available to others not authorized to have it — through pilferage if nothing else, or from equipment made abroad. Foreign manufacturers are highly unlikely to go along with the backdoor requirement — and if they do, how does the U.S. government know that only it can use the backdoor? Much of our software and hardware is manufactured abroad, using components made in places like China. What is to stop a foreign government from requiring the vendor to add a backdoor to the backdoor to allow it in? This is called the supply chain problem, and it is essentially a variant of the insider problem, one of the most difficult problems in computer and information security.
Second, if it is possible to make exceptions, various people and organizations will lobby for them. As an example, an obvious exception would be the U.S. financial industry (banks, brokerage houses, the Federal Reserve, and so forth), which would be reluctant to use equipment with backdoors built in, as confidentiality and integrity are absolutely critical to their clients’ trust. Once exceptions are made, the number of exceptions will grow, as will the groups to whom those exceptions apply. And the folks the government most wants to monitor will be the ones most likely to use the organizations granted exceptions. Terrorists, for example, will move money around the financial system.
So by adding backdoors, the government renders itself as vulnerable as everyone else, thus defeating its purpose. Angela Rojas’ comment seems appropriate, as it summarizes the dilemma very succinctly: “But, darling, the spies were spying on the spies who were spying on the spies!”
The report should have made this point explicitly.
My comment about the government’s reaction to the report refers to Mr. Comey’s testimony that technologists are smart enough to find a way to put in a backdoor that only law enforcement can use:
Technical people will say it’s too hard. My reaction to that is: Really? Too hard? Too hard for the people we have in this country to figure something out? I’m not that pessimistic. [5,6]
But Mr. Comey is not a technologist, so how would he know? In fact, there are some problems that can be proved unsolvable. Perhaps the question he should be asking is whether our law enforcement agents are smart enough to be able to solve crimes without these backdoors. Certainly they have done so in the past. Mr. Comey’s comment bothers me, because as a top U.S. government official, he should have more faith in U.S. security agencies and law enforcement personnel — and if he does not, his focus should be on strengthening those agencies, and hiring and training people, rather than weakening everything else.
Note: I am one of the signers of the letter that Mr. Comey refers to in his speech on cybersecurity to the Third Annual Cybersecurity Law Institute, from which the above quote comes.
- H. Abelson, R. Anderson, S. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, M. Green, S. Landau, P. Neumann, R. Rivest, J Schiller, B. Schneier, M. Specter, and D. Weitzner, “Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications,” Technical Report MIT-CSAIL-TR-2015-026, Massachusetts Institute of Technology, Cambridge, MA, USA (July 2015). url: http://hdl.handle.net/1721.1/97690
- B. Adida, C. Anderson, A. Antón, M. Blaze, R. Dingledine, E. Felten, M. Green, J. Halderman, D. Jefferson, C. Jennings, S. Landau, N. Mitter, P. Neumann, E. Rescorla, F. Schneider, B. Schneier, H. Shacham, M. Sherr, D. Wagner, and P. Zimmermann, CALEA II: Risks of Wiretap Modifications to Endpoints, Center for Democracy and Technology (May 2013). url: https://www.cdt.org/files/pdfs/CALEAII-techreport.pdf
- Sir Arthur Conan Doyle, “The Adventure of the Dancing Men,” The Return of Sherlock Holmes, Dover Publications, Mineola, NY, USA (2010). isbn: 978-0486478739
- V. Prevelakis and D. Spinellis, “The Athens Affair,” IEEE Spectrum 44(6) pp. 26–33 (July 2007). doi: 10.1109/MSPEC.2007.376605
- J. Comey, Comments at the Third Annual Cybersecurity Law Institute, Georgetown University Law Center, Washington DC, USA (May 2015). url: https://www.justsecurity.org/23120/transcript-comey-authors-encryption-letter-uninformed-fair-minded/
- J. Comey, “FBI Director James Comey Cybersecurity Challenges,” video (quoted comment at 18:03–18:13). url: http://www.c-span.org/video/?326168-1/fbi-director-james-comey-cybersecurity-challenges
- Letter to President Obama (May 19, 2015). url: https://static.newamerica.org/attachments/3138--113/Encryption_Letter_to_Obama_final_051915.pdf
Tuesday, June 17, 2014
“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
California Senate Bill SB-828, “The 4th Amendment Protection Act”, proposes to ban any California government institution, political subdivision, or employee of the state acting in an official capacity from supporting any federal agency that “claims the power … to collect electronic data or metadata of any person” in a way that violates the 4th Amendment.
Like any law, this bill has good points and bad points. Section 7599.5(d) bars the use of any such information in a “criminal investigation or prosecution”, even if the information was obtained by a corporation. That currently is not banned, and adding something like that into the law is a good idea — a very good idea.
Unfortunately, the rest of the bill misses the point, and that far outweighs the good part. The bill should be defeated. Here’s why.
First, the authors of the bill object to the gathering and use of “electronic data or metadata” in violation of the 4th Amendment. But there is no objection to non-electronic data or metadata. Why not? It seems to me the threat is not just electronic; it also includes physical data and metadata such as notations in a paper log, or stopping and searching cars without cause. However, the bill doesn’t object to that.
Along those lines, the bill objects to “material support, participation, or assistance” to any federal agency that claims the above-mentioned power. But if a California county, city, or other agency claims the power, the bill is silent about providing material support, participation, and assistance to that agency. This seems hypocritical at best — it’s disallowed if the feds do it, but it’s fine if the state does.
The bill also is aimed at the wrong people — it targets Californians rather than federal agencies (and, presumably, their employees acting on behalf of the agencies). It’s rather like targeting the homeowner because the burglar was able to break into the house and steal things; the target should be the burglar, who committed the crime, and not the homeowner, who may have “materially assisted” the burglar by not properly locking up the house before running to the store. A better approach would be to make the federal agencies involved accountable in some way, or have the agents of that agency arrested for violating the rights of the citizens of California.
Also, the bill is speculative. It focuses on “any federal agency that claims the power to authorize …”. The problem, of course, is that the 4th Amendment is in the Constitution, and therefore any agency “claiming the power” is claiming it is above the law. So if the federal agency does not “claim the power”, do the restrictions in the bill apply? This is important, because I know of no group that claims the power to violate the law with impunity.
If the goal is to disallow the state, its subdivisions, agencies, or its employees from aiding federal agencies in violations of the 4th Amendment, why not just say so directly? It seems to me that language like “to aid a federal agency in the collection of electronic data or metadata …” would achieve this end.
Next, what is “material support, participation, or assistance”? Does supplying power or water to California buildings containing offices of such agencies qualify? How about providing medical transport for injured employees? If the bill said “material support, participation, or assistance to aid a federal agency in the collection of electronic data or metadata …”, then the restriction is clear: if the agency is collecting the electronic data or metadata in violation of the 4th Amendment, they can’t get help doing so from the California government. But the bill doesn’t say that.
Section 7599.5(b) begins: “[u]tilize any assets or public funds, in whole or in part, to engage in any activity that aids a federal agency, federal agent, or corporation while providing services to the federal government …” If I make a telephone call to a federal agency to request a proposed public standard, does my use of a state telephone mean I violate the act? Is that an “activity” that aids them in some unknown way? The vagueness here is frightening.
Similar remarks apply to Section 7599.5(c), which deals with providing services. How generally is the term “services” defined, and exactly when is the provision of services banned? It seems to me that basic life and medical services fall into this category.
And ultimately, the bill does not say who decides whether a “federal agency … claims the power, by virtue of any federal law, rule, regulation, or order, to collect electronic data or metadata of any person” in a way that violates the 4th Amendment. Must the agency announce it? Does a judge decide? The California state legislature? This is critical, because interpretations of what “violates” the 4th Amendment differ. For example, if an agency obtains a warrant, does a search, and later a federal judge tosses the warrant out as violating the 4th Amendment rights of the subject of the search, does that mean those California employees who provided “material support, participation, or assistance in any form”, or used “assets or public funds”, or provided “services” to that federal agency go to jail for violating the law?
I admire and laud the spirit of the bill — violations of people’s right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures” need to be stopped. But this bill isn’t the way to do it.
Monday, May 26, 2014
This criminally neglectful treatment of flora has to stop. Trees, grass, and flowers didn't cause the lack of water that Davis apparently suffers from. People did. Yet we are to punish the innocent flora for the sins of humans? This is outrageous!
Let’s start taking responsibility for our actions. If people caused the need to conserve water, let people bear the brunt of conservation. Continue to feed the flora with water so they can survive, nay thrive, and be happy as they convert carbon dioxide to the pure oxygen that we all breathe. Let humans take the hit. Stop taking showers and baths — those send lots of water down the drain. Stop washing clothes; the pollutants in your soaps mean the water you use cannot be recycled, and so is wasted. Stop washing your cars and drive with dirty windows — that will save even more water. Drink fruit juice instead of the Davis water — apparently it’s better for you, given the claimed impurities of the Davis water, and it leaves the water for the plants, who will happily absorb it.
Let’s take responsibility for the problems we create, and deal with them ourselves, without starving the poor plants upon whom our lives depend!
Monday, December 30, 2013
Suppose a company builds a product using supplies obtained from a third party (the “supplier”). That supplier inserts something into the software (or hardware) allowing it access to the computer on which the product is used. This is a backdoor for that supplier. But how would we classify it, using John’s categories?
In this case, the company and their customer are in effect both “customers” of the supplier. But the company is an intermediary, which makes it both customer and company in the sense that John is talking about. So, the supplier might disclose a feature of its product to the company, which may not disclose that feature to the customer (perhaps it is not something that the customer needs to know, or the company thinks it unimportant). What is it — a feature, bug, or backdoor?
In this case, I think you need to have two answers: one for the supplier to company relationship (in which the company plays the role of customer) and one for the company to customer relationship. Unfortunately, the ultimate customer must rely on the company to protect him or her. And the company must rely on the supplier to protect it, and its customers.
What makes this really tricky is when the “third party” is neither the supplier nor the company and does not disclose the change to anyone. If someone else (the “interceptor”) changes the product on its way to the company (or from the company on the way to the customer), then the change is completely unintentional on the supplier’s or company’s part, but quite intentional on the third party’s part. So, which is it — a feature, bug, or backdoor?
This is most likely not an academic question, as some government agencies have been accused of doing this. The current accusation making headlines is that the U.S. National Security Agency can do this.
Probably the simplest way to handle this is to look at these terms from the point of view of the customer. In this case, the interceptor’s addition is not a feature (as it was never disclosed to the end customer), nor is it a bug (as some entity made the change quite intentionally). So it is a backdoor. But this analysis is unfair to the supplier and the company, as they did nothing intentionally. So to them, too, it is a backdoor, and they are as much victims as the customer.
Friday, June 21, 2013
I was reading a science fiction book, “The Stainless Steel Rat for President”, by Harry Harrison, and came across a passage that I found interesting and, given that the book was written in 1982, surprising.
An opposition candidate has discovered that there was fraud in an election, which was conducted on computers. He quotes from the planet’s constitution:
Due to the nature of electronic voting and due to the necessity of assuring that the voting is always recorded with utmost accuracy and due to the invisibility of the votes once they have been recorded in the voting machine, ...
I found that a remarkably perceptive, and also a very succinct, statement of a key problem with computerized voting systems. That it was made in a book published in 1982 makes it, to me, even more remarkable.
By the way, the passage goes on to say that if it is proven beyond doubt that the record of votes in a single voting machine is altered, then the election is null and void, and a new one must be conducted using paper and ballot boxes, and no subsequent election can use the computerized voting machines until the newly-elected President has them investigated.
- Harry Harrison, The Stainless Steel Rat for President, Bantam Books, New York, NY, USA (1982).
- ibid., p. 166
Sunday, June 2, 2013
What follows is not an obituary; others can write about the minutiae of his life, and of his many accomplishments, better than I. These are simply some of my memories.
I met Jeffrey some time in the 1990s, at (I believe) a workshop on incident response. Alan Paller, a mutual friend, introduced us. At the time, Jeffrey was the director of the Critical Infrastructure Assurance Office (the CIAO) and, if memory serves me correctly, also on the National Security Council. (I may be off on the dates — he was on the NSC, but it may have been after the meeting.)
Jeffrey struck me as very friendly and willing to discuss various aspects of security. He gave a good talk at the workshop.
We reconnected in the mid-2000s, at a meeting of the Institute for Information Infrastructure Protection (I3P); he was the representative for Carnegie-Mellon, and I for UC Davis. Our interests complemented each other’s. Jeffrey knew quite a bit about technology, and far more about politics, how governments worked and interacted with one another, and about national security — topics I was very interested in. I knew a lot about the technology he was interested in, and I was learning about the other aspects he was an expert in. So we became friends, and colleagues.
Jeffrey was very good-natured, and seemed to be constantly amused at life. He was a joy to work with; he was very perceptive, and could get to the heart of an issue very quickly. He also asked questions that caused us to look at something in a new way. Talking to him was usually exciting, and I looked forward to it.
Our work on attribution sprang from a workshop on the Global Environment for Network Innovation (GENI) project, held in Davis. He was one of the attendees, and joined several others to come to my house for a small get-together; then we all went to dinner. He was charming to my wife Holly, and she immediately liked him.
That workshop also produced the only time I ever saw him irritated with me. He, another friend and colleague, and I were discussing attribution, and we decided to meet at 8am, before the workshop workday, to consider some ideas. Well, I overslept, and our friend was also late — and Jeffrey had been up since 6am working on the ideas. He quickly forgave us, though.
The work wound up in a paper on what attribution requirements the future Internet might require. His political expertise combined with our technical expertise to develop what I thought were interesting (and somewhat unexpected) results. We chose a rather impish title for the paper: “The Sisterhood of the Traveling Packets”. It was great fun to see him handle the questions about the title at the workshop — he ended up suggesting that the questioners ask any parent of a teenage girl!
Since then, Jeffrey and I worked together on topics ranging from attribution to the insider problem. Indeed, when he passed away, we had just completed a short write-up on whether the Internet could be controlled, and we were sending it around to see if we could get funding to explore that issue in more depth.
Jeffrey could take a complex political and governance issue and explain it in very simple terms. His book, Creeping Failure: How We Broke the Internet and What We Can Do to Fix It, did a nice job of explaining how we wound up with the Internet we have today, and how the problems arose and were magnified during its growth. His comparison of this to the evolution of cities was quite enlightening, and one I’d never heard of before.
He also enjoyed talking about books and history. I remember when we were at a workshop in New Hampshire. I was telling him about a book I was reading (The Illuminatus! Trilogy), and he seemed to enjoy the idea behind the book; I recommended it very highly. The same happened a couple of years later, when we were talking about counterfactual history. I told him about another book, Pastwatch: The Redemption of Christopher Columbus, which is an alternate history of Columbus' voyage. I hope he did read them; he would have enjoyed them very much.
I remember him being the general chair of the Workshop on Governance of Technology, Information, and Policies (GTIP) held in 2009. There were problems with the organization of the workshop, through no fault of anyone in particular. What they were are not relevant, but what is relevant is how Jeffrey adapted to the problems and out of them moulded a very successful workshop. His vision and skill in working with people, and his determination, made the workshop successful.
I spoke with Jeffrey on the Wednesday before he died; he told me he had been in the hospital for some time, which was why he was so hard to get hold of, but he thought he was getting better. But he wasn’t sure, and was still very, very tired. So we talked a bit, made plans to finish up a project we were working on, and agreed to talk on Thursday. I called at the prearranged time, and he told me he needed to rest, so could we talk on Friday? Of course, I said; and called him then. He didn’t answer. He never will.
There is an expression we use when we talk about people who are very, very good deep inside; we say they have a good soul. He had a good soul. My life was much richer for knowing Jeffrey — both professionally and, more importantly, personally. I will miss him very much.