To my mind, the concern about attackers compromising election systems is important but not entirely on the mark. Questioning the integrity of the election does not require suspicion of attackers compromising the electronic voting systems. The poor quality of the software on these voting systems is sufficient to raise concerns. We have multiple analyses showing this. I don’t understand why the lawsuits did not emphasize this.
A claim that electronic voting system software is so poor that the results of the election could be incorrect requires some substantiation. Here are three specific, documented problems that could cause the results of an election to be compromised. I was on the teams that found these.
- Failure to install security updates. These updates fix vulnerabilities that attackers can exploit to take over the computer and delete or alter information. Clearly this can alter the results of an election. In 2004, Maryland commissioned a study of the Diebold AccuVote-TS systems it would be using in the next election. The study, conducted by RABA Technologies, “identified fifteen additional Microsoft patches that have not been installed on the servers. In addition, the servers lack additional measures (all considered best practice) for defense such as the use of firewall antivirus programs as well as the application of least privilege, i.e. turning off the services that are unused or not needed.” [1, p. 21]. The team used one of these unpatched vulnerabilities to gain complete control of the vote-counting system in under 30 minutes.
- Failure to check for integer overflow. When computers count, they cannot handle numbers that are too big. As a simple example, consider a type of computer called a “16-bit system”. Such a computer can represent the numbers 0 to 65,535 inclusive, but no others. If you add 1 to 65,535, the result will “wrap around” to 0. Checking for this is crucial in an electronic voting system to avoid errors. In 2006, the California Voting Systems Technology Assessment Advisory Board analyzed the Diebold AccuVote Optical Scanner (version 1.96.6). The analysis team found that “the AV-OS source code has numerous places where it manipulates vote counters as 16-bit values without first checking them for overflow, so that if more than 65,535 votes are cast, then the vote counters will wrap around and start counting up from 0 again” [2, p. 18]. The source code did not accept more than 65,535 ballots; but if the vote counter started at any non-zero value (for example, 1), overflow could still occur.
Similarly, the report on the analysis of the ES&S iVotronic electronic voting system (version 8.0.1.2) says the “software also contained array out-of-bounds errors, integer overflow vulnerabilities, and other security holes” [3, p. 57].
- Incorrect handling of error conditions. A mark of good programming is that, when something goes wrong, the program logs the error and takes action to minimize the impact of the error. The occurrence of the error should be clearly identified and not create problems beyond those immediately resulting from the failure.
All of these relate to security because the analyses were done in the context of examining the security of the systems. All arise from poor programming.
The point is that, without a thorough analysis of the current software, we must assume the software has many problems with robustness. Thus it is not necessary to claim that an attack may have occurred to assert the results are suspect; the evidence from the software that has been analyzed gives one ample reason to assert the results are suspect.
The RABA and VSTAAB reports sum up the situation:
“True security can only come via established security models, trust models, and software engineering processes that follow these models; we feel that a pervasive code rewrite would be necessary to instantiate the level of best practice security necessary to eliminate the risks we have outlined in the previous sections.” [1, p. 23]
“This is a good example of the need for defensive programming. If code had been written to check for wrap-around immediately before every arithmetic operation on any vote counter, Hursti’s technique of loading the vote counter with a large number just less than 65536 would not have worked.” [2, p. 18]
So, what can (and should) be done? That depends on the requirements of an election. In the United States, everyone agrees on at least three requirements:
- Accuracy: the final tallies should reflect the votes that the voters intended to cast.
- Anonymity of the ballot: No one should be able to link a ballot to an individual.
- Secrecy of the ballot: No one should be able to prove to another how he or she voted.
A fourth requirement that is rarely stated explicitly, but is implicit, is that of transparency. This requirement basically says that the process of the election must be public, and that a voter can observe the entire election process, except for watching an individual voter marking his or her votes. An implication of this is credibility — the election must not only meet its requirements, but it must also be seen to meet the requirements. And here’s the rub.
When we say “transparency”, transparent to whom? Voters? Election officials? The vendors of electronic voting equipment? Computer scientists? Politicians? The public at large? The answer to this question will control many facets of the election process that affect its credibility. The reason is the use of computers.
Contrast how voting occurs on paper with that on an electronic voting machine (sometimes called a “Direct Recording Electronic”, or DRE, machine). The observer, standing in the polling station, can watch the voter being handed a ballot, going into a voting booth, coming out of the booth with the ballot in hand, and then inserting it into the ballot box. The observer knows that the voter’s votes were recorded on the ballot, and the ballot is in the box that will be carried to Election Central (if she has any doubt, she can follow the ballot box to Election Central). With a DRE, the observer can watch the voter being given the access code to use the DRE, the voter going to that DRE, and the voter leaving the DRE. But she cannot see the ballot being put into a transport mechanism that will be taken to Election Central. She can certainly see the flash cards or the voting system being taken to Election Central; but she cannot tell whether the ballot records what the voter thinks it records, or even if the ballot is there. She must trust the software. This is why robust, well-written software is so critical to the election process. A similar consideration applies to the counting of the ballots at Election Central.
Paper trails that show the votes cast (called “Voter-Verified Paper Trails” or VVPATs) are not sufficient for two reasons. First, VVPATs are not used for counting. They are used to validate results of the voting systems when required. This occurs during the canvass or when a recount is conducted. Thus, the VVPATs and the electronic results are rarely compared. Second, there is evidence that most voters do not review the VVPAT before they cast their vote, so there is no way to know whether the votes recorded on the VVPAT are the votes that the voter intended to cast. So while VVPATs help if voters check them, they still do not add transparency because they are not used to do the initial counting.
If the target of the transparency trusts the electronic voting equipment, then the above process is transparent. If one does not, then the entire system must be available to those for whom the transparency is intended. It certainly is for the vendors; but what about others? In the past, it has also been made available for analysis to specific individuals when the state mandated that those individuals have access (usually because of some problem, or for testing). But this access required that a non-disclosure agreement, or something similar, be signed. The electronic voting equipment was not available to others, like voters.
So, if the election process is to be transparent to voters, all of the electronic voting equipment used in that process must be accessible to voters. The voters can then inspect the hardware, software, and all other components to assure themselves (to whatever degree of assurance they desire) that the requirements are met.
This includes the software that runs the equipment. It does not matter who creates the software so long as it is “open source”, i.e. available to anyone who wants to see it. But that is not enough. It is possible to corrupt hardware, or ancillary components such as scanners or keyboards, and the voters must be able to assure themselves that this will not happen (again, to whatever degree of assurance they require); so that, too, must be open source, and the manner in which everything is assembled to create the systems used in the election process must be public and precise enough that others can verify it. Note that voters may need to work with specialists to make this determination. The point is, they can do so, and can choose the specialists they trust rather than rely on specialists selected by others.
By the way, the term “open source”, as used in the technical community, has many meanings. All uses require that the source code be available to anyone who wants it. The differences lie in how the software can be used. For example, must changes to open source software also be open source? Under a license called the GPL, yes; under a license called the BSD license, no. This distinction is irrelevant for transparency, and hence for our purposes. What matters is that the software used during the election process can be examined by disinterested parties.
Elections are the cornerstone of our republic. All aspects of the process by which they are conducted should be open to those most affected by it, the voters. Currently, they are not: the software involved in elections is closed source, and the details of the electronic voting systems (and the systems themselves) are not available for public scrutiny. I hope this changes in the future, so that questions about the voting systems raised in the last four presidential elections can be settled and, ideally, avoided.
Acknowledgement. Thanks to Candice Hoke for pointing out that the legal cases involved attacks on the electronic voting systems, and not that the systems might produce incorrect results due to other problems.
[1] RABA Innovative Solution Cell, “Trusted Agent Report Diebold AccuVote-TS Voting System”, RABA Technologies LLC, Columbia, MD 21045 (Jan. 2004). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2004-RABA/index.html
[2] D. Wagner, D. Jefferson, M. Bishop, C. Karlof, and N. Sastry, “Security Analysis of the Diebold AccuBasic Interpreter”, Technical Report, Voting Systems Technology Assessment Advisory Board, Office of the Secretary of State of California, Sacramento, CA 95814 (Feb. 2006). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2006-inter/index.html
[3] A. Yasinsac, D. Wagner, M. Bishop, T. Baker, B. de Medeiros, G. Tyson, M. Shamos, and M. Burmester, “Software Review and Security Analysis of the ES&S iVotronic 8.0.1.2 Voting Machine Firmware”, Security and Assurance in Information Technology Laboratory, Florida State University, Tallahassee, FL (Feb. 2007). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2007-fsusait-1/index.html