
Saturday, January 21, 2017

About Those Post-Election Audits ...

After the election, efforts to recount votes in Michigan and Pennsylvania were denied on the grounds that there was no evidence the electronic voting systems had been hacked, which was the basis of the requests. Independent of this, evidence uncovered by the U.S. intelligence agencies of Russia’s hacking of election-related systems has raised questions about the integrity of the election.

To my mind, the concern about attackers compromising election systems is important but not entirely on the mark. Questioning the integrity of the election does not require suspicion of attackers compromising the electronic voting systems. The poor quality of the software on these voting systems is sufficient to raise concerns. We have multiple analyses showing this. I don’t understand why the lawsuits did not emphasize this.

A claim that electronic voting system software is so poor that the results of the election could be incorrect requires some substantiation. Here are three specific, documented problems that could cause the results of an election to be compromised. I was on the teams that found these.

  1. Failure to install security updates. These updates fix vulnerabilities that attackers can exploit to take over the computer and then delete or alter information. Clearly this can alter the results of an election. In 2004, Maryland commissioned a study of the Diebold AccuVote-TS systems it would be using in the next election. The study, conducted by RABA Technologies, “identified fifteen additional Microsoft patches that have not been installed on the servers. In addition, the servers lack additional measures (all considered best practice) for defense such as the use of firewall antivirus programs as well as the application of least privilege, i.e. turning off the services that are unused or not needed.” [1, p. 21]. The team used one of these unpatched vulnerabilities to gain complete control of the vote-counting system in under 30 minutes.
  2. Failure to check for integer overflow. When computers count, they cannot handle numbers that are too big. As a simple example, consider a type of computer called a “16-bit system”. Such a computer can represent the numbers 0 to 65,535 inclusive, but no others. If you add 1 to 65,535, the result will “wrap around” to 0. Checking for this is crucial in an electronic voting system to avoid errors. In 2006, the California Voting Systems Technology Assessment Advisory Board analyzed the Diebold AccuVote Optical Scanner (version 1.96.6). The analysis team found that “the AV-OS source code has numerous places where it manipulates vote counters as 16-bit values without first checking them for overflow, so that if more than 65,535 votes are cast, then the vote counters will wrap around and start counting up from 0 again” [2, p. 18]. The source code did not accept more than 65,535 ballots, but if the vote counter started at any non-zero number (for example, 1), overflow could occur.
    Similarly, the report on the analysis of the ES&S iVotronic electronic voting system (version 8.0.1.2) says the “software also contained array out-of-bounds errors, integer overflow vulnerabilities, and other security holes” [3, p. 57].
  3. Incorrect handling of error conditions. A mark of good programming is that, when something goes wrong, the program logs the error and takes action to minimize the impact of the error. The occurrence of the error should be clearly identified and not create problems beyond those immediately resulting from the failure.
In the analysis of the ES&S iVotronic electronic voting system (version 8.0.1.2), a team of forensic analysts identified a problem of this type. These systems use two types of Personal Electronic Ballots (PEBs), a Voter PEB and a Supervisor PEB. When a voter is to vote, a Voter PEB is inserted into the iVotronic to set it up for that voter. The PEB is then removed and the voter votes. If the iVotronic has a particular configuration, the software then queries the PEB to get its serial number — and, as the PEB has been removed, the software records the serial number as 0, rather than that of the PEB actually used. As a result, the voter’s votes are recorded correctly, but the PEB’s serial number is recorded incorrectly as 0. The log then shows a successful vote cast with a PEB having serial number 0, which is impossible and raises the question of whether the voter’s votes were recorded correctly. But the problem is not in the recording of votes; it is simply in the recording of a serial number, that is, in failing to handle the error of querying a device that is no longer present (a minimal sketch of the defensive alternative appears after this list). [3, §6.2.1.2]
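
To make the third problem concrete, here is a minimal, self-contained sketch in C of the defensive alternative. The PEB interface below is entirely hypothetical (it is not the ES&S firmware API); the sketch only illustrates the pattern of reading a value while the device is known to be present and logging a failed query instead of silently recording a sentinel value of 0.

    /* Hypothetical sketch only: not ES&S iVotronic code. It illustrates
     * treating a failed device query as an error to be logged, rather
     * than silently recording a serial number of 0. */
    #include <stdio.h>
    #include <stdint.h>

    static int peb_inserted = 0;          /* simulated hardware state */
    static uint32_t peb_serial = 123456;  /* simulated PEB serial number */

    /* Hypothetical query: fails if no PEB is currently inserted. */
    static int read_peb_serial(uint32_t *serial)
    {
        if (!peb_inserted)
            return -1;
        *serial = peb_serial;
        return 0;
    }

    int main(void)
    {
        uint32_t serial = 0;

        peb_inserted = 1;                   /* PEB inserted to set up the terminal */
        if (read_peb_serial(&serial) != 0)  /* read it while it is present */
            fprintf(stderr, "error: could not read PEB serial number\n");
        peb_inserted = 0;                   /* PEB removed; the voter votes */

        /* A query made after removal fails loudly instead of recording 0. */
        uint32_t late = 0;
        if (read_peb_serial(&late) != 0)
            fprintf(stderr, "error: PEB already removed; refusing to record serial 0\n");

        printf("ballot associated with PEB serial %u\n", (unsigned)serial);
        return 0;
    }

The point is only the pattern: capture the value while the device is present, and treat a later failed query as an error to be logged, never as a legitimate value of 0.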

All of these relate to security because the analyses were done in the context of examining the security of the systems. All arise from poor programming.

The point is that, without a thorough analysis of the current software, we must assume the software has many problems with robustness. Thus it is not necessary to claim that an attack may have occurred to assert the results are suspect; the evidence from the software that has been analyzed gives one ample reason to assert the results are suspect.

The RABA and VSTAAB reports sum up this situation:

“True security can only come via established security models, trust models, and software engineering processes that follow these models; we feel that a pervasive code rewrite would be necessary to instantiate the level of best practice security necessary to eliminate the risks we have outlined in the previous sections.” [1, p. 23]

“This is a good example of the need for defensive programming. If code had been written to check for wrap-around immediately before every arithmetic operation on any vote counter, Hursti’s technique of loading the vote counter with a large number just less than 65536 would not have worked.” [2, p. 18]
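
As an illustration of the defensive check the VSTAAB report describes, here is a minimal sketch in C. The names are illustrative only and are not taken from the AV-OS source code; the idea is simply to test a 16-bit vote counter for impending wrap-around before every increment.

    /* Illustrative sketch only: check a 16-bit counter for wrap-around
     * before incrementing it, instead of letting it silently wrap to 0. */
    #include <stdio.h>
    #include <stdint.h>

    /* Returns 1 if the counter was incremented, 0 if it would have wrapped. */
    static int add_vote(uint16_t *counter)
    {
        if (*counter == UINT16_MAX) {      /* 65,535: one more would wrap to 0 */
            fprintf(stderr, "error: vote counter at maximum; refusing to wrap\n");
            return 0;
        }
        (*counter)++;
        return 1;
    }

    int main(void)
    {
        uint16_t checked = 65534;
        add_vote(&checked);                /* succeeds: counter is now 65,535 */
        add_vote(&checked);                /* refused: the increment would wrap to 0 */
        printf("checked counter: %u\n", (unsigned)checked);

        /* Without the check, the increment wraps around silently. */
        uint16_t unchecked = 65535;
        unchecked++;                       /* now 0 */
        printf("unchecked counter after one more vote: %u\n", (unsigned)unchecked);
        return 0;
    }

With such a check in place before every arithmetic operation on a vote counter, Hursti’s technique of loading a counter with a value just under 65,536 would be caught at the first increment rather than silently wrapping to 0.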

So, what can (and should) be done? That depends on the requirements of an election. In the United States, everyone agrees on at least three of these:
  1. Accuracy: the final tallies should reflect the votes that the voters intended to cast.
  2. Anonymity of the ballot: No one should be able to link a ballot to an individual.
  3. Secrecy of the ballot: No one should be able to prove to another how he or she voted.
If we are to use computers to record, tally, and report the votes, we need software that is robust, reliable, easy for voters to cast their votes on, and easy for the operators to run. Note that the casting of votes need not involve a computer. The voter may mark a paper ballot, which is then scanned. The scan, and the resulting electronic representation of the ballot, would then be used by computers.

A fourth requirement that is rarely stated explicitly, but is implicit, is that of transparency. This requirement basically says that the process of the election must be public, and that a voter can observe the entire election process, except for watching an individual voter marking his or her votes. An implication of this is credibility — the election must not only meet its requirements, but it must also be seen to meet the requirements. And here’s the rub.

When we say “transparency”, transparent to whom? Voters? Election officials? The vendors of electronic voting equipment? Computer scientists? Politicians? The public at large? The answer to this question will control many facets of the election process that affect its credibility. The reason is the use of computers.

Contrast how voting occurs on paper with that on an electronic voting machine (sometimes called a “Direct Recording Electronic”, or DRE, machine). The observer, standing in the polling station, can watch the voter being handed a ballot, going into a voting booth, coming out of the booth with the ballot in hand, and then inserting it into the ballot box. The observer knows that the voter’s votes were recorded on the ballot, and the ballot is in the box that will be carried to Election Central (if she has any doubt, she can follow the ballot box to Election Central). With a DRE, the observer can watch the voter being given the access code to use the DRE, the voter going to that DRE, and the voter leaving the DRE. But she cannot see the ballot being put into a transport mechanism that will be taken to Election Central. She can certainly see the flash cards or the voting system being taken to Election Central; but she cannot tell whether the ballot records what the voter thinks it records, or even if the ballot is there. She must trust the software. This is why robust, well-written software is so critical to the election process. A similar consideration applies to the counting of the ballots at Election Central.

Paper trails that show the votes cast (called “Voter-Verified Paper Trails” or VVPATs) are not sufficient for two reasons. First, VVPATs are not used for counting. They are used to validate results of the voting systems when required. This occurs during the canvass or when a recount is conducted. Thus, the VVPATs and the electronic results are rarely compared. Second, there is evidence that most voters do not review the VVPAT before they cast their vote, so there is no way to know whether the votes recorded on the VVPAT are the votes that the voter intended to cast. So while VVPATs help if voters check them, they still do not add transparency because they are not used to do the initial counting.

If the target of the transparency trusts the electronic voting equipment, then the above process is transparent. If one does not, then the entire system must be available to those for whom the transparency is intended. It certainly is for the vendors; but what about others? In the past, it has also been made available for analysis to specific individuals when the state mandated that those individuals have access (usually because of some problem, or for testing). But this access required that a non-disclosure agreement, or something similar, be signed. The electronic voting equipment was not available to others, like voters.

So, if the election process is to be transparent to voters, all of the electronic voting equipment used in that process must be accessible to voters. The voters can then inspect the hardware, software, and all other components to assure themselves (to whatever degree of assurance they desire) that the requirements are met.

This includes the software that runs the equipment. It does not matter who creates the software so long as it is “open source”, i.e. available to anyone who wants to see it. But that is not enough. It is possible to corrupt hardware, or ancillary components such as scanners or keyboards, and the voters must be able to assure themselves that this will not happen (again, to whatever degree of assurance they require). So the designs of those components must also be open, and the manner in which everything is assembled to create the systems used in the election process must be public and precise enough that others can verify it. Note that voters may need to work with specialists to make this determination. The point is that they can do so, and can choose the specialists they trust rather than rely on specialists selected by others.

By the way, the term “open source”, as used in the technical community, has many meanings. All uses require that the source code be available to anyone who wants it. The differences lie in the ways the software can be used. For example, must changes to open source software themselves be open source? Under a license called the GPL, yes; under a license called the BSD license, no. This distinction is irrelevant for transparency, and hence for our purposes. What matters is that the software used during the election process can be examined by disinterested parties.

Elections are the cornerstone of our republic. All aspects of the process by which they are conducted should be open to those most affected by it, the voters. Currently, they are not: the software involved in elections is closed source, and the details of the electronic voting systems (and the systems themselves) are not available for public scrutiny. I hope this changes in the future, so that the questions about voting systems raised in the last four presidential elections can be settled and, ideally, avoided.

Acknowledgement. Thanks to Candice Hoke for pointing out that the legal cases involved claims of attacks on the electronic voting systems, and not claims that the systems might produce incorrect results due to other problems.

References.
  1. RABA Innovative Solution Cell, “Trusted Agent Report Diebold AccuVote-TS Voting System”, RABA Technologies LLC, Columbia, MD 21045 (Jan. 2004). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2004-RABA/index.html
  2. D. Wagner, D. Jefferson, M. Bishop, C. Karlof, and N. Sastry, “Security Analysis of the Diebold AccuBasic Interpreter”, Technical Report, Voting Systems Technology Assessment Advisory Board, Office of the Secretary of State of California, Sacramento, CA 95814 (Feb. 2006). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2006-inter/index.html
  3. A. Yasinsac, D. Wagner, M. Bishop, T. Baker, B. de Medeiros, G. Tyson, M. Shamos, and M. Burmester, “Software Review and Security Analysis of the ES&S iVotronic 8.0.1.2 Voting Machine Firmware”, Security and Assurance in Information Technology Laboratory, Florida State University, Tallahassee, FL (Feb. 2007). Available at http://nob.cs.ucdavis.edu/~bishop/notes/2007-fsusait-1/index.html

Thursday, November 10, 2016

Some Thoughts on the Recent Election

To all who are upset at the results of the Presidential election, remember this, from The Lord of the Rings by J. R. R. Tolkien:

“I wish it need not have happened in my time,” said Frodo.

“So do I,” said Gandalf, “and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
And to those upset with the failure of California’s Proposition 62 (to eliminate the death penalty in the state), here is more from The Lord of the Rings, about Gollum:
“No, and I don’t want to,” said Frodo. “I can’t understand you. Do you mean to say that you, and the Elves, have let him live on after all those horrible deeds? Now at any rate he is as bad as an Orc, and just an enemy. He deserves death.”

“Deserves it! I daresay he does. Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement. For even the very wise cannot see all ends.”
Peace.

Wednesday, October 19, 2016

Oh, Those Presidential Debates!

I didn’t see the first presidential debate, but I did see the second and third. I also saw several of the debates for the primaries.

I’d love to see the debates restructured so that all the candidates sat in soundproof booths that are clear, so the audience could see them. The booths would be wired for sound so the candidates could hear one another and the moderator. Each booth would have a microphone. The microphones would be off unless a candidate was supposed to be speaking, for example to answer a question from the moderator or the audience. So if one candidate were answering, the others could not interrupt.

I think that would make the debates much saner — one candidate could not talk over another, or snipe at the others until it was his or her own turn.

And at least it would make the debates easier to follow!

Thursday, September 8, 2016

Customers as Shareholders

Here’s an idea for improving customer service.

The goal of American, and other, companies is to keep their stockholders happy. Paying executives large bonuses, automating as many aspects of their work as possible, and outsourcing functions are all attempts to increase the revenue that stockholders receive. Customer service is simply one of these functions. As long as the customers come, it does not matter how poor customer service is. When customers cease to come, then the company will try to improve its customer service.

This is particularly pernicious when few vendors supply a needed service. Consider air travel. Currently, three airlines — American, Delta, and United — dominate the U.S. airline market. All treat customers as revenue generators, not human beings. The width of seats has shrunk to the point where some people cannot fly coach for medical reasons; the pricing structure of tickets is incomprehensible; and the customer bears the brunt of any problems. Witness United’s and Delta’s recent computer breakdowns. The result: flights were disrupted, and customers could rebook without charge (the airlines emphasized the latter). Now, if a customer misses a meeting because the flight is delayed, the airline does not provide any compensation. Rather one-sided — and the customer can rarely move his business to another airline, because all of the airlines operate the same way. The customer produces money. Complaints? They do not affect the shareholders, so the airlines will do nothing.

Improving customer service requires an incentive. The bottom line is keeping stockholders happy, and this usually (but not always) involves profits. If profits fall, the money stockholders make falls, and they will become discontented — and possibly change company management, or dump their stock, causing the price of the stock to drop, thereby hurting the company’s financial position. If profits fall, companies will try to find the cause and correct it. So, tie customer service to stockholder satisfaction. Relying on the belief that improving customer service improves revenue, and therefore stockholder satisfaction, simply is not working, especially when the company is a monopoly or near-monopoly. This suggests making the tie more direct.

Here’s the idea: make the customer a shareholder. For example, when I fly on (say) Delta, I receive a share of Delta’s voting stock. If I fly often, I get more stock; indeed, this could be handled like a “frequent flier” (loyalty) program. Now I have a direct voice in the overall management of the company, because I can vote my stock at a shareholders’ meeting. Further, I can band together with others of like mind to combine our stock into a larger block and hence increase the strength of our voices.

Now that I am a shareholder, the company will want to keep me happy; and it can do so not just through revenue, but also by making my flights more comfortable. And to do so, it will have to reverse the trend of poor customer service.

Tuesday, August 9, 2016

Certification, Software, and People

I recently sat through an excellent talk in which the speaker reported on a Gartner meeting where the captains of the software industry said their biggest problem was finding people who know how to write secure, robust code. He then said that academia had to start producing these programmers.

In the Q&A period, I pointed out that people who practice “secure coding” (really, low-assurance programming, as opposed to no-assurance programming!) or forms of much higher-assurance programming require more enterprise resources and time to develop, implement, test, and document their software than most employers currently provide their programmers. My question was how to persuade corporate managers and executives to understand, and be willing to provide (and pay for), the additional resources, and to allow the additional time, that would enable the developers to produce high-quality software.

The speaker’s reaction was interesting. He agreed that this was a problem, but one that could be solved by requiring software to be certified. He said that now, if a programmer refuses to deliver software until he or she can ensure it is robust (secure), the programmer would probably be fired for not meeting deadlines. But if software industry practices were modified to require software certification, and the programmer were given the duty to certify that the software met some standard, the programmer could (and presumably would) refuse to sign the certification for software that did not meet it. Then the managers could not fire the programmer without giving up the signature, and the company could not market the software without the certification. He drew an analogy with lawyers and the legal profession: lawyers are certified, and the profession disciplines miscreants. He thought that the same should be done for software developers and programmers.

Let’s split his answer into two parts. The first part is that software products should be certified before being marketed; the second, that software engineers and those involved in the development of software should also be certified. Both are more complex than they seem.

First, consider the idea of certifying something as secure. What exactly does “secure” mean here? The definition of “secure” requires a security policy against which the software’s security can be defined and, ideally, measured. But security is not just a property of the software. It is a property of the software, the environment, and the installation, operation, and maintenance. If any of these cause the software not to satisfy the security policy, then the software is not secure. So any certification will need to specify not only the security policy, but also the environment and other aspects of where and how it is to be used. And even then, there will be problems.

A good example is the certification process that was used in the United States for electronic voting machines. There, both software and hardware had to be certified to meet specific requirements, usually from a set of standards promulgated by the U.S. Election Assistance Commission. But the original standards did not take into account the processes and environments in which the machines were to be used. As a consequence, the certifications would have been inadequate even if the testing labs had thoroughly tested both the hardware and software — and, unfortunately, the quality of at least one lab was so poor it was closed.

Neglecting the processes and environment leads us to another problem — who does (or should do) the assessment for the certification? Presumably, a set of independent laboratories, vaguely similar to the Underwriters Laboratories for electrical equipment, would be authorized to do this. Unless these labs cooperate, however, the scenario that developed with electronic voting systems may arise. In that scenario, the vendor paid for the evaluation by the lab. If the system failed the testing, the vendor would be told how it failed and could then fix it and resubmit it for certification. But then the vendor could ask a different lab to recertify it; the vendor need not return to the original lab. Thus a vendor could seek out labs that gave more frequent favorable reviews, and concentrate its certification requests there. This would give the labs financial and business incentives to find that systems meet the requirements, in order to improve their chances of gaining repeat business from the vendor. Various approaches can ameliorate this situation, but ultimately laws and regulations would control the methods chosen, and their effectiveness depends on what they say and how they are enforced. The result could well be like the electronic voting system certifications — a certification system that is far from robust.

Next, let’s look at the recommendation to certify programmers and software developers. To what standard or standards should these individuals be held? Should those standards be a function of what they are developing (such as an operating system, a mobile phone app, or a text editor)? A function of what tools they use to build it (such as a particular development environment, for example Eclipse)? A function of the environment in which the system is to be deployed? Or some combination of these factors? For example, would the certification be general (for virtually any system or set of tools, or for both) or specific (for writing programs in the programming language XYZZY for the PLUGH operating system, using the Magic Source Code Scanner)? If a program or system fails in the field, is the programmer liable? And if a programmer is certified to work in a specific environment, with specific operational and maintenance requirements, how would one ensure those requirements and that environment were maintained? How will those requirements be changed, and how will programmers ensure their certification continues to meet the changed requirements?

Complicating this is the fact that programmers rarely work alone; they usually are employed by a company, and work in teams. In particular, companies will not want to increase the cost of the systems they deliver, nor the time to delivery, because customers will not be willing to pay more, or wait longer, for the systems they want. So if liability is tied to the programmer(s), the company has no financial incentive to give the programmer(s) the resources and time they need to develop the secure systems. The only consequence to the company would be that programmers and developers would likely migrate to companies that provided the needed resources and support. However, if the software had to be certified before it could be marketed, then the company would have an incentive — it would need to have programmers who could certify the software, and presumably those programmers themselves would need to be certified.

Given that programmers work for a company and work in teams, how would one determine the programmer(s) responsible for the software, so necessary discipline could be applied? Particularly in these days of global outsourcing, when much of the hardware and software upon which we depend is created overseas, how would the U.S. ensure that those programmers or the companies that employ them (over whom the U.S. has no jurisdiction) meet certification standards? How would legacy software, which was written before certification of developers and software were instituted, be dealt with?

Certifying developers and giving them the responsibility of certifying software also ignores the question of why the developers are held responsible for what are, essentially, marketing decisions over which the company executives have control. As anyone who has worked in the software industry knows, plans for software development, including timetables and testing protocols, are often not under the control of the developers. The practice has long been to move quickly to market, and then patch software problems as they arise, rather than invest from the start in secure and robust coding practices.

All these questions and complications would need to be resolved before a credible certification system can be put into place. Otherwise, the mess we are in with respect to software will only get worse. I would love to hear someone discuss these problems in more depth and, ideally, come up with a way to resolve them.

And once that happens, then I am optimistic that certification will indeed improve the quality of software and systems!

Acknowledgements: Thanks to Holly Bishop and S. Candice Hoke for their valuable comments.

Tuesday, September 1, 2015

Two Thoughts on Government-Required “Back Doors”

I finally finished reading the report about giving the government access to communications [1]. It parallels an earlier report [2] discussing the same thing, but the revelations of Snowden and the F.B.I. director’s reaction warranted a reiteration, and a stronger one. It is an excellent report, one I encourage everyone to read.

I have a suggestion for strengthening the report, and a comment about the government’s reaction to it.

First, the suggestion. The report makes the point that Sir Arthur Conan Doyle made so many years ago (“What one man can invent another can discover” [3]), but much more prosaically. It says that attackers who want to compromise the communication will find and exploit the backdoor added at government insistence. This is quite correct, and a strong argument against adding such backdoors.

But let’s take it a step further. Suppose an attacker has begun reading messages between two medical institutions (for example). She realizes that the F.B.I. will undoubtedly be interested in what she is doing, and wants to find out how much they know. The law requires all communications equipment to have backdoors built in, and she has already found the backdoor (she used it to monitor the electronic medical records in the messages going between the two medical institutions). She exploits that knowledge and monitors the F.B.I.’s communications. She finds out she is indeed being tracked, and now can see how far the F.B.I. has gone in discovering what she is doing and who she is.

Lest this seem fanciful, it has happened. In “The Athens Affair” [4], Prevelakis and Spinellis write “the hackers broke into a telephone network and subverted its built-in wiretapping features for their own purposes.” The lawful (under Greek law) wiretapping capability enabled the attackers to tap the cell phones of the Greek prime minister, the defense and foreign affairs ministers, and — note this — law enforcement officials.

The obvious solution is to give government agencies communications equipment that does not have backdoors while requiring everyone else to use equipment with backdoors. But this raises two immediate problems.

First, if such equipment is manufactured, it will become available to others not authorized to have it — through pilferage if nothing else, or from equipment made abroad. Foreign manufacturers are highly unlikely to go along with the backdoor requirement — and if they do, how does the U.S. government know that only it can use the backdoor? Much of our software and hardware is manufactured abroad, using components manufactured in places like China. What is to stop a foreign government from requiring the vendor to add a backdoor to the backdoor to allow them in? This is called the supply chain problem, and it is essentially a form of the insider problem, one of the most difficult problems in computer and information security.

Second, if it is possible to make exceptions, various people and organizations will lobby for them. For example, an obvious exception would be the U.S. financial industry (banks, brokerage houses, the Federal Reserve, and so forth), which would be reluctant to use equipment with backdoors built in, as confidentiality and integrity are absolutely critical to their clients trusting them. Once exceptions are made, the number of exceptions will grow, as will the groups to whom those exceptions apply. And the folks the government will want to monitor most will be the ones most likely to use the organizations granted exceptions. Terrorists, for example, will move money around the financial system.

So by adding backdoors, the government renders itself as vulnerable as everyone else, thus defeating its purpose. Angela Rojas’ comment seems appropriate, as it summarizes the dilemma very succinctly: “But, darling, the spies were spying on the spies who were spying on the spies!”

The report should have made this point explicitly.

My comment about the government’s reaction to the report refers to Mr. Comey’s testimony that technologists are smart enough to find a way to put in a backdoor that only law enforcement can use:

Technical people will say it’s too hard. My reaction to that is: Really? Too hard? Too hard for the people we have in this country to figure something out? I’m not that pessimistic. [5,6]

But Mr. Comey is not a technologist, so how would he know? In fact, there are some problems that can be proved unsolvable. Perhaps the question he should be asking is whether our law enforcement agents are smart enough to be able to solve crimes without these backdoors. Certainly they have done so in the past. Mr. Comey’s comment bothers me, because as a top U.S. government official, he should have more faith in U.S. security agencies and law enforcement personnel — and if he does not, his focus should be on strengthening those agencies, and hiring and training people, rather than weakening everything else.

Note: I am one of the signers of the letter [7] that Mr. Comey refers to in his speech on cybersecurity to the Third Annual Cybersecurity Law Institute, from which the above quote comes.

References

  1. H. Abelson, R. Anderson, S. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, M. Green, S. Landau, P. Neumann, R. Rivest, J. Schiller, B. Schneier, M. Specter, and D. Weitzner, “Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications,” Technical Report MIT-CSAIL-TR-2015-026, Massachusetts Institute of Technology, Cambridge, MA, USA (July 2015).    url: http://hdl.handle.net/1721.1/97690
  2. B. Adida, C. Anderson, A. Antón, M. Blaze, R. Dingledine, E. Felten, M. Green, J. Halderman, D. Jefferson, C. Jennings, S. Landau, N. Mitter, P. Neumann, E. Rescorla, F. Schneider, B. Schneier, H. Shacham, M. Sherr, D. Wagner, and P. Zimmermann, CALEA II: Risks of Wiretap Modifications to Endpoints, Center for Democracy and Technology (May 2013).    url: https://www.cdt.org/files/pdfs/CALEAII-techreport.pdf
  3. Sir Arthur Conan Doyle, “The Adventure of the Dancing Men,” The Return of Sherlock Holmes, Dover Publications, Mineola, NY, USA (2010). isbn: 978-0486478739
  4. V. Prevelakis and D. Spinellis, “The Athens Affair,” IEEE Spectrum 44(6) pp. 26–33 (July 2007).    doi: 10.1109/MSPEC.2007.376605
  5. J. Comey, Comments at the Third Annual Cybersecurity Law Institute, Georgetown University Law Center, Washington DC, USA (May 2015).    url: https://www.justsecurity.org/23120/transcript-comey-authors-encryption-letter-uninformed-fair-minded/
  6. J. Comey, “FBI Director James Comey Cybersecurity Challenges,” video of [5] (quoted comment at 18:03–18:13).     url: http://www.c-span.org/video/?326168-1/fbi-director-james-comey-cybersecurity-challenges
  7. Letter to President Obama (May 19, 2015).    url: https://static.newamerica.org/attachments/3138--113/Encryption_Letter_to_Obama_final_051915.pdf

Tuesday, June 17, 2014

The Fourth Amendment to the Constitution and California Senate Bill SB-828

The 4th Amendment to the United States Constitution says:

“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”

California Senate Bill SB-828, “The 4th Amendment Protection Act”, proposes to ban any California government institution, political subdivision, or employee of the state acting in an official capacity from supporting any federal agency that “claims the power … to collect electronic data or metadata of any person” in a way that violates the 4th Amendment.

Like any law, this bill has good points and bad points. Section 7599.5(d) bars the use of any such information in a “criminal investigation or prosecution”, even if the information was obtained by a corporation. That currently is not banned, and adding something like that into the law is a good idea — a very good idea.

Unfortunately, the rest of the bill misses the point, and that far outweighs the good part. The bill should be defeated. Here’s why.

First, the authors of the bill object to the gathering and use of “electronic data or metadata” in violation of the 4th Amendment. But there is no objection to non-electronic data or metadata. Why not? It seems to me the threat is not just electronic; it also includes physical data and metadata such as notations in a paper log, or stopping and searching cars without cause. However, the bill doesn’t object to that.

Along those lines, the bill objects to “material support, participation, or assistance” to any federal agency that claims the above-mentioned power. But if a California county, city, or other agency claims the power, the bill is silent about providing material support, participation, and assistance to that agency. This seems hypocritical at best — it’s disallowed if the feds do it, but it’s fine if the state does.

The bill also is aimed at the wrong people — it targets Californians rather than federal agencies (and, presumably, their employees acting on behalf of the agencies). It’s rather like targeting the homeowner because the burglar was able to break into the house and steal things; the target should be the burglar, who committed the crime, and not the homeowner, who may have “materially assisted” the burglar by not properly locking up the house before running to the store. A better approach would be to make the federal agencies involved accountable in some way, or have the agents of that agency arrested for violating the rights of the citizens of California.

Also, the bill is speculative. It focuses on “any federal agency that claims the power to authorize …”. The problem, of course, is that the 4th Amendment is in the Constitution, and therefore any agency “claiming the power” is claiming it is above the law. So if the federal agency does not “claim the power”, do the restrictions in the bill apply? This is important, because I know of no group that claims the power to violate the law with impunity.

If the goal is to prohibit the state, its subdivisions, agencies, or its employees from aiding federal agencies in violations of the 4th Amendment, why not just say so directly? It seems to me that language like “to aid a federal agency in the collection of electronic data or metadata …” would achieve this end.

Next, what is “material support, participation, or assistance”? Does supplying power or water to California buildings containing offices of such agencies qualify? How about providing medical transport for injured employees? If the bill said “material support, participation, or assistance to aid a federal agency in the collection of electronic data or metadata …”, then the restriction is clear: if the agency is collecting the electronic data or metadata in violation of the 4th Amendment, they can’t get help doing so from the California government. But the bill doesn’t say that.

Section 7599.5(b) begins: “[u]tilize any assets or public funds, in whole or in part, to engage in any activity that aids a federal agency, federal agent, or corporation while providing services to the federal government …” If I make a telephone call to a federal agency to request a proposed public standard, does my use of a state telephone mean I violate the act? Is that an “activity” that aids them in some unknown way? The vagueness here is frightening.

Similar remarks apply to Section 7599.5(c), which deals with providing services. How generally is the term “services” defined, and exactly when is the provision of services banned? It seems to me that basic life and medical services fall into this category.

And ultimately, the bill does not say who decides whether a “federal agency … claims the power, by virtue of any federal law, rule, regulation, or order, to collect electronic data or metadata of any person” in a way that violates the 4th Amendment. Must the agency announce it? Does a judge decide? The California state legislature? This is critical, because interpretations of what “violates” the 4th Amendment differ. For example, if an agency obtains a warrant, does a search, and later a federal judge tosses the warrant out as violating the 4th Amendment rights of the subject of the search, does that mean those California employees who provided “material support, participation, or assistance in any form”, or used “assets or public funds”, or provided “services” to that federal agency go to jail for violating the law?

I admire and laud the spirit of the bill — violations of people’s right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures” need to be stopped. But this bill isn’t the way to do it.