Friday, November 25, 2011

Assurance Evidence of the Airport Scanner Software?

The use of X-ray backscatter technology in airport scanners (the “Advanced Imaging Technology”, or AIT, scanners) to screen prospective flyers has unleashed a storm of controversy over the safety of the devices and the protection of the images of people who go through the devices.

The technology involved in the AIT scanners uses radiation to scan the human body for irregularities that may indicate substances banned from flight. The use of radiation raises safety concerns. The TSA has repeatedly stated that the devices meet Federal standards for safety. Others have questioned this, asking that the data used to certify the devices be released.

The TSA has said that these systems are critical to their mission. For such systems, an independent agency should certify that the software works correctly and that the system is designed to shut down should the software detect a failure in itself or in the system. But how the agency performs the analysis and testing leading to certification is critical to understanding what the certification means.

Failure of Certified Systems

Certification does not mean the system works correctly, or even that it actually meets all the requirements to be certified. Electronic voting systems are an example of this. Software for electronic voting systems must be certified according to standards dictated by the state in which those systems are used. Most states use the voluntary federal standards. Yet systems certified to meet those standards continue to exhibit problems. For example, a new audit procedure conducted after an election in 2008 in Humboldt County, California revealed a bug in the vote counting system that resulted in 197 votes from one precinct not being counted. Premier Election Solutions, the vendor, acknowledged a problem with its software caused the votes to be dropped. [1]

This was not an isolated instance. Independent evaluations of electronic voting systems have found numerous deficiencies that adversaries could exploit. Some were possible only when people running the systems did not follow given procedures, which—given the fallible nature of human beings—is a possibility that must be considered. Others simply required attackers to exploit flaws in the systems. Still others were a result of the configuration of the systems after they were installed; they were not properly locked down, even when the manufacturer’s directions were followed. For a detailed discussion, see the reports from the California Top-to-Bottom Review and the Ohio Project EVEREST study, among others.

The point is that software and systems certified by federally accredited testing laboratories failed to provide the security, accuracy, and protection required to perform the functions for which the systems were designed. People not involved in the certification process discovered these deficiencies after the systems were certified.

AIT Scanners and Certification

Now relate this to the AIT scanners. There is little to no detail available on the software test procedures, the source code analysis procedures, and indeed on any penetration tests in which the goal of the testers is to subvert the software to (for example) fail to provide proper interlocking, to deliver a dangerously high dose of radiation, or to enable images to be stored or transmitted.

The concerns about safety reflect past problems with medical software. The well-known Therac-25 accidents are archetypal [2]. A study in 2001 identified 383 software-related medical device failures that, fortunately, caused no deaths or serious injuries [3]. More recently, an error in resetting a CT machine exposed patients to much higher doses of radiation than expected [4], and in 2010 an FDA study of recalled infusion pumps identified software defects as one of the most common types of reported problems [5]. In the past 25 years, health-critical software has failed repeatedly. What evidence is there that the software in the AIT scanners cannot fail in similar fashion?

The claim that storing images is impossible raises two issues. Some models used by the U. S. Marshals Service have done exactly that [6]. TSA states that in test mode the systems can store, export, and print images, but that the TSA officers at the airport cannot place the machines in test mode [7]. Does the software prevent this—and if so, how? Can someone else put the machines into test mode? Is the software implementing the test mode active, although unused? If so, could malware instruct the software to turn on the parts of test mode that allow images to be stored? How are the procedural controls that prevent TSA employees and others from taking pictures of the display implemented? What degree of assurance exists that violations of these controls would be detected?

The TSA web site has testing reports about the advanced imaging technology (AIT) in use. The report from the Johns Hopkins University Applied Physics Laboratory notes several places where software is used, but does not discuss software validation or testing procedures, or whether the software itself was analyzed in detail. To be fair, portions of the report are blacked out, so some of this information may be there. The other public reports do not mention software.

In other contexts (specifically, electronic voting systems), vendors argued that making detailed analyses public would reveal details that would enable the compromise of the software or system. The Principle of Open Design, a fundamental security design principle, states that security should never depend solely on secrecy of a design. “Security through obscurity” is sometimes acceptable as a layer of defense, but never as the only defense. The software should be robust enough, and the procedures for using it thorough enough, that knowing how the software works will not help an attacker compromise it. If the concern is that the software is proprietary, a report could describe the nature of the analysis, tests, and results while relegating proprietary information to appendices that could be redacted.

Using the scanners without releasing data about the software testing and validation prevents other scientists from evaluating the testing methodology and results. This calls into question the effectiveness of the software that does the imaging, controls access to the images, and prevents their copying or further distribution. Let us hope the TSA deals with these questions quickly and openly.


  1. K. Zetter, “Serious Error in Diebold Voting Software Caused Lost Ballots in California County—Update,” Wired (Dec. 8, 2008); available at
  2. N. Leveson and C. Turner, “An Investigation of the Therac-25 Accidents,” IEEE Computer 26(7) pp. 18–41 (July 1993).
  3. D. Wallace and D. Kuhn, “Failure Modes in Medical Device Software: An Analysis of 15 Years of Recall Data,” International Journal of Reliability, Quality & Safety Engineering 8(4) pp. 351–371 (Dec. 2001).
  4. C. Phend, “CT Safety Warnings Follow Radiation Overdose Accident,” MedPage Today (Oct. 15, 2009); available at
  5. —, Infusion Pump Improvement Initiative, Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD 20993; available at
  6. D. McCullagh, “Feds Admit Storing Checkpoint Body Scan Images,” CNET News (Aug. 4, 2010); available at
  7. Letter from G. Rossides, Acting Administrator, TSA to Congressman B. Thompson, Chairman, Committee on Homeland Security, U. S. House of Representatives (Feb. 24, 2010); available at

Sunday, November 20, 2011

Another Thought About the UC Davis Incident

Students at UC Davis are here for an education, not an assault. The students were being non-violent. Rather than listen to their concerns about tuition costs, shrinking state support, and large debts upon graduation, the University administration chose to ignore our Principles of Community and evict the students—violently.

People are calling for Chancellor Katehi’s resignation. Whether she stays or goes is not the point. The point is that never again must non-violent UC Davis students be assaulted. The administration, the police, and the students are all members of our campus community. The Principles of Community apply to all. Let us all follow them.

And I reiterate what I said before: the administration, indeed the entire University community, shares the same goals as the students. Why on earth didn’t the University demonstrate support for the students by working with them to handle the health concerns, and have some folks (not police in riot gear) there in case safety problems arose? Work with the demonstrators, not against them. I have no idea whether the news would have covered that, but I'm sure the UC Davis media folks could have made a heck of a story out of it—and I suspect the University administration would have preferred no coverage to the coverage they got.

An Opportunity Missed on the UC Davis Quad

What were they thinking?

By now, the reaction of the police at the University of California to the student protest on the Quad between Shields and the M. U. has gone viral, and the administration’s response was the classic two-step: first, it was justified because “the encampment raised serious health and safety concerns, and the resources required to supervise this encampment could not be sustained, especially in these very tight economic times when our resources must support our core academic mission” and “[w]e are saddened to report that during this activity, ... pepper spray was used” [1]; then, after the video of the police officer spraying pepper spray at non-violent, sitting protesters appeared, the “use of pepper spray as shown on the video is chilling to us all,” and the Chancellor announced she would form a task force to look into it [2].

As a UC Davis faculty member, I am quite surprised that the people who made the decision to order the police in did not think of our “Principles of Community” [3] first. The first principle is:

We affirm the inherent dignity in all of us, and we strive to maintain a climate of justice marked by respect for each other.

The students were protesting tuition hikes. Tuition has almost tripled over the past decade (from $4595 in 2001-2002 to $13,080 in 2010-2011) [4]. Students are having trouble financing their education. And non-violent, civil protest is a time-honored way of making one's concerns public, in hopes of effectuating change.

The UC Davis administration, and indeed the entire UC system administration, seem not to like the tuition hikes either. President Yudof announced that UC “will not raise tuition this school year” [5].

Suppose Chancellor Katehi had gone down to the quad, talked to the students, and (since apparently health was a concern [1]) provided some Port-a-Potties and other supplies to ameliorate the health and safety issues? Surely that would have been a lot cheaper than what will come from the current approach. And what would the news have reported—administrators and students united in the goal of making education more affordable, and better, in these tough times! I suspect that would have given UC Davis much better publicity than what they are getting now, and underscored the seriousness of the problem, because the UC administration and the students would have been working together instead of being in conflict.

Perhaps, had the administration given a bit more thought to the situation, and had the police used a bit more restraint, the University would be able to use the money to support “our core academic mission” rather than spending it on public relations, the inevitable lawsuits, and compensating for the money that alumni will no longer contribute.


  1. Letter from Chancellor Katehi to the UC Davis Community (Nov. 18, 2011 at 9:00 PM PST)
  2. Letter from Chancellor Katehi to the UC Davis Community (Nov. 19, 2011 at 11:12 AM PST)
  3. “The Principles of Community”, University of California at Davis; available at
  4. “Annual Fees and Tuition for Full-Time Attendance, 1997-2010”, University of California at Davis; available at
  5. “UC: No Tuition Increase Even if Mid-Year Budget Trigger Pulled”, The Sacramento Bee (Nov. 8, 2011); available at

Thursday, November 17, 2011

Update to SOPA thoughts from Nov. 16, 2011

Amazingly, none of the witnesses at yesterday’s hearing knew enough about the effect of SOPA on the efforts to harden the DNS against various (very nasty) attacks. These attacks expose people to bogus web sites (among other things), raising attacks like phishing to new heights. DNSSEC, a protocol that would ameliorate many of these problems, will be much less functional should SOPA pass. (Actually, it would work until an order to associate a different IP address with a host or domain name was issued. Once the change is made—anywhere—the DNSSEC entry will show up as corrupted, and the system will break.) So an effort to make the Internet more secure (as the supporters of SOPA claim) will actually have the opposite effect. I guess it depends on how you define “security” and whom you are trying to protect. SOPA certainly won’t protect individual Internet users.
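The failure mode can be sketched in miniature. Real DNSSEC uses RRSIG records and public-key chains of trust; this simplified sketch substitutes an HMAC and hypothetical names and addresses, but the point carries over: a party ordered to rewrite the IP address cannot produce a matching signature, so validating resolvers see the answer as corrupted.

```python
import hashlib
import hmac

# Toy stand-in for a DNSSEC-signed zone: each A record carries a
# signature computed by the zone owner. (Real DNSSEC uses RRSIG
# records and public-key cryptography; an HMAC keeps the sketch short.)
ZONE_KEY = b"zone-signing-key"  # hypothetical key

def sign(name, ip):
    """Zone owner's signature over the record contents."""
    return hmac.new(ZONE_KEY, f"{name} A {ip}".encode(), hashlib.sha256).hexdigest()

def verify(name, ip, sig):
    """What a validating resolver does before trusting an answer."""
    return hmac.compare_digest(sign(name, ip), sig)

# The zone owner publishes a signed record.
name, real_ip = "example.test", "192.0.2.10"
sig = sign(name, real_ip)
assert verify(name, real_ip, sig)       # untampered record validates

# An intermediary ordered to redirect the name swaps in a bogus IP,
# but cannot forge a signature that matches.
bogus_ip = "203.0.113.99"
assert not verify(name, bogus_ip, sig)  # resolver sees corruption
```

A validating resolver in this position cannot distinguish a court-ordered rewrite from an attack, so the lookup simply fails, which is the breakage described above.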

Oh, and suppose you bypass the DNS entirely—just use the IP address directly? For example, if my host is blacklisted because of some of my research, you could always go to to see the “blacklisted” work. Back to the pre-DNS days of host tables! (Note: if you're interested in problems like the “insider” problem, data sanitization, teaching robust, a.k.a. “secure” programming, and modeling election processes, please do visit. Anyone who wants to blacklist my host for those has, I think, one of several possible severe problems!)
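The pre-DNS fallback is easy to picture. A minimal sketch (names and addresses are hypothetical) of a local host table consulted before any DNS query, much as /etc/hosts still works today:

```python
# A minimal host table: the pre-DNS way of mapping names to addresses,
# still honored today via /etc/hosts. Names and IPs are hypothetical.
HOST_TABLE = {
    "research.example.test": "198.51.100.7",
}

def resolve(name):
    """Consult the local host table first; only fall back to DNS if absent."""
    if name in HOST_TABLE:
        return HOST_TABLE[name]  # DNS (and any DNS blacklist) is bypassed
    raise LookupError(f"{name}: would need a DNS query here")

print(resolve("research.example.test"))  # prints "198.51.100.7"
```

If a court order purged the DNS record, anyone holding the address in a table like this would be unaffected, which is why DNS-based blocking mostly inconveniences people who were never its target.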

The point is that, apparently, not only do lawmakers not understand the technology, they either don’t know what questions to ask, or they don’t want to understand the capabilities. This does not speak well for the effectiveness of legislation.

Wednesday, November 16, 2011

Random Thoughts on SOPA

It seems that the conflict between technology and the law is heating up again in the intellectual property arena. This time, it’s the “Stop Online Piracy Act”, H. R. 3261.

Regardless of what you think about the ultimate goals of the bill, the mechanisms it provides for achieving those goals are horribly counterproductive. Specifically, one mechanism the bill provides is to allow courts to require ISPs to tamper with the DNS to return bogus records for sites that infringe copyright, among other things. This effectively breaks the DNS because intermediate hosts can now legitimately (well, legally) alter records being sent from the DNS server. Further, the part about requiring DNS servers to remove those records won’t help; the records need only be cached or stored in a DNS server in a foreign country. Somehow I don’t think those countries will react kindly to an order from a U. S. company or court to remove or corrupt those records.
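A sketch of why ordering one set of resolvers to drop a record does not remove it from the network (resolver roles, domain names, and addresses here are hypothetical):

```python
# Two toy resolvers sharing the same underlying zone data. One is
# subject to a blocking order; the other (say, in another
# jurisdiction, or holding a cached copy) is not.
ZONE = {"infringing.example.test": "192.0.2.55"}

def blocked_resolver(name, blocklist):
    """A resolver compelled by court order to drop blocked names."""
    if name in blocklist:
        return None  # record withheld per the order
    return ZONE.get(name)

def foreign_resolver(name):
    """A resolver outside the order's reach: the record survives."""
    return ZONE.get(name)

blocklist = {"infringing.example.test"}
assert blocked_resolver("infringing.example.test", blocklist) is None
assert foreign_resolver("infringing.example.test") == "192.0.2.55"
```

Any client pointed at the unblocked resolver, or with the answer already cached, gets the record exactly as before; the order changes who can look the name up, not whether the record exists.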

It’s actually worse than just that. One of the most important properties of the Internet is its robustness. In the 1990s, an earthquake shook Los Angeles and caused one of the main providers of network connectivity between the U. S. and Asia to lose a lot of equipment. Capacity for that link dropped greatly. It took the network about 3 hours to begin routing traffic from the U. S. to Asia through Europe, and the Internet continued to function. When the provider restored its capacity, the network rearranged the load as before. This is not surprising, because the Internet protocols are based on those of the old ARPANET, which was designed to function in the face of catastrophic failure (like having part of the network disappear due to a nuclear attack).

This bill strikes at that robustness. It bars people from building tools to circumvent the blocks that may be put into place under the bill. Assuming such a ban could be enforced (read on!), the network would no longer be robust, because people could not develop ways to keep information accessible should a block be put into place—even if the block were put into place erroneously. That defeats one of the network’s key properties.

But could such a ban be enforced? Clearly not throughout the world; if the U. S. cannot prevent the flow of proscribed technologies to nations it considers unfriendly, how can it persuade those nations to support the ban? There’s also an irony in the U. S. supporting people who bypass restrictions on information flow imposed by “their” governments, but supporting restrictions by “our” government. Regardless of your political philosophy, the tools that can be used for one purpose can be used for another; and banning their creation to achieve one of these goals also allows the achieving of the other goal. I doubt this is what the U. S. lawmakers intend, but it is an effect of their actions should this law pass.

And how successful are the rules barring such infringement within the U. S.? It is still very easy to find software that allows you to copy protected DVDs. So I question whether such a ban could be enforced even within the U. S.

Finally, from what I understand of this bill, if you serve notice on a web site or service provider that it needs to stop providing access to an infringing site, the provider has 5 days to block all access by its subscribers. So, if (for example) someone in Congress posts a chapter from a book, could the copyright holder require that ISPs block access to that congressperson’s web site (or, for that matter, the entire Congressional web site), and that DNS records for Congress be purged? I’m not a lawyer, but the thought of it is enticing. It reminds me of the senator who said essentially that software vendors should be able to destroy the systems of people who used their software without paying for it. It turned out that the web server on the senator’s system used a commercial program that had not been registered, and so had clearly not been paid for. The software disappeared after this became common knowledge.

If our lawmakers are going to legislate the use of technology, it would behoove them to understand the capabilities they are trying to control. Otherwise, they won’t realize the limits of their approaches—and unenforceable laws breed contempt for the law, which is not good.