
Friday, November 25, 2011

Assurance Evidence of the Airport Scanner Software?

The use of X-ray backscatter technology in airport scanners (the “Advanced Imaging Technology”, or AIT, scanners) to screen prospective flyers has unleashed a storm of controversy over the safety of the devices and over what happens to the images of the people who pass through them.

The technology involved in the AIT scanners uses radiation to scan the human body for irregularities that may indicate substances banned from flight. The use of radiation raises safety concerns. The TSA has repeatedly stated that the devices meet Federal standards for safety. Others have questioned this, asking that the data used to certify the devices be released.

The TSA has said that these systems are critical to its mission. For such systems, an independent agency should certify that the software works correctly, and the software should be designed to shut the system down if it detects any failure in itself or in the rest of the system. But how the agency performs the analysis and testing that lead to certification is critical to understanding what that certification means.
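To make the fail-safe requirement concrete, consider a minimal sketch of such a control loop, written in Python. It is purely illustrative, not the scanners' actual software; every name in it (check_interlocks, check_memory_crc, read_dose_rate, MAX_DOSE_RATE) is hypothetical.

    # Illustrative fail-safe control loop: run self-tests every cycle
    # and force a safe shutdown on any detected failure. All hardware
    # calls below are hypothetical stand-ins.

    MAX_DOSE_RATE = 0.05  # hypothetical safety threshold, arbitrary units

    def self_test(hardware):
        """Return True only if every internal check passes."""
        return (hardware.check_interlocks()
                and hardware.check_memory_crc()
                and hardware.read_dose_rate() <= MAX_DOSE_RATE)

    def control_loop(hardware):
        while True:
            if not self_test(hardware):
                hardware.shutdown()  # fail safe: stop emitting radiation
                raise SystemExit("self-test failed; system halted")
            hardware.scan_one_frame()

Certification should speak to exactly this kind of behavior: what the self-tests cover, how often they run, and what evidence shows that the shutdown path itself works.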

Failure of Certified Systems

Certification does not mean the system works correctly, or even that it actually meets all the requirements to be certified. Electronic voting systems are an example. Software for electronic voting systems must be certified according to standards dictated by the state in which those systems are used; most states use the voluntary federal standards. Yet systems certified to meet those standards continue to exhibit problems. For example, a new audit procedure conducted after a 2008 election in Humboldt County, California revealed a bug in the vote-counting system that resulted in 197 votes from one precinct not being counted. Premier Election Solutions, the vendor, acknowledged that a problem with its software caused the votes to be dropped. [1]
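To see how certified software can silently drop votes, consider a deliberately simple sketch of a batch-counting bug. It is illustrative only, not the vendor's code; press accounts of the Humboldt County incident reported that the software dropped the first batch ("deck") of ballots [1].

    # Illustrative off-by-one bug that silently drops the first batch
    # ("deck") of ballots. Toy data; not the vendor's actual code.

    decks = [
        ["A", "B", "A"],   # deck 0 -- silently skipped by the bug
        ["B", "B"],
        ["A"],
    ]

    def count_votes_buggy(decks):
        tally = {}
        for i in range(1, len(decks)):   # BUG: starts at 1, skipping deck 0
            for vote in decks[i]:
                tally[vote] = tally.get(vote, 0) + 1
        return tally

    def count_votes_fixed(decks):
        tally = {}
        for deck in decks:               # correct: every deck is counted
            for vote in deck:
                tally[vote] = tally.get(vote, 0) + 1
        return tally

    print(count_votes_buggy(decks))  # {'B': 2, 'A': 1} -- deck 0 lost
    print(count_votes_fixed(decks))  # {'A': 3, 'B': 3}

The buggy version passes casual testing because the totals still look plausible; only an audit that compares the count against the paper record catches the loss.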

This was not an isolated instance. Independent evaluations of electronic voting systems have found numerous deficiencies that adversaries could exploit. Some were exploitable only when the people running the systems failed to follow prescribed procedures, a possibility that human fallibility makes very real. Others simply required attackers to exploit flaws in the systems themselves. Still others arose from how the systems were configured after installation; they were not properly locked down, even when the manufacturers' directions were followed. For a detailed discussion, see the reports from the California Top-to-Bottom Review and the Ohio Project EVEREST study, among others.

The point is that software and systems certified by federally accredited testing laboratories failed to provide the security, accuracy, and protection required to perform the functions for which the systems were designed. People not involved in the certification process discovered these deficiencies after the systems were certified.

AIT Scanners and Certification

Now relate this to the AIT scanners. Little to no detail is available about the software test procedures, the source-code analysis procedures, or any penetration tests in which the testers' goal is to subvert the software so that it (for example) fails to provide proper interlocking, delivers a dangerously high dose of radiation, or allows images to be stored or transmitted.
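As one example of what such a penetration test might look like, here is a sketch of an adversarial test that commands an out-of-range dose and checks that an interlock refuses it. Everything here (ScannerController, InterlockError, MAX_DOSE) is invented for illustration; nothing is drawn from the actual scanner software.

    # Sketch of an adversarial safety test: request a dangerous dose and
    # verify that the (hypothetical) software interlock rejects it.

    MAX_DOSE = 0.05  # hypothetical regulatory limit, arbitrary units

    class InterlockError(Exception):
        pass

    class ScannerController:
        def set_dose(self, dose):
            if dose > MAX_DOSE:
                raise InterlockError("commanded dose exceeds safety limit")
            self.dose = dose

    def test_interlock_blocks_overdose():
        ctl = ScannerController()
        try:
            ctl.set_dose(10 * MAX_DOSE)   # attack: ask for a dangerous dose
        except InterlockError:
            return                        # interlock tripped; test passes
        raise AssertionError("overdose accepted; interlock failed")

    test_interlock_blocks_overdose()

A real evaluation would go much further, trying to bypass the check rather than merely trigger it; but even evidence of tests at this level is absent from the public record.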

The concerns about safety reflect past problems with medical software. The well-known Therac-25 accidents are archetypal [2]. A study in 2001 identified 383 software-related medical device failures that, fortunately, caused no deaths or serious injuries [3]. More recently, an error in resetting a CT machine exposed patients to much higher doses of radiation than expected [4], and in 2010 an FDA study of recalled infusion pumps identified software defects as one of the most common types of reported problems [5]. In the past 25 years, health-critical software has failed repeatedly. What evidence is there that the software in the AIT scanners cannot fail in similar fashion?

The claim that storing images is impossible raises two issues. First, some models used by the U. S. Marshals Service have done exactly that [6]. Second, the TSA states that in test mode the systems can store, export, and print images, but that the TSA officers at the airport cannot place the machines in test mode [7]. Does the software prevent this, and if so, how? Can someone else put the machines into test mode? Is the software implementing the test mode present, although unused? If so, could malware instruct the software to turn on the parts of test mode that allow images to be stored? How are the procedural controls that prevent TSA employees and others from taking pictures of the display implemented? What degree of assurance exists that violations of these controls would be detected?
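The worry about latent test-mode code can be made concrete with a small sketch. If the image-export path is compiled in and merely gated by a flag, then anything that can flip the flag (a configuration edit, malware, a privileged insider) regains the capability. All the names below are hypothetical.

    # Sketch of the latent-code risk: export code that is present but
    # gated by a single flag. Flipping the flag restores the capability.

    class Scanner:
        def __init__(self):
            self.test_mode = False      # supposedly unreachable in the field

        def capture(self, image):
            if self.test_mode:
                self._export(image)     # latent path: the code is still there
            self._display_and_discard(image)

        def _export(self, image):
            print("image written to disk")   # behavior said to be impossible

        def _display_and_discard(self, image):
            pass

    s = Scanner()
    s.test_mode = True    # one write re-enables storage
    s.capture("frame-1")

Assurance would require showing either that the export code is absent from fielded machines, or that the gate cannot be flipped by any party, including malware.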

The TSA web site posts testing reports about the AIT systems in use. The report from the Johns Hopkins University Applied Physics Laboratory notes several places where software is used, but does not discuss the software validation or testing procedures, or whether the software itself was analyzed in detail. To be fair, portions of the report are blacked out, so some of this information may be there. The other public reports do not mention software at all.

In other contexts (specifically, electronic voting systems), vendors have argued that making detailed analyses public would reveal information enabling the compromise of the software or system. The Principle of Open Design, a fundamental security design principle, states that security should never depend solely on the secrecy of a design. “Security through obscurity” is sometimes acceptable as one layer of defense, but never as the only defense. The software should be robust enough, and the procedures for using it thorough enough, that knowing how the software works will not help an attacker compromise it. If the concern is that the software is proprietary, a report could describe the nature of the analysis, the tests, and the results, relegating proprietary information to appendices that could be redacted.
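The distinction is easy to illustrate. In the sketch below, the algorithm (HMAC-SHA256) is completely public, yet an attacker who reads every line still cannot forge a valid tag without the key. The key is a placeholder and the example is purely illustrative, unrelated to the scanner software.

    # Open Design in miniature: the algorithm is public; only the key
    # is secret. Knowing the code does not help an attacker forge tags.

    import hashlib
    import hmac

    key = b"replace-with-a-real-secret-key"   # placeholder secret
    message = b"certified scan log entry"

    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(tag)

That is the standard the scanner software should meet: publishing the analyses and test results should cost it nothing.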

Beyond any risk in the scanners themselves, withholding the data about software testing and validation prevents other scientists from evaluating the testing methodology and results. That undermines the credibility of claims about the effectiveness of the software that does the imaging, controls access to the images, and prevents their copying or further distribution. Let us hope the TSA deals with these questions quickly and openly.

References

  1. K. Zetter, “Serious Error in Diebold Voting Software Caused Lost Ballots in California County — Update,” Wired (Dec. 8, 2008); available at http://www.wired.com/threatlevel/2008/12/unique-election/
  2. N. Leveson and C. Turner, “An Investigation of the Therac-25 Accidents,” IEEE Computer 26(7) pp. 18–41 (July 1993).
  3. D. Wallace and D. Kuhn, “Failure Modes in Medical Device Software: An Analysis of 15 Years of Recall Data,” International Journal of Reliability, Quality & Safety Engineering 8(4) pp. 351–371 (Dec. 2001).
  4. C. Phend, “CT Safety Warnings Follow Radiation Overdose Accident,” MedPage Today (Oct. 15, 2009); available at http://www.medpagetoday.com/Radiology/DiagnosticRadiology/16455
  5. —, Infusion Pump Improvement Initiative, Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD 20993; available at http://www.fda.gov/downloads/MedicalDevices/ProductsandMedicalProcedures/GeneralHospitalDevicesandSupplies/InfusionPumps/UCM206189.pdf
  6. D. McCullagh, “Feds Admit Storing Checkpoint Body Scan Images,” CNET News (Aug. 4, 2010); available at http://news.cnet.com/8301-31921_3-20012583-281.html
  7. Letter from G. Rossides, Acting Administrator, TSA to Congressman B. Thompson, Chairman, Committee on Homeland Security, U. S. House of Representatives (Feb. 24, 2010); available at http://epic.org/privacy/airtravel/backscatter/TSA_Reply_House.pdf
