
Weak Forensic Science Has High Cost

1 March 2010
This month’s guest columnist, building on the themes of recent National Research Council reports, makes a strong case for more rigor and statistics in forensic science. Calling for a new agency to lead the way, Spiegelman urges fellow statisticians to help realize the changes many deem necessary.
~Steve Pierson, ASA Director of Science Policy, pierson@amstat.org

Contributing Editor
Clifford Spiegelman is a distinguished professor of statistics at Texas A&M University, where he has been on the faculty for 23 years. He is also a senior research scientist at the Texas Transportation Institute. His applied research interests include chemometrics, transportation statistics, environmetrics, and statistical forensics.

Forensic science is perceived by many as the “magic bullet” that links evil deeds to specific people. This is the story line of movies, television shows, and books. Few forensic examiners or technicians in these shows make errors, and when errors are made, they are caused by ‘rogue’ forensic examiners or overzealous technicians who do not follow accepted procedures.

In the real world, forensic science is used to determine occurrences and reconstruct crimes. It is used to identify suspects and possible crime scenes and to eliminate others. It also is used in criminal trials and appeals. Key to forensic science’s use in the real world is the confidence that law enforcement, the judicial system, and society at large place in it.

As currently constructed, however, the practice of forensic science should largely get a no-confidence vote, with the possible exception of DNA evidence (even though the scientific community has yet to be allowed access to the DNA database). Indeed, Michael Saks and Jonathan Koehler, in a 2005 Science review article, name forensic science testing errors as a contributing factor in 63% of the wrongful convictions in 86 DNA exoneration cases studied by the Innocence Project. False or misleading statements by forensic examiners contributed to 27% of the false convictions studied.

The painful truth is that nearly all forensic procedures have been developed without much involvement from the statistical community or enough involvement from the independent, university-based scientific community or federal research labs.

As a result, forensic results are typically stated with uncertainty statements that cannot be supported. For example, it is typical in firearm toolmark identifications to state that, to a practical certainty, the defendant’s gun fired the bullets found in a decedent. Two recent National Research Council (NRC) reports (Strengthening Forensic Science in the United States: A Path Forward and Ballistic Imaging) conclude there is no statistical foundation for such an absolute statement. Also, some federal and state jurisdictions recently ruled that firearm toolmark examiners may only testify that it is more likely than not that the defendant’s gun fired the bullets found in a decedent. (See State of Ohio v. Anderson and U.S. v. Glynn.) That is, the courts require only a better than 50-50 chance of a match.
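
To see why “practical certainty” outruns the data, consider a back-of-the-envelope calculation. (This is a sketch with hypothetical numbers, not taken from the NRC reports.) Even an examiner with a flawless validation record can support only a bounded error rate, never certainty:

    # A minimal sketch with hypothetical numbers (not from the NRC
    # reports): zero observed errors in a validation study supports
    # only a bounded error rate, not "practical certainty."

    n_trials = 600        # hypothetical number of validated comparisons
    errors_observed = 0   # assume a flawless record

    # Rule of three: with 0 errors in n independent trials, an
    # approximate 95% upper confidence bound on the true error rate
    # is 3/n.
    if errors_observed == 0:
        upper_bound_95 = 3 / n_trials
        print(f"Approx. 95% upper bound on error rate: {upper_bound_95:.4f}")
        # Prints 0.0050: an error rate as high as 1 in 200 cases is
        # still consistent with a perfect record over 600 trials.

Reporting a bound of this kind, rather than a claim of certainty, is the sort of verifiable uncertainty statement the NRC reports call for.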

The broader scientific community has noted the blatant failures of forensic science, but the justice system has not paid careful enough attention. In a December 3, 2003, editorial, Science editor-in-chief Donald Kennedy wrote, “It’s not that fingerprint analysis is unreliable. The problem, rather, is that its reliability is unverified either by statistical models of fingerprint variation or by consistent data on error rates. Nor does the problem with forensic methods end there. The use of hair samples in identification and the analysis of bullet markings exemplify kinds of ‘scientific’ evidence whose reliability may be exaggerated when presented to a jury.” Other points of view, mostly from within the forensic science community, are more sympathetic to the current state of forensic science.

Several colleagues and I recently showed that the House Select Committee on Assassinations’ (HSCA) compositional bullet lead analysis (CBLA) study in the JFK investigation was seriously flawed. The two other assassinations studied by the HSCA were those of Martin Luther King Jr. and Sen. Robert F. Kennedy. I have looked at the forensic aspects of both investigations. In the Martin Luther King Jr. investigation, forensic science likely could have provided more help, and the issue can still be addressed. The practice of firearm toolmark examiners then (and, for the most part, now) was to attribute a weapon to a crime scene to a practical certainty. The other possible decisions by examiners are that the weapon was certainly not involved in the crime (a finding rarely made if the brand of weapon is a possibility) or that the findings are inconclusive.


2 Comments

  • William Thompson said:

    I agree that forensic scientists should characterize their findings in a quantitative manner, but I don’t understand what the term “chance of a match” means. If the “chance of a match” is 80%, does that mean there is an 80% chance that the matching items have a common source? (If so, how would a forensic scientist be able to make that assessment without considering evidence beyond the forensic analysis?) Or does it mean there is an 80% chance of a match IF the matching items have (or do not have?) a common source? The latter interpretation seems more tenable, but will members of a jury understand it?

  • cliff spiegelman said:

    The chance of a match here refers to a common source. The toolmark examiner in these court cases has already declared a match, by methods that are not based upon verifiable statistical foundations (as stated by the NRC). The standard testimony is that the match is a practical certainty to a single weapon. Some courts have modified that to a better than 50% chance (and I believe they mean to the alleged murder weapon). A proper error statement should have verifiable error rates: one error rate for matches that should have been found but were not, and another for declared matches that are not true matches (see the sketch below). Two NRC panels found that there is no basis for claiming a bullet matches a unique weapon. A suggestion is that an estimate be made of the number of weapons that could match the toolmarks on a bullet. This can be done by sampling, by modeling the manufacturing and wear processes, and by searching computer records of toolmark images. Much as the NRC committee did for compositional bullet lead analysis (CBLA), a match should be stated to a class of weapons (or, for CBLA, to bullets with common or coincidentally similar sources). With research, a reasonable upper bound to this number may be estimated. It will depend upon the weapon, the ammunition used, etc. Thanks for your question.
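
    A minimal sketch of these two error rates, using made-up validation counts (all numbers here are illustrative assumptions, not data from any real study):

        # Illustrative sketch of the two verifiable error rates described
        # above; all counts are hypothetical, not from any real study.

        # Hypothetical validation study: examiners compare bullet pairs
        # whose true source (same weapon or different weapons) is known
        # in advance.
        same_source_pairs = 400   # pairs truly fired by the same weapon
        missed_matches = 12       # of those, called "no match"/inconclusive

        diff_source_pairs = 400   # pairs truly fired by different weapons
        false_matches = 6         # of those, declared a "match" anyway

        # Error rate 1: matches that should have been found but were not.
        false_negative_rate = missed_matches / same_source_pairs

        # Error rate 2: declared matches that are not true matches.
        false_positive_rate = false_matches / diff_source_pairs

        # A crude estimate of how many weapons in a candidate population
        # could match the toolmarks by chance, given the false-positive
        # rate and an assumed population size N (both assumptions).
        N = 500_000   # hypothetical weapons of the same make/class
        expected_chance_matches = false_positive_rate * N

        print(f"False-negative rate: {false_negative_rate:.3f}")  # 0.030
        print(f"False-positive rate: {false_positive_rate:.3f}")  # 0.015
        print(f"Weapons matching by chance: {expected_chance_matches:.0f}")

    The point of the last line: the expected number of coincidentally matching weapons is driven by the false-positive rate and the size of the candidate population, which is why a match is properly stated to a class of weapons rather than to a unique one.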