
Forensic Statistics

1 September 2015
Christopher Saunders, Assistant Professor at South Dakota State University

Saunders, originally from California, is an assistant professor of mathematics and statistics at South Dakota State University. In 2006, he was recruited to work for the FBI on pattern recognition and handwriting identification and spent the next two years as an intelligence community postdoctoral research fellow at George Mason University. Visit the South Dakota State University website for details about his work.

Forensic statistics, as the name suggests, is fundamentally the application of statistics to problems in forensic science. Over the past 100 years, a number of distinct approaches to quantifying forensic evidence have been developed, and the field favors statisticians with strong backgrounds in mathematical statistics and statistical pattern recognition.

The Different Paths in Forensic Statistics

While working on this article for Amstat News, I attended a session on forensic statistics at the University of Salzburg and asked two of the speakers to provide short narratives about their experiences working in forensic science. The first is Jeanette Leegwater, a forensic scientist at the Netherlands Forensic Institute who is pursuing her PhD at the intersection of statistics and forensic science. The second is Mark Lancaster, an assistant professor at Northern Kentucky University. He has served as a program manager for various research enterprises in forensic science for the federal government.

Jeanette Leegwater

My career in forensic science grew out of a childhood love of numbers, puzzles, and solving challenges. After studying biology and completing a bachelor’s degree in mathematics at the University of Amsterdam (UvA), I was drawn to forensic science. More than the other sciences, forensic science posed questions that challenged my problem-solving abilities; these were puzzles worthy of and needing to be solved. Furthermore, it is an exciting area of expertise, in which you encounter a broad range of disciplines, and I could apply all of my previous scientific knowledge.

During my time in the master’s program in forensic science at the UvA, I developed an interest in forensic statistics, in which the interpretation of evidence within a probabilistic framework is central. This motivated me to do a reading course with Marjan Sjerps, a forensic statistician at the Netherlands Forensic Institute (NFI). We studied Bayesian networks, which use a graphical model to provide insight into the probabilistic reasoning of a forensic statistician. I did an internship at the Latent Fingerprint Department of the NFI, where I measured ridge density in fingermarks and studied its use in forensic casework. There, I began to develop the programming skill set necessary to computationally measure the ridge density of a large number of prints. After the internship, I successfully applied for a position as a scientific researcher within the latent print department of the NFI.
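To give a flavor of the idea, here is a minimal sketch (with illustrative numbers, not NFI casework) of how even a two-node Bayesian network propagates an observation through Bayes’ rule to update belief in a source proposition:

```python
# A minimal sketch of the idea (illustrative numbers, not NFI casework):
# a two-node Bayesian network H -> E, where H is the proposition that the
# suspect is the source of the fingermark and E is a reported match.

prior_h = {True: 0.5, False: 0.5}            # Pr(H): prior on the source proposition
p_match_given_h = {True: 0.95, False: 0.02}  # Pr(E = match | H)

def update_on_match(prior, likelihood):
    """Propagate the observation E = match through the network via Bayes' rule."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

posterior = update_on_match(prior_h, p_match_given_h)
print(f"Pr(H | match) = {posterior[True]:.3f}")  # 0.979 with these numbers
```

Real forensic networks have many more nodes (transfer, persistence, examiner error), but the update logic is the same.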

Besides blinding case files and assuring quality in casework, the main goal of my work is to make the comparison of fingerprints objective using modern statistical methods. Currently, the fingerprint experts at the NFI estimate the evidential value of a fingerprint comparison using their knowledge and expertise, and it is essential that their statements be supported by objective and statistically rigorous research.

Mark Lancaster

My introduction to forensic science was through the Intelligence Community Postdoctoral Program, where I worked on issues involving handwriting recognition in the context of questioned documents at George Mason University. This work later led to a short career as a federal program manager, during which I worked on solving similar statistical problems in the nation’s best interest.

I credit my successes in forensic science research primarily to the university consulting experiences I had while learning mathematical statistics at the University of Kentucky and applying what I learned from those experiences at George Mason University. As John Tukey said, “The best thing about being a statistician is that you get to play in everyone’s backyard.” Of course, it is nice to be invited to play, and even better to be allowed to bring over more friends. Without a strong foundation in learning how to help other scientists with their statistical analyses, “playtime” in forensics would be limited.

As the questioned document research team at George Mason gained understanding of how statistics had been applied to forensics, we found there were many ways to improve on the analyses that were being used. We also found that the underlying theory behind some of the calculations was incomplete and that further research would be needed to justify them. By being able to explain these issues to practitioners of forensic science, our “playtime” and access to data increased. Only by carefully uncovering and explaining these underlying issues have we been able to make progress in improving the application of statistics to forensic science. Based on these successes, a new group of graduate-level statistical-forensic-science researchers is being educated by Christopher Saunders and Cedric Neumann at South Dakota State University.

Returning to academia at Northern Kentucky University, I am delighted to restart research that applies statistics to forensic science. There is still much work to be done in understanding and statistically characterizing trace evidence, including creating systems that process various forms of evidence to assist forensic examiners and statistical forensic scientists. With my dearest and best friends, I am introducing new undergraduates to these issues, hoping to continue the creation of more statistical forensic scientists.

Forensic statistics wasn’t a subfield of statistics I was aware of during my graduate school years. It wasn’t until Donald Gantz from George Mason University contacted me to discuss a project focused on the “interpretation and presentation” of the results from a computationally complex statistical-pattern-recognition system that I became aware of the field.

The handwriting identification system, developed by a group led by Gantz and Mark Walsh of Sciometrics, was designed to determine the writer of a handwritten note. After I grasped what the problem required, I was excited to realize forensic statistics was an area in which I could use everything I had learned—from asymptotics to subjective Bayesian methods to computational methods and even experimental design—in a seamless whole.

Out of my discussions with Gantz regarding the methods for approaching the handwriting problem, I was offered an Intelligence Community Postdoctoral Research Fellowship to work with him on providing statistical support to the FBI and broader intelligence community. This fellowship is where I received my training in forensic evidence interpretation, machine learning, and statistical pattern recognition. My mentors throughout have been JoAnn Buscaglia of the FBI Laboratory Division, John Miller at George Mason University, Kathi Taylor, and Gantz.

During my fellowship, I found statistics applicable to many topics within the forensics field, such as estimating the accuracy of new forensic technologies for handwriting identification and trying to define what forensic scientists mean by “individuality.” Toward the end of the fellowship, I started to consider myself a forensic statistician specializing in forensic evidence interpretation related to the identity of source.

Forensic evidence interpretation is a specialized area of forensic science concerned with summarizing or quantifying the value of forensic evidence for decision makers, so that they can choose between competing propositions for how the forensic evidence arose. A number of renowned statisticians have worked in this area over the years; however, there is still a distinct divide over how forensic scientists think they should present forensic evidence, with most falling into one of two camps. On one hand, a number of evidence interpretation experts focus on using subjective Bayesian inference as proposed by Dennis Lindley in his paper “On a Problem in Forensic Science.” The second major group uses a fusion of Neyman-Pearson and Fisherian-like “p-values” in a process known as the “two-stage process.”
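For readers new to the first camp, the core of Lindley’s framing is the odds form of Bayes’ theorem, in which the likelihood ratio carries the entire evidential contribution of the findings. This is the standard textbook formulation rather than any laboratory’s specific procedure:

```latex
% Odds form of Bayes' theorem: the evidence E updates the odds on the
% prosecution proposition H_p against the defense proposition H_d.
\[
\underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
\;\times\;
\underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
\]
```

The appeal of this formulation is the division of labor it implies: the forensic scientist reports only the likelihood ratio, while the prior and posterior odds remain the province of the court.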

The role I have fallen into as a forensic statistician has mainly been to help forensic scientists choose and/or develop methods for presenting and interpreting forensic evidence in a manner that is both statistically rigorous and has a high degree of fidelity to the manner in which the forensic scientist chooses to think about the forensic identification process. I have found there is a major lack of formally developed statistical methods for interpreting and presenting the complex evidence forms that have become commonplace in forensic science. This lack of formal statistical methods has led to a proliferation of ad hoc methods for evidence interpretation, which, in turn, has led to a large number of open statistical research questions.

While helping in this work, I tend to rely on my training in the different statistical paradigms related to hypothesis testing and model selection. A secondary role I have taken on has been to provide estimates of the performance of an evidence-interpretation method in a given population. This usually takes the form of estimating the rates of misleading evidence in favor of the prosecution and defense for Bayesian or likelihood-based approaches.
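To make that secondary role concrete, here is a minimal sketch with toy numbers (not a validated forensic tool) of how rates of misleading evidence can be estimated from likelihood ratios computed on validation comparisons whose ground truth is known:

```python
# A minimal sketch with toy data (not a validated forensic tool):
# estimating rates of misleading evidence from likelihood ratios (LRs)
# computed on comparisons whose ground truth is known.

def misleading_evidence_rates(lr_same_source, lr_diff_source):
    """RMEP: rate of misleading evidence in favor of the prosecution,
    i.e., different-source comparisons whose LR nonetheless exceeds 1.
    RMED: rate of misleading evidence in favor of the defense,
    i.e., same-source comparisons whose LR nonetheless falls below 1."""
    rmep = sum(lr > 1 for lr in lr_diff_source) / len(lr_diff_source)
    rmed = sum(lr < 1 for lr in lr_same_source) / len(lr_same_source)
    return rmep, rmed

# Toy LRs from known same-source and known different-source pairs.
same_source_lrs = [12.0, 30.5, 0.8, 150.0, 4.2]
diff_source_lrs = [0.01, 0.3, 1.7, 0.05, 0.002]

rmep, rmed = misleading_evidence_rates(same_source_lrs, diff_source_lrs)
print(f"RMEP = {rmep:.2f}, RMED = {rmed:.2f}")  # 0.20 and 0.20 on this toy data
```

In practice, these rates are estimated on large reference databases drawn from the relevant population, which is one reason sampling theory matters so much in the recommendations below.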

My recommendation for a statistical education for someone with the goal of becoming a forensic evidence interpretation expert is to focus on mathematical statistics (including both Bayesian and frequentist methods), computational statistics (it is usually computationally difficult to calculate the statistics necessary to interpret forensic evidence), and statistical pattern recognition. Sampling theory is also incredibly important in building the reference databases needed to assess the performance of the methods in different populations.

All in all, I have found the statistics associated with forensic evidence interpretation to be one of the most enjoyable activities I have done as a statistician, mainly because it is an area in which I can apply multiple statistical sub-disciplines.

As a historical note, Ian W. Evett originally wrote about the two-stage process, but he is now one of the strongest and most eloquent proponents of the subjective Bayesian approach to forensic evidence interpretation.

The views expressed here are those of the author, and not necessarily those of the Federal Bureau of Investigation.
