It’s What They Say They Heard
Oh, how I vividly remember bumping into an assistant administrator of the Environmental Protection Agency in the elevator. He was a bit irked that he would be late for an all-day retreat of one of his major offices. Somewhat cynically, I suggested that if he arrived at the one-third point of the day and remarked, “Well, this all seems to boil down to a problem in communications,” he would probably be right on target. By the look on his face, he did not appear any less irked by my wisdom.
Several weeks later, when he happened to see me in the hallway, he told me that just before the lunch break at the retreat, one of his division directors stood up and opined that the problem seemed to be one of communications. And yes, this time he had a smile on his face.
I even noticed a Washington Post headline on December 27, 2016: “Obama Blames Democrats’ November Defeat on Failure to Communicate Effectively.” Why is it that a lack of proper communication seems to hold up progress on all fronts and throughout all time?
I think the problem is particularly serious in our profession. In a data-driven analytic world, there seems to be more and more desire to present conclusions, suggestions, and recommendations based on statistical analysis. In the recent presidential election, we were pelted with survey results, each careful to note the poll’s accuracy of plus or minus three percentage points.
Even in colloquial talk, one sees statistical intrusions. The weekend before the rather contentious presidential election, Peggy Noonan tried to calm the hassled electorate in a Wall Street Journal opinion column. She noted, “Someone is going to win Tuesday and then, if trendlines that have proved reliable in the past continue, the sun will come up on Wednesday.” She added humorously, “We claim this with a 3% margin of error.”
It may be counterintuitive, but I find the polling results accuracy statement sad and Noonan’s comment uplifting. Why? Because, in the polling, I would guess the 3% number is based on the sample size. What about nonsampling error—all the other things that may contribute to the error? Concerns such as randomness, representativeness, sampling frame, wording of questions, order of questions, and so forth would certainly increase the error range. So, I doubt the true accuracy, and hence the correct information from the survey, is being communicated properly. This is not even mentioning the communication problem in trying to explain how most polling efforts got the winner wrong!
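As a rough illustration of where that 3% figure likely comes from, here is a minimal sketch of the textbook margin-of-error calculation for a simple random sample. (The function name, the worst-case proportion of 0.5, and the 95% z-value of 1.96 are my assumptions for the example, not details from the polls themselves.) Note that this captures sampling error only, which is exactly the point: none of the nonsampling concerns listed above appear anywhere in the formula.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    Uses the worst-case proportion p = 0.5, which maximizes p * (1 - p).
    This accounts for sampling error only -- not question wording,
    frame coverage, nonresponse, or any other nonsampling error.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of about 1,000 respondents yields the familiar "plus or minus 3":
print(round(100 * margin_of_error(1000), 1))  # -> 3.1 percentage points
```

A sample of roughly 1,000 is the industry norm precisely because it lands near that 3-point figure; doubling the sample size shrinks the margin only by a factor of about 1.4, while doing nothing about the nonsampling sources of error.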
And conversely, why do I like Noonan’s tongue-in-cheek remark? Because it shows she is cognizant that statistical reasoning must be employed, communicated, and reported accurately at some major points.
This crops up again in the refrain familiar to every one of us who has ever introduced himself or herself as a statistician: “Oh, statistics. That was my worst course.” I used to be mildly amused by this truly predictable response. But it finally dawned on me that you rarely hear a similar remark concerning complex variables or atomic physics. I’m sure the public is not crazy about imaginary numbers, nor do they get the differences between fission, fusion, and confusion. So why does statistics take it on the chin?
Because of its importance to every facet of life, many more people are exposed to statistical principles. OK, they all may not like it, but at least they know life is subject to variability and uncertainty. Thus, we have an opportunity, and indeed an obligation, to properly communicate statistical concepts and the rudiments of statistical reasoning. And we must strive to do it so people understand the basic logic.
So why is all this of concern? Many of you have heard my mantra: “It’s not what we said, it’s not what they heard, it’s what they say they heard.” With our increased use of data, proper analysis is crucial. I certainly believe we have qualified statisticians who can do that. Then we must tell somebody what the analysis is all about—its aims, its methods, its shortcomings, its downsides. Again, we usually do this quite adequately. The next step is that the recipient of the information should hear what we are saying and hear it accurately. They may or may not ask pertinent questions. They may have other topics on their mind. Here, the statistician has an obligation to lend insight and try to ascertain if the message is getting through. This is the part I am not sure we always do well.
But it is the third element that is crucial: “It’s what they say they heard.” This is where the rubber hits the road. Somewhere, there is a policy maker, a decision maker, a judge, a jury, an elected official, a doctor who must properly integrate the results into real plans, real actions, judicial decisions, regulations, proper medications, and so forth.
This is a difficult task for statisticians. I have been there. As an expert statistical witness in a trial, you usually have just a few seconds to answer the loaded question of an adversarial attorney. You hope the judge or jury understands you and then, most importantly, they integrate it properly into their decision making.
I hesitate to add that this difficulty in communication might be exacerbated by our use of Twitter, Facebook, Instagram, etc., instead of full-fledged oral or written communication. Yes, I know this is old school. But I am concerned. While I am a true advocate of succinct explanations, I am not sure this can always be accomplished in 140 characters. Naturally, I am also concerned that we now seem to have a universe that allows alternative facts. If ever there were a time to describe our work effectively so as to integrate the true meaning into societal decisions, it is NOW.
I have given many talks in my career, and one of my main points has always been to encourage—even demand—that statisticians carefully review their raw data. I give examples of official data that are wrong. To lighten up a serious topic, I have for years shown a Dilbert cartoon in which Dilbert notes he didn’t have any accurate numbers so he just made up one. He further asserts that studies have shown that accurate numbers aren’t any more useful than the one you make up. Someone queries him as to how many studies have shown this and Dilbert answers, “87,” with absolute precision. Sadly, until this year, this was quite humorous.
So, what are we doing about all this? One of my initiatives is to make sure statistics correctly tell the whole story. Here, the ASA is working with John Bailer and Richard Campbell at Miami University. John and his statistics colleagues have teamed up with Richard and his journalism counterparts to produce the series Stats + Stories. As John says, this is “the statistics behind the stories and the stories behind the statistics.” The idea, of course, is to tell the full story, accurately and forcefully, with the proper use of the statistical underpinnings.
I have had the pleasure of being interviewed for Stats + Stories in the context of environmental statistics. It was a terrific first-hand experience to learn the concerns and angles from both the statistical and journalistic sides of the table. To me, this goes a long way toward addressing the omnipresent communications problem.