
Survey Research Scientists Tell Us What It Takes to Design a Survey

1 September 2015
We wanted to know what it takes to develop and design a survey, so we interviewed Mario Callegaro and Gordon Willis, two research scientists. Both men will be featured speakers at the International Conference on Questionnaire Design, Development, Evaluation, and Testing (QDET2), which will take place in Miami, Florida, in November 2016. Visit the QDET2 conference website for more information.


Mario Callegaro

Mario Callegaro is a senior survey research scientist at Google UK in London, on the quantitative marketing team. He works on web and telephone surveys, focusing on survey and market research projects such as measuring advertisers’ customer satisfaction. He also consults with numerous internal teams on survey design, sampling, questionnaire design, and online survey programming and implementation.

Can you briefly explain your research background?

I started my career with a bachelor of arts in sociology at the University of Trento, Italy. During that time, I helped open a small data archive, assisting students in locating and accessing survey data sets. What really jump-started my research career was working as an interviewer supervisor for a computer-assisted telephone interviewing (CATI) survey we conducted in collaboration with the University of California, Berkeley. The results were published in The Outsider: Prejudice and Politics in Italy. The survey included many embedded question-wording experiments, which made me think a lot about experimental design and question-wording effects.

Further Reading
Belli, R. F., and M. Callegaro. 2009. A theory and history of calendar methods. In R. F. Belli, F. P. Stafford, and D. Alwin (Eds.), Calendar and time diary methods in life course research (pp. 31–54). Newbury Park, CA: Sage.

Converse, J. M., and S. Presser. 1986. Survey questions: Handcrafting the standardized questionnaire. Thousand Oaks, CA: Sage.

Sniderman, P. M., P. Peri, R. J. P. de Figueiredo Jr., and T. Piazza. 2000. The outsider: Prejudice and politics in Italy. Princeton, NJ: Princeton University Press.

Sudman, S., and N. M. Bradburn. 1982. Asking questions: A practical guide to questionnaire design. San Francisco, CA: Jossey-Bass.

After my BA, I worked for the Department of Sociology and Social Research at the University of Trento, still running CATI studies, designing surveys, and analyzing data. I then moved to the University of Nebraska-Lincoln after I was accepted into the newly started survey research and methodology (SRAM) master’s program. My internship at the cognitive interviewing lab of the National Center for Health Statistics (NCHS)—working with Barbara Wilson, Paul Beatty, and Kristen Miller—was another key research event, as that is where I learned firsthand how respondents interpret closed-ended questions.

Together with another student, I was part of the first PhD cohort from the SRAM program. I had the opportunity to work with Robert Belli on event history calendars, comparing them to standardized survey interviews. Event history calendars are an effective interviewing technique that can help respondents remember and reconstruct events better than standardized question wording does. Incidentally, we also found something that still amazes me: The same technique is used in other fields under different names, very often with minimal overlap in citations. (See Event History Calendar.)

My first job after getting my PhD was as a survey research scientist for Knowledge Networks (KN), now GfK-Knowledge Networks, a company running a probability-based online panel representative of the U.S. population. At KN, my research agenda changed a lot, this time focusing on online panels, web surveys, and address-based sampling (ABS). My manager, Charles DiSogra, and I moved the company’s recruitment method from random digit dialing (RDD) to ABS mail recruitment.

Two years later, Google Mountain View was looking for a survey research scientist for its quantitative marketing team and approached me via LinkedIn. I answered their email and here I am, six years later, still at Google but now working from the London office. My research agenda has shifted to measuring customer satisfaction and topics related to market research. However, I am still working on survey quality in different areas, including online panels and web surveys.

How did you get into survey methodology?

My passion for survey methodology started when I took a class on research methods during my BA in sociology. I loved it so much that I took the same course twice (something you could do at the time in Italy), the second time as a tutored seminar focusing on questionnaire design only. I still remember reading Asking Questions by Sudman and Bradburn cover to cover, and then Survey Questions: Handcrafting the Standardized Questionnaire by Converse and Presser.

Not content with this, I went to the Michigan Summer Institute in Survey Research Techniques, where I took one of the first classes on the cognitive aspects of survey methodology (CASM), taught by Norbert Schwarz and Robert Belli. I came back from Michigan with a suitcase full of books and CASM papers. That was the basis of my BA dissertation, and I have not detoured from survey methodology since.

Did you have a mentor? If so, what was the most effective advice he/she gave you?

In addition to my PhD supervisor, Robert Belli, I have had very influential mentors in my career, but the most influential remains Allan McCutcheon of the University of Nebraska-Lincoln. I am doing this interview basically because of Allan. Allan has taught latent class analysis at the Essex Summer School in Survey Research in the United Kingdom for more than 20 years. When I was still working for the Department of Sociology and Social Research at the University of Trento, I had a chance to attend the Essex Summer School and talk to Allan about my career. He made me aware of the Nebraska SRAM program (it was new at the time) and strongly encouraged me to apply. Allan also arranged some assistantships with the Gallup Organization. Without those research assistantships, I could not have afforded the master’s and PhD programs, so Gallup is another influential entity in my career. His most effective advice was to attend Nebraska; the second was to continue on to the PhD rather than stopping at the master’s level.

What interested you about your current position at Google?

I still remember the email from Google via LinkedIn. I was shocked that they were looking for a survey research scientist. This was the same job title I had at Knowledge Networks! To be honest with you, I wasn’t aware Google was interested in surveys at the time, but it quickly became clear that a company like Google needs survey research expertise to collect feedback from users and clients and improve its products. It also became clear that Google was running many market research projects and my expertise was valued. Finally, what interested me was the opportunity to work for an international, influential company that was helping shape technology in the 21st century. Once I was hired, their mission and environment were engaging and exciting, and this has kept me motivated to continue working with Google.

What motivated you to get involved with QDET2?

I was excited when I heard about the QDET2 conference because it reminded me of the incredible opportunity I had to attend the first QDET conference in Charleston, South Carolina, in 2002. At the time, I was still a student in the SRAM program and I applied for a fellowship to attend the conference. I recall the stressful moments preparing the application letter and waiting for the outcome of that request. I was accepted, together with a great group of international students, some of whom I am still in touch with.

In the current survey context, where low response rates are driving attention toward improving survey participation, it is important to ensure equal attention is given to careful question design, a pillar of survey quality. Even though the efforts of the CASM and QDET meetings greatly helped advance question design and pretesting knowledge, there is still room for discussion about the best techniques for question evaluation and how to integrate information from different pretesting techniques. Continuing work in this area is crucial. In addition, challenges for question design have arisen from the adoption of new modes and devices for data collection; the QDET2 conference will be a fantastic opportunity to get together and discuss the research conducted in the last decade.

What are some of the challenges in designing survey instruments to collect “good data”?

Web surveys are becoming more and more popular, and their usage increases every year, with a parallel decline in other methods of data collection. Technological change means we no longer control which device respondents use to answer web surveys; it can be a desktop or laptop, a tablet, a smartphone, or another Internet-enabled device. Designing a web instrument that does not create “device effects” is challenging because it involves not only the technology used, but also how a question is designed, evaluated, and tested. Another challenge is that, in emerging markets, web surveys are mostly completed on smartphones, which have higher penetration than laptops, desktops, or tablets.

Do you have funny examples of bad survey questions?

I have some funny examples from a survey conducted at Burning Man. A paper questionnaire administered in 2009 included the instruction: “If you have a short attention span, please focus on filling out the front and back pages.”

I also found two survey questions asking about la playa (the dry lakebed where Burning Man is held) very funny:

1) Have you ever embarked on a romantic relationship with someone you met at la playa?

[ ] No

[ ] Working on it

[ ] Yes

2) Were you married on la playa?

[ ] No

[ ] Yes, in a pretend wedding

[ ] Yes, in a real wedding

Not all surveys are designed by survey scientists. What tips can you give to people outside the profession for creating better survey instruments?

The first tip I give non-survey scientists is to try to write down an answer to these three questions: What concepts do you want to measure? What is your target population? Do you want to conduct a quantitative (e.g., survey) or qualitative (e.g., in-depth interviews, focus groups) study, or both? You will quickly realize that answering these three questions is not easy. And if you’re working in a team, very likely there will be many disagreements about how to answer these questions.

The good news is there are now good textbooks that can guide people outside the profession in designing a good survey, along with more online training. Further, because of the new programs in survey research, more students are trained in survey methods and can help design surveys in their workplaces.

What does the future hold for questionnaire designers?

First, questionnaire design is tied more and more to technology. Second, data-collection methods are moving to self-administered questionnaires. These two interrelated trends govern the future, as questionnaire designers must develop instruments that can be properly completed by respondents unaided by an interviewer and on a variety of devices. Questionnaires will also be increasingly augmented by passive collection of data on respondents’ behaviors. This trend promises lots of research and insights for questionnaire designers. Once the quality of passive data collection is up to acceptable standards, I envision questionnaires asking less about past behaviors and more about attitudes and opinions, to understand the “why” that behavioral data cannot answer.

What advice can you give to someone who would like to pursue a career as a survey methodologist?

There are excellent programs in survey research. Until now, these have been found mostly in the United States, but new programs have started to appear in Europe, as well. A two-year master’s-level program can provide lots of knowledge and skills for working in the industry. However, given the new promise that Big Data and social media research offer, I would suggest students augment their training with classes in social informatics and database management—and by learning some programming languages—as the new data sets will be a combination of survey data, paradata, Big Data, and possibly social media signals.


Gordon Willis

Gordon Willis has worked for more than 25 years to develop and evaluate methods in questionnaire design, cognitive interviewing, and survey pretesting. He is a cognitive psychologist at the National Cancer Institute, part of the National Institutes of Health. Previously, he worked at both the Research Triangle Institute and the National Center for Health Statistics/U.S. Centers for Disease Control and Prevention. He attended Oberlin College and Northwestern University.

Can you briefly explain your research background?

I began my research career in graduate school at Northwestern, back in the old days of traditional experimental psychology, before the wave of cognitive psychology really hit. Until his retirement, I worked under Benton Underwood, who was a giant in the field of psychological research throughout the ’60s and ’70s. So I feel very much like I came from the waning stage of a previous era, where the theories, equipment, and research procedures now seem remarkably antiquated. However, the underlying logic of designing studies and conducting research hasn’t changed at all, so those old-school ways provided a good grounding in research methodology. I then migrated into the field of early childhood mathematics education research—primarily because my adviser’s spouse was an education professor who needed an assistant—where I learned how to conduct intensive qualitative research, consisting of cognitive interviewing of young children about how they solved math word problems. I leveraged that into my first government job at the National Center for Health Statistics, where I got into the business of studying response error in questionnaires in a big way, within the first U.S. federal cognitive lab devoted to developing and testing survey questionnaires. It was only at that point that I dealt with surveys at all, and I have developed that interest further, first at Research Triangle Institute International and then at the National Cancer Institute within NIH. Mine is an example of a research background that evolves through different substantive areas as one goes through a professional life—although the common theme has been the development and evaluation of new methods.

How did you get into survey methodology?

I’m not sure if I should freely admit this. It was unintended, or at least unplanned. I was finishing grad school and looking for a job, and my wife, who was also a psych graduate student, noticed an advertisement in the American Psychological Association Monitor for a cognitive psychologist at NCHS, where the focus was on conducting intensive interviews to study how people answer survey questions. I don’t know how she picked up on the connection, but she recognized that the interviewing I had done of children was essentially the same thing, just applied differently. I decided to follow up on her ingenious insight, the difficulty being that I, in fact, knew virtually nothing about surveys. But I was a good student who knew how to learn quickly, so right before my NCHS interview, I went to the library and read up on survey methods and questionnaire design. At my interview, I was fully prepped to spout back all of that recently obtained knowledge, and I got the job.

Fortunately, my wife was correct about the transferability of skills, so I didn’t feel like a stranger in a strange land. The only strangeness is the seemingly nonlinear way in which I ended up where I am. But I’m convinced that much of life is unplanned and unintended—and when the football bounces to you, pick it up, because you’re now a running back.

Did you have a mentor? If so, what was the most effective advice he/she gave you?

Early in my career at NCHS, Monroe Sirken very effectively mentored me, especially concerning how to fit survey research into the special world of the federal government. Monroe showed that—contrary to a common assumption—it is possible to be innovative, and even a leader, in survey research within government. For one, he shepherded the development of the Questionnaire Design Research Laboratory at NCHS, based simply on the belief that this could have a tangible positive impact. I think the ensuing years have proven him correct. The most effective advice I got from Monroe wasn’t direct, but through observation. He was (and probably still is) forward thinking and always ready for some new methodological development—definitely a think-outside-the-box type. There were times when others would scoff at some of these ideas, but I saw he wasn’t all that concerned about naysayers—he had enough faith and confidence in his own ideas to stick with them. Sure enough, I have seen those ideas ultimately vindicated. So what I learned is to stick with what you believe, despite the headwinds, and your good ideas will persist down the road. Of course, they may only bear fruit after you are retired and gone, or well beyond the point that anyone remembers or cares it was your idea, but you just need to be okay with that.

What interested you about your current position at the National Institutes of Health?

I have always believed NIH to be an exceptional research entity—a model for how federal government can work effectively on behalf of its citizenry. I was drawn to the National Cancer Institute specifically because I got to know my former supervisor, Rachel Ballard, who was trained as a physician and in public health, but who was also incredibly open-minded and drawn to interdisciplinary approaches to population-based cancer prevention research. I decided the integration of questionnaire design into the world of cancer research was a compelling basis for a research position. Fifteen years later, I still believe that.

What motivated you to get involved with QDET2?

I could answer this in terms of the importance of advancing survey methods, as the field is at a methodological crossroads, or the natural fit for me concerning the heavy emphasis in QDET2 on questionnaire design and information collection methods and so on. But following Margaret Mead, I believe most any scientific event, enterprise, or advancement is really due to the vision and energy of a small set of individuals who make things happen. For QDET2, I would credit Amanda Wilmot at Westat, who has from the start provided the vision and motivation for the conference. I also find it motivating to work with an effective group of international researchers on the conference organizing committee. We have Paul Beatty at the U.S. Census Bureau; Paul Kelley at Statistics Canada; Jose-Luis Padilla at the University of Granada, Spain; Debbie Collins at NatCen in the UK; and Lyn Kaye at Statistics New Zealand. This is an interesting and eclectic group. The only downside is trying to arrange a conference call across the continents so someone doesn’t have to join at 3:00 in the morning!

What are some of the challenges in designing survey instruments that collect “good” data?

A key challenge is that resources are always stretched, and we need to attend to the problem of appropriate balance—in modern parlance, the key concepts are Total Survey Quality and Fitness for Use. All of us who work on surveys are to some extent subject to the “when all you have is a hammer, everything looks like a nail” phenomenon. Everyone has their favorite source of error—whether related to coverage, sampling, nonresponse, or measurement—and it is challenging to achieve the best mix. So what I face is the problem of deciding on the appropriate type and amount of attention paid to questionnaire design, pretesting, and evaluation. We could pretest forever to produce a (maybe) perfect instrument, but that does little good when the data were really needed five years ago and we’re still not in the field. We need to find something between a “minimal standard” consisting of the lowest bar one could get away with setting and a set of “best practices” that may represent a shining example of optimal design yet is utterly infeasible in practice.

Do you have any funny examples of bad survey questions?

I try to keep in mind that nothing is funny to the project director, client, or data user. They are stuck with the data produced, and in the extreme case in which a question is found to be completely flawed after data collection, I have seen a researcher choose not to make use of carefully collected information. But of course, part of the fun in questionnaire design derives from an appreciation of the amazing sources of miscommunication that sometimes get through the design process, only to emerge later through some form of qualitative investigation. For example, the respondent who explained she was “bisexual” because “it’s just me and my husband.” Another sad but funny case is a question that seems fine on the surface, but ultimately proves to be “bad” because it doesn’t provide any useful information. Once, after teaching a course, I was presented with the results of a teaching evaluation item that had asked my students whether I had “spent the right amount of time” on a number of topics. Note that I had no way of knowing whether those answering “no” wanted more … or less.

The most compelling examples of bad questions seem to come from the cross-cultural context, either because a translation is so terrible or because cultural differences render the question useless for some respondents. As an example of the former, the literal translation of “Are you feeling blue?” into Spanish produces a term (“azul”) that is precise and totally equivalent to the color blue, but that carries no connotation of mood. So the effect is no different than asking an English speaker whether he or she is feeling orange or purple. The lesson is that our objective should be to translate meanings, rather than words.

Concerning a flaw due to cultural issues: For a project on racial/ethnic discrimination, I was involved in the cognitive testing of the item, “Were you treated unfairly at school because you are [self-reported race/ethnicity—e.g., Vietnamese]?” This seemed unremarkable until an immigrant who was asked the question responded simply, “Uh, no … I went to school in Vietnam.” So question flaws tend to develop because of the limited imagination of the designers, who are thinking only of the cases in which the question makes sense (here, for those experiencing the U.S. school environment). These problems are always obvious—in retrospect—which is why it helps to test our items using real people.

Not all surveys are designed by survey scientists. What tips can you give to people outside the profession for creating better survey instruments?

Keep in mind that survey questions are not like everyday speech, because with standardized survey questions, we have no way to “repair” the conversation in the manner described by Grice back in 1975. The biggest downfall of survey items is usually vagueness—everyday terms and phrases can be interpreted in several ways, so try to think of those variants. My advice would be to at least ask several people—friends or colleagues—their interpretation of the question. For example, a colleague on a professional networking website recently attempted to gather some informal qualitative information by posing the question, “How do you feel before you go to work each day?” I pointed out that this could be interpreted in terms of (a) “How do I feel before I have to face my commute?” or (b) “How do I feel about my job when I begin working, once I am ready to do that?” He then changed the question to ask about how respondents feel before they “start work,” which hopefully works to narrow the range of interpretations.

What does the future hold for questionnaire designers?

One thought is that because the survey world is changing—quickly, and to some extent in ways that make our lives as survey professionals more difficult—we don’t have the luxury of worrying about question design, as we have to be focused on response rates and whether the sample survey can survive, as opposed to going the way of the dodo bird. But I would argue that questionnaire design will be, if anything, even more important as the survey world evolves. There is certainly still a need for self-report-based information. Even though we may be able to make use of sensor-based devices or other technological developments, there are types of information that can only be obtained through self-report, such as attitudes and lifetime behavioral history. Plus, the continued development of complex forms of information collection on web-based instruments and mobile devices puts heavy demand on the designer to maximize usability and clear directions and to minimize measurement error. To the extent that the old rules of question design even exist, they are becoming obsolete, and designers need to be very open to the cognitive and other demands on questionnaire design associated with our new devices and technologies. Overall, it’s a good time to be a questionnaire designer, or a survey methodologist generally, for anyone ready to face an eclectic mix of new challenges.

What advice can you give to someone who would like to pursue a career as a survey methodologist?

Become a specialist in a particular area, rather than attempting to be a jack of all trades. Not at first, though. Start by learning about all phases of the survey data collection cycle and develop a notion of Total Survey Quality to appreciate the type of balance I mentioned above. But ultimately, follow the path you find most appealing, especially because the skills required for different aspects of survey measurement vary so much—from the very statistical/quantitative on one hand to the qualitative, communicative, and cognitive on the other. My own career has largely involved the niche area of cognitive testing, within the realm of pretesting, within the larger field of questionnaire design, which itself exists within the overarching world of survey research. But that specialization has sustained me through the course of a varied and rewarding career.
