On Research Podcast – Survey Research as a Method: Part 1

Season 1 – Episode 11 – Survey Research as a Method: Part 1

Survey research, a dynamic method utilizing carefully crafted questionnaires, transforms curiosity into actionable insights by systematically collecting quantitative data on diverse subjects. This powerful tool, applied in social sciences and business, illuminates trends and attitudes, serving as a compass for decision-makers to make informed choices. From gauging customer satisfaction to unraveling societal trends, surveys capture nuanced human perspectives, distilling complex data into comprehensible patterns and standing as a cornerstone in evidence-based decision-making.

 


Episode Transcript


Darren Gaddis: From CITI Program, I’m Darren Gaddis, and this is On Research. Today, I spoke with Matt Jans, lead statistician for the National Health and Nutrition Examination Survey at the National Center for Health Statistics. As a reminder, this podcast is for educational and entertainment purposes only. It is not intended to provide legal advice or guidance. You should consult with your organization’s attorneys if you have questions or concerns about relevant laws and regulations discussed in this podcast. Additionally, the views expressed in this podcast are solely those of the guests and do not represent the views of their employers.

Hi, Matt. Thank you for joining me today.

Matt Jans: Great to be here. Thanks for having me.

Darren Gaddis: To get us started today, what is your educational and professional background?

Matt Jans: I actually had a psychology major and a sociology minor in college. I did a master’s degree in developmental psych right after college, and then I worked for a few years in survey research, running surveys, developing questionnaires, drawing samples, things like that. I knew that I would want to get a PhD at some point, but I didn’t know what field I would want to get it in. While I was working in that job, this was at UMass Boston, I learned that there were PhD programs in survey methodology. So I ended up going back to school after about five years of work and getting my PhD in survey methodology from the University of Michigan.

Darren Gaddis: To help ground our conversation today, what is survey research and when should it be used?

Matt Jans: That’s a good question. Survey research means a lot of different things to different people, and it depends on where you learned it. So if you learned about survey research through statistics, you probably think of it as sampling and estimation, variances of statistics, things like that. And if you learned about it from the social science side, psychology or fields like that, you probably think of it more in terms of questionnaires. But in a nutshell, our field of survey research, or survey methodology as we call it sometimes, actually has both parts. Every type of science has a measurement technique and a sample. But in survey methodology, we formalize those a little bit more. For example, for a survey methodologist, we don’t just have any measurement technique. We usually have a questionnaire or an interview protocol that has standardized questions. And I should back up a second and say here, we’re talking mostly about quantitative surveys, not qualitative research, although some surveys that are mostly quantitative can have qualitative questions in them. So that’s the measurement part.

The other thing that survey methodologists usually mean when they talk about having a sample is that you’re drawing a sample where the probability of selection is known for each unit in a population, and that the population is well defined. So in some sciences, you’ll hear people vaguely reference a general population or a broader population, but in practice they don’t actually define their population. For example, they don’t say adults who lived in the United States as of July 1st, 2023, that kind of thing. The one caveat I should probably make early on too, and we struggled with this a little bit in the course and how to talk about different types of surveys, is that not all surveys are of people. Some surveys are of businesses or institutions. We usually call those establishment surveys. But for the most part, if you hear me say respondents or people, you can mentally fill in that those people could be a business or somebody responding on behalf of a business, or a family, or a group, or something like that.
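To make “known probability of selection” concrete, here is a minimal Python sketch of simple random sampling from a fully defined frame. It is an illustration only: the frame, sample size, and population are hypothetical, not drawn from NHANES or anything discussed in the episode.

```python
import random

# Hypothetical, fully enumerated sampling frame: one entry per unit in a
# defined population (e.g., adults on a registry as of July 1st, 2023).
frame = [f"person_{i}" for i in range(10_000)]

N = len(frame)          # population size
n = 500                 # sample size
inclusion_prob = n / N  # under simple random sampling without replacement,
                        # every unit's probability of selection is known: n/N

sample = random.sample(frame, n)  # draw the sample without replacement

# The design weight is the inverse of the inclusion probability: each
# sampled unit "represents" N/n members of the defined population.
design_weight = 1 / inclusion_prob
print(f"Sampled {len(sample)} of {N}; each unit carries a weight of {design_weight:.0f}.")
```

Real surveys typically use more complex designs (stratification, clustering, unequal probabilities), but the defining feature is the same: the selection probability of every unit is known by design.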

Darren Gaddis: In survey research, is consent always required? And how does a researcher acquire consent from research participants?

Matt Jans: Yeah, that’s a really good point. And I’ll go out on a limb and say that it depends. You’ll hear a lot when we talk about survey research and survey methodology that almost every question has an it-depends answer. What it depends on is the amount of risk to the participant or the respondent that’s involved, and the amount of freedom that the respondent has to not engage with the study. In a lot of research studies that aren’t surveys, lab studies and things like that, you have populations of people that don’t have a lot of choice. It might be children, or students who are administered a questionnaire in a school setting, things like that. People in medical research who hear about it through their doctor don’t have as much choice about participation as the general population does when it comes to surveys.

In surveys, usually, we’re sending a letter to a home or a business. We’re calling people on the phone. We’re sending an email. So the people that we sample can always ignore those emails, phone calls, or knocks on their doors. And what you’ll see sometimes is IRBs or other human subjects approval bodies will exempt surveys depending on the amount of contact that you have with the people you’re trying to recruit and the questions that you’re asking. So if you’re not asking very sensitive questions, it’s possible that your survey might actually be exempt from having any amount of consent involved in it.

Actually, I should back up a second there. There are really two types of consent: formal, or active, consent and passive consent. So passive consent would be, as long as you don’t hang up the phone, I have your consent to ask you these questions. On the other end of the spectrum would be formal consent, where the interviewer says, “Do I have your consent to continue?” Sometimes it’s required that consent is then recorded in some way, with the most extreme version of recording being actually physically signing a form of some sort. We don’t usually see that in general population surveys just because it wouldn’t be practical. And again, the person could hang up the phone or close the door.

Now, all that aside, it is best practice to inform people. There’s a distinction between recording consent and informing respondents. Good survey researchers will always inform respondents about the potential risks, the amount of time the questionnaire is going to take, the topic, who they can contact if they have a problem, things like that. But surveys as a whole are conducted by a wide range of organizations, from government agencies, to universities, to polling companies and marketing companies. And the amount of consent and ethics oversight varies widely across those types of organizations.

I hesitate to frame it as for-profit versus not, but the world is roughly split into organizations that are governed by ethics bodies, like academic and government researchers; a middle group of private-sector and maybe public companies who follow those same guidelines but aren’t necessarily overseen by similar IRBs, although all the companies I’ve worked with have had IRBs of some sort; and then other kinds of companies that don’t have any kind of ethical oversight or IRB at all. Anybody can buy a list of random numbers or walk around and knock on doors. So one of the problems our field faces is distinguishing unethical research from ethical research. That’s not to say that all the companies not covered by IRBs are doing things unethically. They just don’t have the same standards and approval burden in their research practice. I hope that helps answer the question.

Darren Gaddis: Absolutely. In your opinion, when you design a survey research project, what differentiates a good survey question from a potentially bad one?

Matt Jans: Yeah, that’s a good question. So like I mentioned before, there are always two parts to a survey. There are the questions, or sometimes we call them survey items. They might not be grammatically worded as a question, but they’re something that the respondent is supposed to answer or fill in some information about. And then there’s the sample. So in terms of good questions, usually survey questions will flow from some kind of research question. And like I just mentioned, not all surveys are done in a traditional academic research context, so there aren’t always research questions and hypotheses like you would see on an NIH proposal or grant. It might be more of a practical problem.

So maybe you’re an executive in your company, and you hear that your staff has low morale right now. You have a research question: A, is that true amongst all your staff? You can sample systematically, get a representative sample, and ask questions about morale. And B, what are the details of that morale? Is it due to long hours? Is it due to work demands, things like that? Once you’ve figured out what those research questions are (and that’s always a good place to start with surveys, just like any other kind of research project), you can figure out what your survey questions should be. So when you get to the point of actually gathering survey questions that have already been written, or writing your own questions, there are a number of guidelines, and we handle this in the course in detail.

But some of the things that make a good question are asking about one concept at a time. It’s really easy to write what we call double-barreled questions, where the question asks more than one thing in one sentence, one grammatical question. An example of that would be, would you like to be rich and famous? You might want to be rich and not famous. You might want to be famous, but not care if you’re rich. So how does a person answer that? So one concept at a time in every question is a good standard. Usually we don’t want questions to be hypothetical. People don’t do a great job of answering what they would do in a situation or might do in certain situations. The caveat there is that sometimes hypothetical scenarios can be used for attitude measurement, but they have to be couched in that context to be useful.

For the next couple of points, I’m going to make a distinction between attitude questions and behavioral or experience questions. So think of these as questions about opinions versus questions about facts and personal experiences. For attitude questions, there’s a lot of research in the literature showing that they’re really susceptible to bias, for lack of a better word, due to context, meaning where they’re put in a questionnaire, the questions that come before or after them, how they’re worded, things like that. When you’re writing attitude questions, you have to be very careful that they’re not leading the respondent one way or the other.

One check on a good attitude question is whether it’s balanced, just literally balanced in the phrasing of the question. If you were going to write a question asking about a particular opinion or attitude, you might say, “Some people think X, that’s opinion number one. Others think Y, that’s opinion number two. Which position better represents your opinion?” So you’re just acknowledging that this range of opinions is out there, rather than asking if people agree or disagree with a particular opinion, or setting up a phrasing that would lead them to think that opinion was the right one to have. So that’s the kind of stuff to be careful with in attitude questions.

For behavioral and experience questions, one of the biggest challenges or mistakes that I see in questionnaires is asking questions without first thinking about whether the respondent has experienced the thing you want to know about. So if they haven’t driven a car, or let’s say a pickup truck specifically, you can’t ask them questions about what it feels like to drive a pickup truck or what they liked about the last time they drove one. So you always want to check your questions to make sure they’re about something the respondents you’re going to ask have actually experienced.

That’s a little bit different for attitude questions, because people may have a preformed attitude, but if they don’t, they can also develop one on the spot once they’re asked. But for behavior questions, if they haven’t had the experience, any answer they would give wouldn’t be worth too much. So we usually deal with that with skip patterns or filter questions. You’ll first ask, have you ever driven a pickup truck? And then ask the questions about your experiences the last time you drove a pickup truck.
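As a concrete illustration of the filter-question idea, here is a toy Python sketch of a skip pattern in a scripted interview. The wording follows the pickup-truck example above; the function name and flow are hypothetical, not code from any real survey system.

```python
def pickup_truck_module():
    """Toy skip-pattern logic: a filter question gates the follow-up."""
    # Filter question: establish whether the respondent has the experience.
    ever_driven = input("Have you ever driven a pickup truck? (yes/no) ").strip().lower()

    if ever_driven != "yes":
        # Skip pattern: no experience, so the follow-up is never shown and
        # the item is recorded as a legitimate skip, not a missing answer.
        return {"ever_driven": False, "last_time_liked": None}

    # The follow-up only makes sense for respondents who have the experience.
    liked = input("What did you like about the last time you drove a pickup truck? ")
    return {"ever_driven": True, "last_time_liked": liked}

print(pickup_truck_module())
```

Survey software typically expresses this as routing rules rather than procedural code, but the principle is the same: follow-up items are only administered when the filter establishes they apply.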

Another challenge with factual and behavioral questions is figuring out the appropriate timeframe and question framing for the cognitive demand of the question. Opinion and attitude questions usually aren’t too cognitively demanding unless they’re worded in a goofy way, the grammar’s unclear, they’re double-barreled, things like that. But behavioral questions, asking about doctor visits you’ve had or your income, things that are facts and experiences, are usually things you should be able to answer if you tried hard enough.

It’s really easy to ask people questions that are way too hard to answer. So for example, there’s a common question out there for tobacco use that asks, have you smoked at least a hundred cigarettes in your lifetime? And it might not be the perfect question. It implies that people have counted up to a hundred cigarettes, but in context it’s not really taken that way. You can make a guess about, “Yeah, I’ve smoked off and on in social situations,” or, “I used to smoke regularly when I was younger.” “Yeah, I’ve probably had a hundred cigarettes,” or “No, I’ve never smoked.” “I tried it once or twice. I haven’t had a hundred cigarettes.”

Some other examples would be asking how many times a person has gone to the doctor this year, in the past calendar year, or since January. That’s one challenge that comes up: when you say in the past year, are you talking about a calendar year, since this time last year, or since January of this year? But you can imagine, for a person who goes to the doctor a lot, that’s a lot of doctor visits to count up. So you might decide to ask how many times they’ve gone in the last month, or even just whether they went to the doctor in the last month, and make it an even easier question.

All this to say there’s no one right question. It really is important that the questions you ask match your research questions. I’ve dealt with a lot of questionnaires where a client or a collaborator will come in with a research question and they’ll have some measures that kind of get at what they want, but as we work it through, it becomes pretty clear that their research questions are actually different than the questions they thought were good measures. So we end up revising the questions to match.

Darren Gaddis: Matt, thank you for joining me this month to discuss survey research, and I’m looking forward to having you back next month to continue talking with me about survey research.

Matt Jans: Yes, thanks for having me.

Darren Gaddis: Thank you for listening to today’s episode. Be sure to follow, like, and subscribe to On Research with CITI Program to stay in the know. If you enjoyed this podcast, you might also be interested in other podcasts from CITI Program, including On Campus and On Tech Ethics. Please visit CITI Program’s website to learn more about all of our offerings at www.citiprogram.org. I also invite you to review our content offerings regularly as we are continually adding new courses, subscriptions, and webinars that may be of interest to you, like CITI Program’s Survey Research: Design, Planning, Implementation, and Ethics course. All of our content is available to you anytime through organizational and individual subscriptions.

 


How to Listen and Subscribe to the Podcast

You can find On Research with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2112707.rss” into your podcast app.





Meet the Guest


Matt Jans, PhD – National Center for Health Statistics

Dr. Jans implements and innovates methods in web, mail, phone, and in-person surveys. His research includes questionnaire usability and pretesting, interviewer-respondent interaction, address-based sampling, and sexual orientation and gender identity (SOGI) measurement. He is Lead Statistician for the National Health and Nutrition Examination Survey (NHANES).

 


Meet the Host


Darren Gaddis, Host, On Research Podcast – CITI Program

He is the host of CITI Program’s higher education podcast. Mr. Gaddis received his BA from the University of North Florida, his MA from The George Washington University, and is currently a doctoral student at Florida State University.