
On Tech Ethics Podcast – Bots in Survey Research

Season 1 – Episode 6 – Bots in Survey Research

In this episode, we examine the influence of bots on online survey research, outlining the key aspects Institutional Review Board (IRB) members need to consider when evaluating relevant protocols, and suggesting proactive measures that IRB members can take to assist researchers in minimizing the potential impact of bots on their studies.

 


Episode Transcript


 

Daniel Smith: Welcome to On Tech Ethics with CITI Program. Our guest today is Myra Luna-Lucero, who is the research compliance director at Columbia University’s Teachers College. We are going to discuss the impact of bots on online survey research, what institutional review board members should look for when reviewing related protocols, and how IRB members can proactively help researchers mitigate the impact bots may have on their research. Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guests. And on that note, I want to welcome Myra to the podcast.

Myra Luna-Lucero: Hi.

Daniel Smith: So before we get into today’s discussion, tell us a bit about yourself and what you currently focus on at Teachers College.

Myra Luna-Lucero: Yeah, I’d be happy to. So I am the research compliance director at my institution, and I think that’s a huge job in and of itself just to manage research with human subjects and the diversity of that topic. But I think I also offer a unique perspective because I earned my doctorate and went through research as a researcher, so I have that dual perspective. I understand not only the challenges of trying to design a study from a theoretical basis, coming up with a hypothesis, and making that initial content relevant for the field, but also so much more about research compliance now, and those two worlds have melded pretty well together. So I think it’s just a very dynamic experience to have the researcher mindset and then also the research compliance mindset.

Daniel Smith: Absolutely, and I think that’ll be perfect for our discussion today because we are going to talk about research, and then we’re also going to talk about what folks who work in research compliance like IRB members can do to help researchers mitigate the impacts of bots in their survey research. So on that note, before we get into all of that, can you briefly share with our audience what the definition of a bot is? I think we’ve all heard of bots in different contexts, but just for a little bit of grounding here, can you define them for us?

Myra Luna-Lucero: Sure. And I do want to clarify that I’m no expert in bots, but I am one of those individuals who, because of the job that I have and because of my experiences, has had to be very observant, diligent, and mindful about how bots impact research. So the content that I’m presenting to you is rooted in a lot of research, but it isn’t the end-all be-all definition. There’s just so much to think about when we think about ethics and technology. So I just wanted to make clear that it is always going to be an ongoing learning process because of the diversity of technology. But for the purposes of this presentation, bots really are shorthand for software or internet robots. Some of the distinct characteristics of bots are that they’re computer programs designed to simulate human activity, and they operate as an agent for a user or for another program. Bots are pervasive, and not all bots are meant to harm, but they really are designed in a lot of ways to simulate that human activity and not be easily detected.

Daniel Smith: Got it. So when we’re talking about online survey research specifically, what impact can these bots have on the research?

Myra Luna-Lucero: It’s a fascinating question that you ask. The challenge is that the impact doesn’t necessarily have to be bad, and the impact doesn’t necessarily have to be good. It’s not necessarily this dichotomy; it’s really just the researcher’s and the research compliance specialist’s diligence to understand what could happen if things go awry. There really are bots everywhere. We interact with bots probably more than we even consider. Even just basic chatbots that we may interact with like Amazon’s Alexa, social bots on social media, those shop bots that help us find the best prices to travel, it’s all of these things that we interact with all the time that sometimes desensitize us to the impact that a bot may have. But it’s really just stopping and saying, “Okay, I am interacting with a bot. I recognize that this is happening. Is this going to negatively impact me, my data, my privacy, my confidentiality, or is this something a little bit more benign and helpful for me to get the answers and resources that I need?”

So it’s trying to take a beat and have a moment to think, is this okay for me and my life to interact with content that is bot-driven? Yes. Okay, then what other impacts do I have to think about that may not be good? Or am I concerned about these other risk factors and do I just need to stop and consider how my privacy, my private information, my confidentiality, in the researcher’s case, the study design could be situated with more safeguards? And so the impact really is gray in a lot of ways, but it is also just making sure that the individual interacting with the bot in whatever capacity is just taking that beat and being mindful of the consequences and the interactions that they’re having with that bot.

Daniel Smith: So to focus a bit, I guess, on some of the negatives of bots or the harms that they could cause, are there specific things that IRB members should be looking for when reviewing protocols involving online survey research?

Myra Luna-Lucero: Absolutely. I think the pandemic really put into central focus the necessity, in a lot of ways, to understand internet-mediated research, that is, research that is done on the internet or online. Because we were dealing with so many impacts of the pandemic and pausing in-person data collection, there was, in a lot of ways, this pivot that researchers made: “Oh, well, we’ll just go online.” And sometimes those decisions that they were making for internet-mediated research, for online research, were perceived as, well, it’ll be easier to engage online because, in the case of the pandemic, there was no in-person data collection during the worst of it.

But the challenge was the switch, that pivot from “oh, we’ll just go online”: not a lot of researchers who had been used to collecting data in person were prepared to handle online data collection. And so we as research compliance specialists had to work with those researchers to really think it through. It is not, in most cases, possible to mirror exactly what you might do in person, which could include a lot more safeguards applied to one-on-one interaction; that isn’t necessarily possible in an online, internet-mediated research project. It’s not to say that it can’t happen, but it’s not just that easy one-for-one pivot.

And so a research compliance specialist needs to be able to have candid conversations with the researcher about data security, which is a term that researchers and research compliance specialists do say a lot. But what does that actually mean for the researcher? When they’re thinking about data security, is it just, oh, we’re not going to collect identifiers? Okay, well, what does that mean if you’re not collecting identifiers, but you’re planning a longitudinal study? How are you going to be able to answer those kinds of questions with the design that you’ve proposed? And so for us, we really had to create a diligence and bot identification plan. And it was a lot of figuring it out on the fly. We had some internet researchers who had been doing internet research long before the big influx of internet research because of the pandemic.

And so we had to go to some of those more experienced researchers to understand: okay, you’ve been doing this for years; how has the pandemic now impacted your designs? But we also had them teach us, the research compliance specialists, some of those long-term safeguards that could apply to the mass of researchers who were now pivoting to online data collection. And that really just comes down to being observant, having safeguards, and consulting others to try and design a study that is sound, but that also considers the risk the design itself may face from bots. And for a research compliance specialist, that means walking the researcher through questions like, do you have attention checks? And the researcher may have never heard of that term. What is an attention check? An attention check is trying to assess that the survey taker or the online data collection participant is paying attention to the flow of the study, because the researcher is not one-on-one with that person; they’re out in the universe somewhere participating in the data collection in an online way.

So an attention check is checking with that participant: are you paying attention or are you just clicking through? But also, do those attention checks still remain vulnerable to being targets for massive numbers of bots? If the researcher is paying a participant, what are the ramifications of that payment if it becomes attractive to a computer designer who makes bots? These are the kinds of questions, sometimes nuanced, sometimes more broad, around being observant, having safeguards, and consulting others. But some of those conversations really just made sense anchored in a data security plan. And that meant that for every study we reviewed, not only before the pandemic but much more diligently during and after it, we made sure there was a study-specific data security plan in addition to hitting the big markers of a data security plan checklist.
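To make the screening ideas discussed above concrete, here is a minimal post-collection sketch of the kinds of signals a researcher might flag: a failed attention-check item, an implausibly fast completion time, and a repeated IP address. The column names, data, and thresholds are hypothetical placeholders for illustration, not the guest’s institutional plan or any particular survey platform’s fields; a researcher would adapt them to their own study design.

```python
import pandas as pd

# Hypothetical survey export; column names, values, and thresholds below are
# illustrative placeholders, not any particular platform's field names.
responses = pd.DataFrame({
    "respondent_id": ["r1", "r2", "r3", "r4"],
    "attention_check": ["blue", "red", "blue", "blue"],   # instructed answer was "blue"
    "duration_seconds": [412, 35, 388, 37],
    "ip_address": ["198.51.100.7", "203.0.113.9", "198.51.100.7", "192.0.2.44"],
})

EXPECTED_ANSWER = "blue"      # the instructed response for the attention-check item
MIN_PLAUSIBLE_SECONDS = 60    # study-specific threshold, e.g. set from pilot data

# Flag individual risk signals rather than automatically rejecting anyone.
responses["failed_attention_check"] = responses["attention_check"] != EXPECTED_ANSWER
responses["too_fast"] = responses["duration_seconds"] < MIN_PLAUSIBLE_SECONDS
responses["duplicate_ip"] = responses["ip_address"].duplicated(keep=False)

# Combine the signals into a simple count so a human reviewer can triage
# borderline cases instead of silently dropping legitimate participants.
flag_cols = ["failed_attention_check", "too_fast", "duplicate_ip"]
responses["flag_count"] = responses[flag_cols].sum(axis=1)

print(responses[["respondent_id", "flag_count"] + flag_cols])
```

Flags like these are indicators for human review, not proof of automation; for example, legitimate participants on a shared network can trigger an IP flag, which is exactly the balance between excluding bots and not punishing eligible participants discussed later in the conversation.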

Daniel Smith: So in terms of the bot identification plan that you mentioned, is that something that you see researchers incorporate into their specific data security plans, or does it exist separately from those?

Myra Luna-Lucero: They absolutely incorporate it into their data security plan. And I think it’s still evolving. I can’t say that we have the end-all be-all answers to understanding internet-mediated research, but we really are taking big steps to be present with the researcher when they’re designing their content, and also trying to find the regulations and the ethical compass that help us make these decisions for the protection of human subjects. It’s the big stuff I was just mentioning: being observant, building in safeguards, consulting others when the researcher is designing the content. But that also means the research compliance specialist is doing the same thing. We as research compliance specialists are relying on the regulations for the governance and the rules of what research with human subjects looks like. But we are also looking at it from an ethical perspective and sometimes just relying on that gut instinct or that question a research compliance specialist may have: “Researcher, you created this recruitment strategy, but do you think that’s going to leave you vulnerable to recruiting individuals who may not meet your eligibility criteria?” So I think in that data security plan, it’s a mutual understanding.

So researchers engaged in internet-mediated research are making the broad strokes of data security for recruitment, for data collection, for data storage, for observing internet activity, for confidentiality and privacy; all of that is woven into the broad strokes of it. But we’re also asking about a compensation protocol. What happens if the compensation embedded in the online study attracts potential participants who are not eligible for the study but do want the money? Compensating legitimate, eligible participants is obviously important, but does that compensation plan leave the survey itself vulnerable to being attractive to bots or malicious engagement? Those conditional logic questions that can be embedded in the survey itself, are they too restrictive for a participant who is eligible to be in the study? Is it now punishing a potential participant for the sake of keeping bots out? These are the kinds of candid discussions that occur in internet research now between the research compliance specialist and the researcher.

So yes, to answer your question, it is the broad strokes of the data security plan, which will always be the anchor in any online data collection protocol. But it’s also these nuanced, case-by-case engagements of research compliance specialists trying to understand what is going to be feasible to safeguard the participant who is eligible to be in the study while also considering the possibilities for a bot to infiltrate that particular online survey. And a lot of it does have to lean on the researcher for that information, because they are the experts in their content area. And we do ask them a lot of questions about their experience in internet-mediated research as well, and trying to find a balance between the two is important.
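One commonly used safeguard that illustrates this balance, keeping bots out without adding hoops for eligible participants, is a honeypot item: a question hidden from human respondents that automated form-fillers tend to answer. The sketch below filters a hypothetical response file on such an item; the file path, the column name honeypot_field, and the idea that the survey platform exposes the item this way are assumptions for illustration, not a description of the guest’s process.

```python
import pandas as pd

# Hypothetical export path and column name; "honeypot_field" stands in for a
# question hidden from human respondents, so a non-empty answer suggests an
# automated form-filler rather than an eligible participant.
responses = pd.read_csv("survey_export.csv")

honeypot = responses["honeypot_field"].fillna("").astype(str).str.strip()
is_honeypot_hit = honeypot != ""

suspected_bots = responses[is_honeypot_hit]
likely_humans = responses[~is_honeypot_hit]

# Keep excluded rows for audit rather than deleting them, so the exclusion
# decision can be reviewed and reported transparently.
suspected_bots.to_csv("excluded_for_review.csv", index=False)
likely_humans.to_csv("retained_responses.csv", index=False)

print(f"Flagged {len(suspected_bots)} of {len(responses)} responses on the honeypot item.")
```

Because a honeypot never appears to a human participant, it adds no burden for eligible respondents, which is one reason it is often preferred over adding more screening questions.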

Daniel Smith: So you mentioned kind of leaning on the researcher a bit to provide the information. Are there any things that IRB members can do to kind of proactively help researchers in developing these security plans and their protocols to account for some of the harms that might result from bots in this type of research?

Myra Luna-Lucero: Absolutely. I highly recommend you become best friends with your IT specialist. I have joined our IT leadership group. They’re much more focused on information technology for the institution, but I carve out a little piece for research with human subjects and technology related to human subjects research. And I have learned so much. I’m not an information technology specialist. I didn’t train in computer science. I don’t have the most cutting-edge knowledge of technology and how it impacts the world, but I can learn a lot and I can also rely on content experts who do know that information, and I think that’s really important. Research compliance specialists don’t have to feel lost on an island having to figure out all of this content on their own; they can do what we’ve modeled at our institution, which is this checkpoint for researchers.

The way that it works for us is that a researcher will submit an IRB protocol, an institutional review board protocol, for our review. We will review it as we do, pre-review, and ask the questions that we need to ask to clarify. And if it is content that is standard practice and we can have the pathway forward in a very straight, linear way, then we’ll go that route. But oftentimes we have questions as we’re reviewing a protocol that relate to technology. And the relay system, the touchpoint that we’ve created with our tech office, is that, in the instance where we as IRB reviewers have a question or concern about the technology which may exceed our content knowledge, we can tag an IT person and send the researcher there. The researcher then creates what we call a ticket or an inquiry ticket.

So the researcher reaches out to a tech expert at the institution and asks the questions about data security concerns that may, again, exceed our content knowledge because they relate specifically to a technology, software, or hardware. They get that feedback, they get that consultation from the IT office, and now the researcher has that much more knowledge to bring to the protocol, and then we have that cross-reference. Will we make every protocol perfect and will we be able to cover every base? No, but there is some assurance in tagging content experts, and there is some assurance in having representation from a research compliance specialist in the conversation about technology. And that just perpetuates greater understanding and more clarity, I think, about what the researchers are going to be dealing with when they put their survey or their study design in an online context.

And that relay system, those touchpoints, consulting experts in those content specialties, is just part of the diligence now for a research compliance specialist. And I would highly encourage anybody in this profession to find those allies to network with, because it does make such a difference. They have content expertise that may exceed your knowledge, and if they don’t have the answer, you can’t be expected to have the answer either. But being able to ask those tough questions and getting to some resolution is really the only pathway forward.

Daniel Smith: That’s great advice and an important perspective as well. So my final question for you today is: Are there any additional resources that our listeners can check out to help them better understand and navigate the issues that we have discussed today?

Myra Luna-Lucero: It’s a multipronged kind of pathway to gain information for a research compliance specialist reviewing a protocol. The first is consulting with and engaging with colleagues. Having those networks and those… I like to think of it as the role models or the websites that have the content that just makes a lot of sense, but maybe isn’t necessarily content that you have yet for yourself. And then reaching out to those research compliance specialists at that other institution, sometimes even cold calling and saying, “Hey, you have this great website. It seems like you found this information that’s really valuable; I’d like to talk to you about it,” and kind of just having that network. That’s a really good place to start, because you can have the research compliance-to-research compliance conversation, where two specialists in that field who understand the regulations and understand the ethics can talk through the “well, this worked for us, maybe this will work for you” kind of conversation.

The second part of that outreach is really just spending time reading literature. Sometimes it’s hard to find the time as a research compliance specialist to just sit down and think through and read through what internet-mediated research is all about. But I highly encourage it, because you, at the end of the day, have that responsibility for making decisions for the protection of human subjects. And if you don’t know everything about the topic, it’s okay, but knowing the gist of what questions to ask, when you’re trying to apply the compliance regulations, your own understanding of what’s happening, and any ethical concerns that you may have, really comes from understanding what the researcher is intending to do, so you can interpret the regulations for what they’re intending to do and ask the researcher that question.

The next is, as I was saying, making those networks with content experts specific to the area at hand, and in this case, it would be technology. So finding that IT person who gives you the information that you need, or at least has some pathway to get you toward the information you need, so that you can really assess: okay, this researcher is creating this survey. It’s very restrictive in trying to keep the bots away, but it’s also going to potentially punish an eligible participant because it creates so many hoops for that eligible participant to engage in that study. Is that the fair balance? Is that justice? Is that weighing the pros and cons of the ethical considerations in the study and mitigating that risk?

Being able to answer those questions comes from networking with experts who know compliance, your own understanding and conversations with the researcher, your own due diligence in understanding the topic, and then connecting with IT content experts as well. And I think that’s the three-pronged coalition, that connection of multiple sources, that helps you as a research compliance specialist make the necessary decisions, really setting a precedent for what research is going to look like at your institution, and then exercising, to the best of your ability, due diligence in protecting that human subject as you’re assessing that protocol.

Daniel Smith: Thank you, Myra. I think that three-pronged approach is a perfect place to leave our conversation for today. It leaves people with a lot of helpful advice to think about and learn more as you mentioned. So on that note, I invite everyone to check out CITI Program’s On Research Podcast, which is hosted by my colleague Darren Gaddis. In an upcoming episode, Darren and Myra are going to dive further into the implications of bots in survey research. In that conversation, they’re going to talk about the ethical implications of bots in survey research, the positives and negatives of bots in survey research, and more. You can subscribe to On Research wherever you listen to your podcasts. So that is all for today and I look forward to bringing you all more conversations on all things tech ethics soon.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.





Meet the Guest


Myra Luna-Lucero, EdD – Columbia University

Dr. Myra Luna-Lucero is the Research Compliance Director at Teachers College, Columbia University. In addition to supporting researchers, she has recently launched an ethics internship program and an extensive transformation of the College’s IRB website. She regularly offers seminars and workshops on research compliance and IRB leadership.

 


Meet the Host


Daniel Smith, Associate Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Associate Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.