Season 3 – Episode 5 – Engaging with Online Communities: Ethical Considerations for Researchers
This episode discusses how to responsibly study digital communities without violating their trust.
Podcast Chapters
To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.
- Podcast Overview (00:00:30) Introduction of the podcast’s purpose and the ethical controversy surrounding Reddit’s CMV subreddit.
- Dr. Sarah Gilbert’s Background (00:02:27) Dr. Gilbert shares her experiences and interest in online research ethics.
- Recap of the CMV Situation (00:05:17) Summary of the CMV subreddit ethics scandal and its implications.
- Ethical Responsibilities in Research (00:06:27) Discussion on the ethical responsibilities researchers have when studying online communities.
- Informed Consent Challenges (00:08:44) Exploration of informed consent in anonymous online spaces like Reddit.
- Disclosure in Deceptive Studies (00:10:19) Discussion on the ethics of disclosure in deception studies and participant opt-out.
- Ethical Justifications for Concealing Identity (00:11:37) When concealing a researcher’s identity is ethically permissible for safety.
- Impact on Community Trust (00:14:07) How research interventions can undermine trust in online communities.
- Balancing Rigor and Impact (00:16:31) Discussion on maintaining research integrity without affecting community dynamics.
- Disclosure After Research (00:20:50) The importance of informing communities about their involvement in research.
- Understanding Community Nuance (00:21:35) Discusses the importance of understanding community rules and dynamics for effective and ethical research.
- Platform’s Role in Research Protections (00:25:51) Explores how moderators can protect community members and improve research quality through collaboration.
- IRB Limitations in Online Research (00:29:53) Examines the challenges faced by Institutional Review Boards in evaluating internet-based research.
- Resources for Ethical Research (00:33:27) Recommends tools and resources to aid researchers in conducting ethical studies within online communities.
- Final Thoughts on Research Ethics (00:36:44) Concludes with reflections on the complexities of ethical research in public online forums.
Episode Transcript
Dr. Sarah Gilbert: Well, this is probably not going to be all that popular, but I think sometimes we just need to think about whether we need to opt out of a research project. Sometimes, if it's going to compromise the rigor, and we can't do something both scientifically and ethically right now with consent, then maybe we shouldn't be doing it. Maybe we should just say, "No."
Alexa McClellan: Hello, and welcome to On Research with CITI Program, where we discuss issues that impact scientific research and research compliance. I'm your host, Alexa McClellan. Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization's attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guests. In early 2025, Reddit's Change My View subreddit, known for thoughtful debate, was quietly infiltrated, not by trolls or spammers, but by AI bots posing as real users. Behind it? A research team from the University of Zurich. Over 1,700 comments, zero disclosure, no consent, and now an ethics firestorm. What happens when academic curiosity collides with community trust?
Today we're diving into a heated and timely topic: the ethics of internet research, particularly in light of the recent controversy on Reddit's Change My View subreddit. So, how do we responsibly study digital communities without violating their trust, and where do we draw the line between research and interference? Joining us today is Dr. Sarah Gilbert, a research associate at Cornell University and research director of the Citizens and Technology Lab, where her work focuses on supporting healthy online communities, including how online data can be reused ethically. Together we'll unpack the CMV case, explore how current research norms are being tested in real time, and ask what a more transparent future might look like. Dr. Gilbert, thank you so much for talking with me today.
Dr. Sarah Gilbert: Thank you for having me.
Alexa McClellan: So, I’m really interested to get your take on the current controversy with the CMV subreddit, but before we get to that, can you tell us a little bit about your background, and what got you interested in online research ethics?
Dr. Sarah Gilbert: Yeah, so it was really my experience doing research with online communities, starting from when I was doing my PhD. I was actually doing a couple of case studies with online communities, one on Twitter and one on Reddit, and both of them had really strong views, or really nuanced views, about how research ethics should be done. The first community, on Twitter, was actually one dedicated to discussing healthcare in Canada called HCS-MCA. One of the things that I learned from talking to members of that community, who had been patients in a lot of research studies, medical studies, is that they talked a lot about this idea coming from disability studies, "nothing about us without us," because in this case people had gone through these research studies about rare diseases and had their big hopes, right? It's like, "Wow, I'm sick, and participating in this. This might actually make me healthy, and if it doesn't make me healthy, maybe it'll help other people."
But they weren't hearing anything from the doctors, the physicians, and the research teams that they were working with. Sometimes they would have no idea if what they had been given was a placebo. Sometimes they would have no idea if the people were even thinking about them. And so there was this idea, again coming from disability studies, "nothing about us without us." That really zeroed in for me on this idea of informed consent, and that we should be doing this in online communities as well, thinking about how we can take the lessons learned and the advocacy from these patient advocates and translate that into online research. And then the other one, a Reddit community called Ask Historians that I still work with, was where I was learning more about Indigenous methods and working more ethically with Indigenous communities. One of the moderators, Kyle Pittman, had actually put together these amazing resources for working with Indigenous communities, as he is a descendant of the Nez Perce and Yakima tribes, and so he thinks really deeply about this.
And so again, I was thinking about how I can creatively work with online communities, drawing from all of these important lessons that we already know. We already have information about this; we just need to apply it to these different contexts. That led me into a couple of different questions: how people feel when their social media data is used for research, and also how we as researchers can be more inclusive and more collaborative, especially when doing these kinds of large-scale quantitative projects that make traditional ethics practices really difficult.
Alexa McClellan: That's fascinating. I'm really excited to get your perspective on what's been going on currently, specifically with the CMV subreddit situation that came to light recently. So, for those listeners who may not be familiar, I'm just going to recap the situation. In early 2025, Reddit's Change My View subreddit became the center of an ethics scandal after researchers from the University of Zurich conducted a covert experiment using AI-generated personas. Over several months, these bots made over 1,700 comments posing as real users, complete with fake personal details, to test how effectively AI could change people's opinions. The researchers didn't inform subreddit moderators or users, violating both CMV's rules and basic ethical standards around consent and transparency.
Reddit condemned the experiment as improper and highly unethical, and is pursuing legal action. Meanwhile, the University of Zurich issued a warning to the lead researcher but defended the value of the findings. The incident has ignited widespread debate over how AI is used in research and the urgent need for better oversight when studying or engaging with online communities. So, given that background, can you tell us a bit about why this is a big deal? What ethical responsibility do researchers have when engaging with, or studying, online communities, and how was it violated here?
Dr. Sarah Gilbert: Yeah, so I think there are a couple of elements here that created, I think, this kind of perfect storm of a controversy that really plays on people's emotions in a way that makes a lot of this feel very violating. And one of those is the element of AI. It's something that is a relatively new technology, something that we are all very familiar with, but that is also kind of unknown. And then there's the element of persuasiveness, of being manipulated, the fact that you're part of this study where the goal, or the aim, is to persuade you of something that you might not have necessarily thought, all without your knowledge and your permission. That feels very violating. I didn't sign up for this, I don't want to be part of this. So, there's that; it's a very personal thing, right down to your core: what do you believe?
And this is changing it. And then I think there's also the community itself. It's where people are going specifically to engage in public, not necessarily to debate, but to be open, to have their minds changed, and to have their assumptions questioned. So, people are going to it, I think, a little bit vulnerable and a little bit open. And then to have that undermined through these bots, or AI, feels very violating. So, there are a couple of different elements there that I think make this scary, and it's not something that I would want. And I think the researchers also highlight that this is a real problem. We are worried about this happening in real life. I understand why they would want to study it, because there is this real threat that bad actors are going to be doing this, and that we're not going to know, and that we could be manipulated in this way. And so I think it's playing on a real-life fear as well, except in a slightly different context.
Alexa McClellan: Yeah. And that brings us to a foundational issue, which is consent. So, in online spaces like Reddit, what does informed consent really look like when users are often anonymous, and the lines between public and private are blurry?
Dr. Sarah Gilbert: There are a couple of things that we can be doing to think more creatively about how to get informed consent in some of these research studies. You mentioned in the description of this particular study that the researchers did not go to the moderators of Change My View. And that to me is really the first step if you are doing research that is going to interfere with the community at all, by posting, by engaging. We think of this traditionally as more like human subjects research when we are asking them questions, when we are surveying them, but this is not actually that different. We're still doing human subjects research by making this post in this community where there are humans. And so getting that kind of community consent from the moderators is one first step that you can do that the researchers in this case did not.
And I think we also need to think about when it's going to be important to get that individual-level consent. In this case, even if the moderators had consented, I think it's risky enough, and you're interfering or engaging with individuals directly enough, that they need to be notified, too. And if you can send messages by a bot to individual users by the thousands, there should be a way for you to reach those individuals to do a more traditional disclosure type of thing and let them know afterwards. I think it would be very challenging to do that beforehand. We have these norms and practices set up for deception studies that I think it is possible to replicate. I do think that we need to be thinking carefully about whether or not the disclosure afterwards during these deceptive studies is actually effective, especially if it involves something like manipulation, when you're outside of a lab and not already primed to think, "I'm part of this research study."
I think we need to do more research on the way we go about disclosing things, to make sure that the disclosure itself is also ethical. But I think that's something that's really important. And then also giving people a chance to opt out if they don't want to participate in the study, which you can do in a traditional deception study: afterwards, you always give them the consent, and they can say, "Nope, I don't want my data to be used anymore." That's part of the process, and none of that happened here. And I think that's something that we need to be doing. If we have the capability of reaching out to people and manipulating people, we also have the capability of letting them know after.
Alexa McClellan: Yeah. I hope we can talk a little bit more about deception studies and how they are normally handled, because some researchers would argue that revealing their identity upfront could bias the results. Can you talk a little bit about whether there's ever a case where concealing one's identity is ethically permissible?
Dr. Sarah Gilbert: Yeah, I think it actually is. So, there are cases where the identity of a particular researcher and the topics that they are studying could actually put them in danger, put them at risk. Think, for example, if you are a trans researcher studying transphobia in a community: if they find out, or if you reach out to them, they are likely to say no, not because of the science, not because of anything that you're doing, but simply because of who you are. Or it might also result in a whole bunch of doxing, online harassment, or abuse. There have been a lot of studies about the experiences of researchers engaged in risky research, so communities that may be hostile to research or that may be hostile to particular groups of people. And when you are yourself a member of a marginalized community, if you are kind of studying up, that could actually result in a lot of harassment and abuse targeted specifically because of who you are and what you've chosen to research.
And I don't think that means that we should not be doing risky research, or that people who are marginalized should not be engaging in particular types of research, just because it's more dangerous for them to reveal who they are when doing this research than it would be for somebody else, for example. And so in those kinds of cases, I think it's totally justified not to alert the community, or alert the individuals that you're studying, if that research puts you at risk. There's a power dynamic there that we should be considering when requiring things like consent and disclosure; it's not going to be the same experience for everybody, and it's not going to be the same experience in every context, and we need to account for that.
Alexa McClellan: Yeah, that's interesting. I've always heard of that dynamic where the participant is the one who's the most vulnerable, not the researcher. So, I think it's really interesting how you flipped that and said that there are situations when we might need to be concerned for the researcher's safety, too, as the person with less power in that dynamic. So, thinking about the impact on communities: subreddits like CMV thrive on mutual trust and good-faith discussion, so how do research interventions, even well-intentioned ones, risk undermining that?
Dr. Sarah Gilbert: Yeah, I think this really goes back to understanding the goals of a particular community and thinking through what your research is doing in relationship to that goal. So, is the research that you are doing going to help that community advance its goal, or could it potentially undermine it? In this case, people are reaching out to other people to have their minds changed about something, and introducing something artificial can undermine people's trust in who they are engaging with when they go to this particular community, because they're not actually engaging with the humans that they thought they were. And it might actually discourage people both from coming back to the community and from contributing as a human. It takes a long time to formulate a good argument, particularly if you want to provide some sources to be more convincing, to work with the person and their understandings, to empathize with them. All the things that might help somebody change their view take time when a human does it.
And why are you going to do that if you think that a bot can just do something similar instantly? So, in a sense, the bots can crowd out the human participants on the other side, too. So, there's trust on both ends there that can get undermined, both for the people who are going to have their views changed and for the people who are going to learn something new. But if you're giving people bad advice, or you are preventing people from getting the advice that they need, there's this other layer of harm there, too. And that's always going to be the case no matter what community you're researching and no matter how you're doing it; the intervention is always going to interfere with it. And so what you really need to do is think about when what you're doing is going to be in support of the community's goals and when it might risk undermining them, and take that into consideration when weighing whether or not this is a good thing to do, or a good thing to study.
Alexa McClellan: And I think that disruption really leads us into the issue of balancing rigor versus impact. How can researchers maintain integrity without unintentionally steering or distorting the conversations they study in order to get good data in the end?
Dr. Sarah Gilbert: Well, this is probably not going to be all that popular, but I think sometimes we just need to think about whether we need to opt out of a research project. Sometimes, if it's going to compromise the rigor, and we can't do something both scientifically and ethically right now with consent, then maybe we shouldn't be doing it. Maybe we should just say, "No." Maybe the science is not worth it in this case, or maybe we can learn incrementally from a lab experience, or experiment, what we would need to learn from a field experiment. That was one of the more striking things with the CMV arguments: there have been many studies in a lab environment, lots of them, that have found that the LLMs, the large language models, the AI, are very persuasive.
We know this, we know this. So, what does a field experiment add? People kept saying, "We need ecological validity, we need ecological validity," and there might be an answer to that. It's not one that I have seen articulated in a way that's particularly convincing, especially given the harm that it can do. Is this worth it? Is what we already know from the science about persuasiveness enough? And so those are the kinds of questions that we should be asking: maybe we just shouldn't study it if we can't do it ethically, or if we can't do it in a way that's not disruptive.
Alexa McClellan: Yeah. That's something that, when I was thinking about this interview and talking to some of my colleagues, the reaction was like, "Well, yeah, we know that AI is out there influencing our everyday decisions all the time, so what makes this different? It already happens." So, the risk, we all kind of know, is out there anyway, so this research is just the same thing. I mean, I don't know. I think you made a good point. We already know it's harmful, so why are we doing it?
Dr. Sarah Gilbert: Yeah. And I mean, I'm open to the fact that maybe there's something there, but you have to weigh what that something is against what it did to the individuals in that community, the moderation team that has had to deal with an enormous amount of labor because of the fallout, and how that's affected their community and the health of the community long term. Are they going to be able to come back from this? Are they going to be able to bounce back? That is a question that I would be very worried about if I were a moderator of Change My View, not to mention the groups of people that the chatbots impersonated, right? People are coming away with a particular impression of these groups that they're now taking with them, because they don't know if it was a real person or not. So, there are all these layers of harm, or potential harm, or risk of harm. And is whatever we learn from this, is that ecological validity so integral to what we know, that it's worth risking all of these things?
Alexa McClellan: Not to mention the fact that it was against the board's rules and policies. So, they were intentionally going against the things that the community tried to mark as safe.
Dr. Sarah Gilbert: Yes. Yeah, there's a reason why they have the rules in place. I'm a moderator of another similar subreddit. I did research with Ask Historians, ended up continuing to study them further, and became a moderator. We have very similar types of rules. We don't ban AI specifically; it falls under the anti-plagiarism rule. But I can just imagine we would say no to a similar study if somebody had asked us, and just how worried I would be for the health of the community long term afterwards. I really feel for the moderators who have spent a ton of time developing their rules and enforcing them, and building a community where people can safely engage in civil discourse and have their assumptions questioned, only to have that just blown through and undermined in this way. It's heartbreaking.
Alexa McClellan: Let's talk about disclosure. Should moderators, or even users, be informed after the fact when their community is part of a research project, and does that compromise, again, the natural environment that researchers might want to study in the future?
Dr. Sarah Gilbert: Disclosure after the fact I don't think would, and I do think that there is room for being creative in how we make people aware that either they, or their data, or their community has been used in a research study. In particular, I think if there are benefits that the community derives from participation, or from being included in a particular research study, we should definitely be sharing those back with them. That's part of the principle of beneficence: if there is a benefit, it's unethical to prevent communities, or individuals, from getting access to it. So, there are a lot of really good reasons to share the results back. There are also a lot of really good reasons to work with communities from the onset. There are a lot of specific nuances that online communities have, even if they seem similar to others or are on a popular topic, especially on Reddit, where anybody can create a subreddit if they want.
You get a lot of repeats on specific topics, and usually a new community is created when the existing community doesn't quite meet a particular need, or a niche, or something like that. And so you get all of these communities that sort of spawn off of another one. So, each one is a little bit different, and we need to account for those differences, and for that nuance, in the research. Some of it we can do by getting to know a community, lurking, spending time there, subscribing to it, understanding what the community is actually all about, but also through working with the moderator team themselves. I mentioned I'm a moderator of Ask Historians; we were actually the subject of a study not anywhere near as unethical as this one, but it was a group of researchers who had used Ask Historians data, as well as data from a number of other history-related subreddits, to sort of map out, using social network analysis, the different ways that conversations happen about history on Reddit.
And Ask Historians has very strict rules and operates very differently than everywhere else on Reddit, and there is a reason and a rationale for that, and that's the community's public history mission, which, if you are not a regular user on Ask Historians, you're probably not going to really get. We see people coming into the community all the time who don't understand what the rules are, why they're in place, and why they're so strict, and as a result, almost everything gets removed. The idea is to provide these in-depth, comprehensive answers, which take time to write. So, everything else gets removed: one-liners, "go Google it," "check ChatGPT," which we get a lot now. That's all removed. That is obviously visible when you do a social network analysis, and this study ended up characterizing the moderation as authoritarian, a treatment that is not bad in itself, but they didn't account for the rules and the rationale for them, and sort of cast it as this horribly authoritarian place that was no good for learning about history.
The people who are there, the members, the rules, they're all there for a specific reason, and not accounting for that is not necessarily helpful; it resulted in a study that I don't think is particularly good. That's another reason why you should be spending time in the communities, and talking to the moderators, and talking to people: it actually is going to make your research a lot better if you understand why things are happening the way that they are, and the context behind all of this. So, in addition to being more ethical, I think getting this consent in different ways, from different people, at different times, gets you better research results, flat out.
Alexa McClellan: Yeah. It reminds me of the stereotype of helicopter research where researchers would drop in someplace relatively unknown to them, do the research, take the results, and then just leave. And the community suffers because of that. They’re not getting the benefits, they’re not involved in the process. It’s not community-based research in any way.
Dr. Sarah Gilbert: Yes, exactly, exactly. And a lot of moderation teams are very willing to work with researchers, and some of them aren't. And if you don't want to be studied, or if you have certain parameters around how you are going to be studied, we shouldn't be doing that; that should be respected. But in most cases, I think there are a lot of really great reasons for talking to communities and talking to moderators, and getting advice and support for how to do your research better.
Alexa McClellan: I think that really leads into the next topic that I wanted to talk about with you, which is oversight, and what role moderators and community boards like Reddit have in trying to protect the members of the community. What could they have done differently? Is there anything they could have done differently?
Dr. Sarah Gilbert: So, I don't know if I would necessarily say that anybody could have done anything differently, because you can have all of the policies that you want, which Change My View did. They had an anti-AI policy, and the researchers just kind of unilaterally decided, "Oh, no, what we are doing is not in violation of those rules, or it's not in violation of the spirit of those rules,"
Alexa McClellan: Or it’s more important.
Dr. Sarah Gilbert: …than those rules. The research is more important than…
Alexa McClellan: Right, exactly.
Dr. Sarah Gilbert: Yeah. So, one thing that Ask Historians has done, actually in collaboration with some folks at the University of Minnesota led by PhD student Matthew Zhent, who helped us do some research about our community's values, is turn that into a research policy, in collaboration with us. So, we actually have a research policy that lays out all of these different sorts of scenarios, what we would want, whether you can let us know afterwards, and best practices for engaging in research with us. And I know there are other communities that have done that as well. Indian Country also has a research policy. They've had one for a number of years that's driven by a lot of Indigenous perspectives on research, trying to understand, in a very particular context, what the goals of the researchers are and why they want to use this community, especially given the context that Indigenous populations have been so abused by researchers over the years. And so they created a policy to help them work with researchers so that it's not extractive or exploitative.
And so I think that there's perhaps a little bit of room for Reddit to help moderators support that, and for IRBs to also support that, right? It would be really nice to know that if this Ask Historians policy was violated, I could take that to an IRB and say, "Look, we have this public policy posted, and this research violated it. This means something. This actually matters." Reddit could possibly do that. I am very hesitant to grant Reddit too much gatekeeping power here, because sometimes research is going to be perhaps even antagonistic to Reddit and its underlying goals, or it might reveal that Reddit is actually doing something harmful, and if Reddit has this gatekeeping power, it could actually be abused to block research that's holding the platform accountable for bad things happening. And so there's this balance between the power of the platform, the independence of researchers, and the support of the community that's really challenging.
Although I will say, to Reddit's credit, they are stepping up and supporting, as far as I understand, the Change My View community, which I think is a good thing in this particular case. There could be a Reddit-wide policy developed by researchers and communities, regular Redditors, moderators. That's something that I think would be really powerful, to have a statement out there, because, again, if you have these policies that people can take to the existing structures, at least within the United States, I think that could be a really powerful thing, as long as they're also recognized by IRBs as something that is important, that is legitimate, and that should be valued and abided by.
Alexa McClellan: Yeah, you bring up IRBs, and just for those who may not know, an institutional review board in this country is tasked with reviewing all human subjects research. So, this research was actually reviewed by an IRB at the University of Zurich, and from what I understand, they suggested minor changes to the methods and stated that the researchers should follow all of the board's rules. Are current institutional review boards equipped to evaluate the nuances of internet-based research, especially when dealing with gray areas like this case?
Dr. Sarah Gilbert: So, that's tough, because I think institutional review boards play different roles throughout the world, and in many places it's almost more of a compliance type of thing, or they're really limited in terms of the harms or the risks that they will actually look at, which could end up missing major harms or benefits. Like, if all you are doing is focusing on potential risks and benefits to the individual human subject, you might miss out on the benefits and harms to groups of people, or the whole online community, or even society more broadly. And whether or not that's a question for IRBs to review, I think, is a huge thing, probably beyond what we could talk about in a relatively short podcast. But I also think that there's a lot of responsibility that researchers should be taking on themselves. You cannot rely on an IRB in order for your research to be ethical, because they don't have the time.
An IRB has to be expert in all of the research that everybody is doing involving humans throughout an entire university or institution. The Zurich IRB is not going to spend a whole lot of time on Change My View, understand all the nuance, read all the rules, and make this determination based on the specific context of the specific space. I think there are some high-level things that they could take a look at, but a lot of work that's going to be really ethical is going to be so contextual, and it's going to require so much detail, and there is also a risk that we don't want important research to get held up. So, I think there's a lot of work that individual researchers need to do to make sure that they are doing ethical research and doing right by the communities that they are studying, and then the IRB afterwards can provide a place of recourse, so that if a community does come and report something as a violation, they are prepared to respond.
Because that's another failure of this story, I think: the Change My View moderators did approach the IRB, who told them, "No, it was reviewed, and it's fine, and we determined that the risk was minimal. So, you're fine." And I think that right there is advice for an IRB: if a community comes to you and reports a study and says, "No, we were wronged," actually listening to them is a really important first step.
Alexa McClellan: Yeah. I thought it was interesting, and I wanted to just bring it up quickly. You talked earlier about why the study was even necessary, and I think that speaks to scientific merit. I don't think a lot of IRBs review the scientific merit of a study at all; it's purely about protection of the participants in the study. So, that's something that might get overlooked, especially at smaller universities that might not have another review board that studies go through.
Dr. Sarah Gilbert: Yeah. Well, an IRB should not be evaluating the scientific merit of a study, which is why I think it’s really important for researchers to be weighing those considerations. What is the scientific merit? What does the community stand to benefit at what cost, and should I be moving forward with it?
Alexa McClellan: Yeah. So, in conclusion, for researchers who might be interested in doing more research with online communities, and for the IRBs who are reviewing these studies, do you have any advice, or resources that you could share?
Dr. Sarah Gilbert: Yeah. Well, I have a whole bunch of papers I could probably recommend, and there's also the [inaudible 00:33:33] research tool. It's a really, really great decision support tool that takes you through a whole bunch of different scenarios and provides guidance for what you should do in a bunch of different types of scenarios, depending on the kind of data that you're working with, the context, all of that kind of thing, to provide really helpful advice for people who maybe don't necessarily know what they should be doing or what the best practices are. And there are all kinds of really helpful resources and things like that, too. My lab also has a software tool called Bartleby that allows people doing research in online communities, like large-scale online communities, to actually reach out to individuals. It's an afterwards kind of thing. Basically, it discloses to them that their data was used in a research study and allows them the opportunity to remove their data from the study if they want.
So, there are tools out there that can help you do that. I wouldn't necessarily recommend using that tool for every single study all the time, because it would get really spammy. People are going to get very annoyed by a very low-risk study that uses one teeny tiny piece of their data; it's like, "Why are you even messaging me?" Reddit, because it still has an open API, being one of the few places that does, and because of its topic-based communities, is a really valuable source for research and research data, so it is used by thousands of people, by hundreds of research teams every year. So, providing that kind of disclosure every single time is likely to actually be harmful and cause disruption.
But for high-risk studies like this one, something like that would be pretty important to let people know afterwards. And also, my colleagues and I, led by Casey Fiesler, actually have a study where we reviewed people's ethical approaches to doing research on Reddit, and so we were able to gather a whole bunch of best practices and provide recommendations for Reddit research.
Alexa McClellan: Oh, great.
Dr. Sarah Gilbert: So, that's another resource that I would recommend people take a look at if you're doing Reddit research, since we've taken a whole bunch of studies, learned from the best, and put them all in one place for you.
Alexa McClellan: That’s wonderful. That’ll be so useful. Dr. Sarah Gilbert, thank you so much for your time. It’s been fascinating talking to you.
Dr. Sarah Gilbert: Well, thank you for having me, and I'm really glad that we are having this conversation. Part of it is that Reddit doesn't always necessarily get taken very seriously, even though it is this hugely popular site for research. Because it was on Reddit, as opposed to something like Facebook, or Twitter, or one of the more, I suppose, mainstream platforms, people might not necessarily hear about it, even though it's an incredibly egregious case for all of the reasons that we mentioned at the beginning. And so I think it's really important to have this kind of discussion, especially with people who are thinking critically about these kinds of things and making these kinds of evaluations about whether or not they should be conducting research like this.
Alexa McClellan: Absolutely. Thank you again. That brings us to the end of today's episode of On Research. A big thank you to Dr. Sarah Gilbert for unpacking the complex, and often uncomfortable, ethics of internet research. As we've heard, public forums may be open, but that doesn't mean that they're fair game for unchecked study or influence. Whether it's informed consent, transparency, or community impact, the standards we apply offline are increasingly being tested in online spaces. If today's conversation made you pause, question, or even change your view, then this episode did its job. Thanks for joining us today. If you enjoyed this episode, be sure to subscribe, share it with a colleague, and stay tuned for more conversations that celebrate scientific research and the people who keep research ethical, responsible, and impactful. I also invite everyone to visit CITIProgram.org to learn more about our courses, webinars, and other podcasts. Cynthia Belas is our guest experience producer, and Evelyn Fornell is our line producer. Production and distribution support provided by Raymond Longaray and Megan Stuart. Thanks for listening.
How to Listen and Subscribe to the Podcast
You can find On Research with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting "https://feeds.buzzsprout.com/2112707.rss" into your podcast app.
Recent Episodes
- Season 3 – Episode 4: CITI Program Turns 25: A Celebration of Commitment to Research Integrity
- Season 3 – Episode 3: Community Engagement in Research
- Season 3 – Episode 2: Dual Use Research of Concern Policy
- Season 3 – Episode 1: Coverage Analysis in Clinical Research
Meet the Guest
Sarah Gilbert, PhD, MLIS, BA – Cornell University Department of Communication
Dr. Sarah Gilbert (she/her/hers) is a research associate at Cornell University and Research Director of the Citizens and Technology Lab where her work focuses on supporting healthy online communities, including how online data can be reused ethically.
Meet the Host
Alexa McClellan, MA, Host, On Research Podcast – CITI Program
Alexa McClellan is the host of CITI Program’s On Research Podcast. She is the Associate Director of Research Foundations at CITI Program. Alexa focuses on developing content related to academic and clinical research compliance, including human subjects research, animal care and use, responsible conduct of research, and conflict of interests. She has over 17 years of experience working in research administration in higher education.