
On Research Podcast – Artificial Intelligence in Research

Season 1 – Episode 3 – Artificial Intelligence in Research

Artificial intelligence, or AI, is used more and more frequently in different settings, including research. In research, AI can help optimize resources, synthesize and analyze data, and tackle problems. Yet much about AI in research is still unknown. How do you obtain informed consent from participants and protect their data when utilizing AI? How do researchers account for bias or deception when utilizing AI in a research setting? Chirag Shah, PhD, discusses these issues and much more on this episode of On Research.

 


Episode Transcript


 

Darren: From CITI Program, I’m Darren Gaddis, and this is On Research. Today: what is artificial intelligence, or AI? How is AI applied in both qualitative and quantitative research, and what are the ethical considerations for researchers utilizing AI technology? I spoke with Chirag Shah, Professor of Information and Computer Science at the University of Washington. He is the Founding Director of the InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems and Experiences. He works on intelligent information access systems with a focus on fairness and transparency. As a reminder, this podcast is for educational purposes only. It is not intended to provide legal advice or guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations discussed in this podcast. Additionally, the views expressed in this podcast are solely those of the presenter and do not represent the views of their employer. Hi Chirag, thank you for joining me today.

Chirag: Glad to be here.

Darren: To get us started today and to help ground this conversation: what is AI, or artificial intelligence?

Chirag: Good question. And of course, these things are kind of getting confused with one another. So traditionally, AI is really where we imagine artificial systems mimicking human behavior in tasks. And so, a lot of the sci-fi movies or stories that we are familiar with imagine some kind of autonomous system, typically a robot, that would act like humans. And so, that’s sort of the vision for a typical AI.

But then of course, you start questioning, “Well, just because something acts intelligent doesn’t mean that it actually is intelligent. Does it have intelligence?” And so, it turns out, for a lot of the things that we want to use these systems for, it doesn’t matter if they’re actually intelligent or just acting intelligent, as long as they can get the task done. So, machine learning is really about training machines to do human tasks. Whether they look like humans or behave like humans doesn’t matter, as long as they can actually improve with more training and be able to do the kinds of tasks that typically humans would do.

So, machine learning is a subfield of AI, and depending on where you focus, often AI essentially ends up being machine learning. So today, of course, the predominant applications of AI, the dominance of AI that you see, are really, truly through machine learning applications. But there are still some parts of AI that are outside of machine learning; they just usually don’t get a lot of attention.

Darren: And Chirag, with this understanding of artificial intelligence or AI, how is it currently applied to research settings for both qualitative and quantitative research?

Chirag: Typically, there are three categories that we recognize. There is artificial narrow intelligence. This is where you see things like chess-playing AI, for instance. That AI is really trained to be a chess player. It’s not going to do your household chores or drive your car, but it has learned to be a really great chess player. So when you have an AI system that’s specifically designed for a task like that, that’s what’s called artificial narrow intelligence, as opposed to artificial general intelligence. And that’s the sci-fi vision of a robot that can play chess, do your grocery shopping, play badminton with you, and do all those things. It has general intelligence like humans. We humans have general intelligence. We are not meant or trained to do just one kind of task; we can do all kinds of intelligent tasks. So, artificial general intelligence refers to that.

One of the reasons this distinction is important for people to know is that often we see a system that is really doing artificial narrow intelligence, but we extrapolate it to general intelligence. And this kind of happened, if you may recall, a year or so ago, with somebody at Google thinking that their AI agent, a conversational agent for retrieving information, had become sentient. And so, this is a common misperception, where something that was designed to do some specific task is perceived to actually have intelligence, a general intelligence like humans, who can feel and perceive things and so on. So, it’s very important to understand that most things that we see today really fall under artificial narrow intelligence, not general intelligence. And then there’s a third category, which is really a sci-fi thing: artificial super intelligence. And that’s where you see Terminator kinds of robots taking over the world. And thankfully we’re not there yet.

Darren: And what are some benefits to using AI technology in a research setting?

Chirag: It depends on your notion of AI, what AI is and what it does. These days, so many things that we work with, in terms of our tools and our techniques, come from machine learning. And since machine learning falls under AI, you can say, “Well, that’s really from AI.” So examples are where we’re working with large data sets, building large models for natural language applications, studies with text processing, vision, recommendation. A lot of these things are coming from AI or machine learning, and they’re impacting both the quantitative and the qualitative.

Quantitative is easy to understand because, obviously, when you have a ton of data, a ton of text or images that you’re processing, something that would take a long time or a lot of manual effort, you can use some of these AI tools to do things like extracting relevant passages and detecting patterns. So that’s definitely applying ML or AI techniques to quantitative methods.

But we also see it in qualitative methods. A lot of qualitative methods involve collecting, say, survey responses or interview data. There are tools that allow one to go through this automatically to extract concepts. One of my collaborators had this tool called Texifter, and there are many other tools like that one could use to analyze, say, social media data, things that would traditionally take a qualitative researcher a long time to tag through to extract the concepts and do grounded theory or some other approach. Now, being able to run these kinds of tools cuts down the research effort considerably. So, there are lots of places where these methods and tools are being used to support both quantitative and qualitative research.
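To make the idea of automated concept extraction concrete, here is a minimal sketch in Python using scikit-learn’s TF-IDF vectorizer and NMF topic modeling. The library, sample responses, and parameters are illustrative assumptions, not the specific tools named in the episode:

```python
# Minimal sketch: surfacing recurring concepts (topics) in open-ended survey
# responses with TF-IDF + NMF. The library, data, and parameters are
# illustrative; real qualitative-analysis tools are far more sophisticated.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "The consent form was confusing and far too long",
    "I worried about how my data would be stored and shared",
    "Data sharing with third parties concerns me",
    "Long consent documents made me skim instead of read",
    "Questions about who sees my data felt invasive",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(responses)

# Factor the response-term matrix into two latent "concepts".
nmf = NMF(n_components=2, random_state=0).fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top_terms = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"Concept {i + 1}:", ", ".join(top_terms))
```

A researcher would still review the extracted terms and label the concepts; the tool only speeds up the first pass over the data.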

Darren: What are some ethical considerations to using AI technology in a research setting?

Chirag: At the very least, it’s a helping hand. For a lot of things, whether you’re doing quantitative research or qualitative research, whether your research involves human subjects or working with log data, invariably there are tools that could help you, both in cutting your costs and effort and, in some cases, in really being able to identify things, patterns and concepts, much more effectively. And so, I think currently it’s really important for researchers to be at least aware of some of these technologies. I mentioned a couple of places where you can use these tools to extract concepts. I’ve personally joined projects where people didn’t know that you didn’t have to do this manually, that you could actually take advantage of some of these functionalities, often built into larger tools like Power BI or Qualtrics or SurveyMonkey. Other times there are plugins or separate tools that you can run. So, not knowing that, essentially they were wasting a lot of resources and a lot of time and slowing down the research process.

So, first, it’s important that all kinds of researchers learn about these tools, even qualitative researchers, because often they think that these tools cannot really help them, that this is not for them, that it’s only for quantitative work. I think that’s actually a misconception. These tools can really help with any kind of research I’ve seen, even ethnographic work. So, it’s very important that researchers actually think through this, learn about this, and then really start applying these tools as more of a helping hand. They’re not going to replace researchers. Often there is this concern that these things are just going to replace us; I don’t think that’s happening anytime soon. They are really a helping hand. In fact, I would even warn against overusing them. It’s important that as researchers we take responsibility for what we put out there. That means even if you have a fantastic tool that does all this, say, text processing or image processing for you, we still have a responsibility to validate some of these things and not just let it run on autopilot.

Two messages. One, learn about these tools; know that they exist so that when the time comes you know where to look them up. And two, don’t overuse them, because you are still responsible for the research you’re putting out. As I was saying before, when you’re using AI technology, it’s important to be mindful about how you’re using it. So typically, the ideal scenario is that you’re using it to help you do some of the chores that would otherwise take up too much manual effort, effort that maybe is not the best use of the researcher’s time.

So for instance, you may be using some AI tool for screening purposes, where you’re trying to figure out who are the right participants for your study. You may be using these tools to detect some noise and weed it out. There are all kinds of things at different stages of your research that you can apply this to. It’s important that these things are well understood and documented, because many of these tools are known to have biases. So for instance, if you are using some tool for either screening or recruiting, we know that in many situations these tools have certain biases toward or against certain genders, races, or ethnicities. Blindly using these tools without understanding these things is not only a bad way to do the research, but there are ethical implications here. Because again, just because you’re using some tool doesn’t mean you don’t bear the responsibility for the decisions that the tool makes and you end up implementing.

We have to understand that when we are using different AI technologies for making decisions and building models, there are biases in our data sets, and some of these techniques are more prone than others to the biases stemming from the way we collect the data and the way we present the data. So again, it’s our responsibility to understand this better and see which algorithms or techniques we’re using and how prone they are to picking up these biases.

A simple example: many classifiers, if you are evaluating them based on their overall accuracy, may have a bias against minority classes, because they can achieve higher overall accuracy by doing really well on the large classes and really badly on the small classes. And so, if you’re not careful about this, you could end up creating an outcome that’s clearly unfair and inequitable, and then you’re responsible for it. So I think, before applying these technologies to any part of the research process, it’s important that researchers understand their limitations and then have some process in place to mitigate some of these challenges, some of these problems. So, I’m not saying that we shouldn’t use this technology, but be aware of its limitations and be prepared to act to mitigate these issues.
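This accuracy pitfall is easy to demonstrate. Below is a minimal sketch in Python using scikit-learn and synthetic imbalanced data (the library, model, and data are illustrative assumptions): overall accuracy can look strong while per-class metrics reveal how the minority class actually fares.

```python
# Minimal sketch: overall accuracy can mask poor performance on a minority class.
# scikit-learn and the synthetic data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, balanced_accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Imbalanced data: roughly 95% majority class, 5% minority class.
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Overall accuracy is inflated by the dominant class...
print("Overall accuracy:", accuracy_score(y_test, pred))
# ...so also check balanced accuracy and per-class precision/recall.
print("Balanced accuracy:", balanced_accuracy_score(y_test, pred))
print(classification_report(y_test, pred, target_names=["majority", "minority"]))
```

The design point is the guest’s: evaluate per class, not just in aggregate, before trusting a model’s decisions about people.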

Darren: Are there special informed consent considerations when using AI within research?

Chirag: When we are doing informed consent, of course, ethically it’s important that we are providing full transparency to our participants, to anyone that we are recruiting, to our stakeholders. It gets a little tricky with AI systems. I mean, it depends on how you use them. For instance, if an AI system is used for deciding who gets recruited, or who gets paid how much, or whether somebody’s response is correct or expected, that’s a different thing than a human being making those decisions. When a human being makes a decision, we are able to document that and put it out there. So we can say, “Here are our inclusion criteria, here are our exclusion criteria, here’s the reason why…” For instance, when we do some studies with a crowdsourcing service like Mechanical Turk and we decide not to pay somebody after they have done the task, we have a reason for that: here are the criteria we had for the responses, and you haven’t met them, and that’s why we are rejecting yours.

When you’re using AI technology, often these things are not as clear, because often these systems can be black boxes. So, you have a classifier of some kind that takes all this input from the users and then spits out some decision that tells you whether they should be recruited or not, whether they should be paid or not, or how much they should be paid. And so, it’s important that you inform participants of this before they agree to something like this, because they will be impacted by the decision that your AI technology makes, not just you. If they have a question, they’re going to ask you, and you will be responsible for explaining to them on behalf of the AI system that you used.

The other thing that I’ve seen happening is with things like the Wizard of Oz method, where you’re pretending that there is this amazing AI system that the user or the participant is using, but essentially it’s just you behind the scenes making things up and giving these responses. I feel, ethically at least, that we have some responsibility to debrief the participant afterwards. Obviously, for design purposes, I can’t tell them beforehand that they’re going to be using this fake AI system, because that would defeat the purpose of doing the study. Sure, it’s not part of my usual informed consent when signing them up, but then I would tell them afterwards. And this is also true of other forms of deception that some methods require: maybe you can include them, for the purpose of the method, in the informed consent beforehand, but you want to make sure that, obviously, this deception is not going to cause any real harm to the participant and that you tell them about the deception afterwards.

Finally, privacy is a big issue in all of these things. It’s always an issue, but especially when you are using AI systems that tend to be black boxes, that take different kinds of data and have these internal processes that use the data for making decisions. How do you convince, how do you inform, your participants about how you’re going to protect their information? That even though the system is going to anonymize the information and use it in aggregated form, the decision that comes out, whatever the outcome, is still not going to be identifiable to the participating user? So I think there are some extra considerations when using an AI system when it comes to informed consent. And primarily, this stems from the lack of transparency in AI systems and the possible chances of bias or deception while using them. The users need to be made aware at some point. In some cases you can’t do it beforehand, but you certainly should do it after.
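The episode doesn’t prescribe a specific anonymization method, but one common way to sanity-check whether “aggregated” data remains non-identifiable is a k-anonymity style test over quasi-identifiers. Below is a minimal sketch in Python; the method choice and the records are illustrative, not something the guest names:

```python
# Minimal sketch: a k-anonymity style check on quasi-identifiers before
# releasing results. k-anonymity is an illustrative choice, not a method
# named in the episode; the records are invented.
from collections import Counter

# Each tuple holds quasi-identifiers that, combined, could re-identify someone.
records = [
    ("30-39", "WA", "F"),
    ("30-39", "WA", "F"),
    ("30-39", "WA", "F"),
    ("40-49", "OR", "M"),  # unique combination: a re-identification risk
]

def satisfies_k_anonymity(rows, k):
    """Return True if every quasi-identifier combination appears at least k times."""
    return all(count >= k for count in Counter(rows).values())

print(satisfies_k_anonymity(records, k=3))  # False, because one row is unique
```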

Darren: What else should we know about AI in a research setting or any closing thoughts for us?

Chirag: Yeah. It’s an exciting and scary time, at the same time, to be here. I don’t know if this is the right analogy or not, but I find myself like I’m on a Möbius strip, where you kind of keep going in circles and you’re on this side or the other side, or there is no side, there is just this continuum, in a way. I feel like we are building these tools, this technology, that shape our research and our thinking. And that research and that thinking are influencing the kinds of tools we could build. So there’s this kind of circle going around, where we are building these things and being affected by them at the same time. And this circle is going to continue. And it has only intensified recently.

The things that these tools, these AI technologies, are able to do have certainly helped us open up some directions that had been hard. They’ve shown great strides in medical research and in a lot of the vision and natural language domains. And as this happens, it changes what else we could do with it. So, there are problems that we’d been stuck on for a long time, and now suddenly they’re no longer problems. And it’s sort of like we were not ready to even think about, “What if that’s not a problem now? What else would we do?” It’s just moving so fast, and that changes what we can build next.

I think it’s a fascinating time, and I’ll go back to what I said earlier: it’s both exciting and scary at the same time. The kinds of things we’re able to do are fantastic, but I know it also generates this anxiety in many of us, for different reasons, because the pace is so fast that many researchers and educators are not able to keep up with it. And there is this fear of becoming obsolete, even within a matter of months, not years or decades. But how we act, what we do now, also defines what will happen next.

I want to invite, not threaten, but actually invite everybody working in different fields to help shape this AI. AI is no longer exclusive to computer science or information science or any specific field. AI, for better or worse, has really gone mainstream in all the fields. And the people working in all those fields have the power and ability, and I think the responsibility, to influence what AI could and should do. So, I hope everybody understands and appreciates that opportunity and doesn’t just take a backseat or get sidelined by what this technology is doing.

Darren: Chirag, thank you for joining me today.

Chirag: My pleasure, Darren.

Darren: Be sure to follow, like, and subscribe to On Research with CITI Program to stay in the know. If you enjoyed this podcast, you may also be interested in CITI Program’s other podcasts, On Tech Ethics and On Campus. You can listen to all of CITI Program’s podcasts on Apple Music, Spotify, and other streaming services. I also invite you to review our content offerings regularly, as we are continually adding new courses and webinars that may be of interest to you. All of our content is available to you anytime through individual and organizational subscriptions. You may also be interested in CITI Program’s AI and Higher Education: An Overview webinar. Please visit CITI Program’s website to learn more about all of our offerings.

 


How to Listen and Subscribe to the Podcast

You can find On Research with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2112707.rss” into your podcast app.




 


Meet the Guest

Chirag Shah, PhD – University of Washington

Chirag Shah is Professor of Information and Computer Science at University of Washington (UW) in Seattle. He is the Founding Director for InfoSeeking Lab and Founding Co-Director of Center for Responsibility in AI Systems & Experiences (RAISE). He works on intelligent information access systems with focus on fairness and transparency.

 


Meet the Host


Darren Gaddis, Host, On Campus Podcast – CITI Program

He is the host of CITI Program’s higher education podcast. Mr. Gaddis received his BA from the University of North Florida, his MA from The George Washington University, and is currently a doctoral student at Florida State University.