
On Tech Ethics Podcast – Fostering AI Literacy

Season 1 – Episode 31 – Fostering AI Literacy

Discusses the importance of fostering AI literacy in research and higher education.

 

Podcast Chapters


To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.

  1. Introduction of Guest Host (00:00:03) The host introduces Alexa McClellan and her background in research integrity.
  2. Introduction of Sarah Florini (00:00:52) Sarah Florini is introduced as an expert in technology ethics and AI literacy.
  3. Discussion on AI Literacy (00:03:30) Sarah defines AI literacy and its importance in understanding technology’s social and economic contexts.
  4. Current Use of AI in Research (00:05:20) Sarah outlines how AI is currently being utilized in research and higher education.
  5. Need for AI Literacy Across Fields (00:07:14) Discussion on the uniform need for AI literacy across different academic fields.
  6. Key Issues with AI Tools (00:09:41) Sarah highlights pitfalls of using AI tools, emphasizing the need for critical evaluation.
  7. Role of AI Literacy in Overcoming Issues (00:13:21) Exploration of how AI literacy can mitigate challenges associated with AI technologies.
  8. Starting Points for AI Literacy (00:15:47) Suggestions for educators and learners on how to begin increasing their AI literacy.
  9. Skepticism Towards AI Products (00:17:30) Advice on maintaining skepticism regarding claims made by AI companies about their products.
  10. Additional Resources for Learning (00:19:50) Sarah recommends resources for deeper understanding of AI and its implications.
  11. Final Thoughts on AI (00:22:53) Sarah emphasizes the importance of skepticism amidst the hype surrounding AI technologies.
  12. Thank You and Closing Remarks (00:24:44) Sarah expresses gratitude for the opportunity to participate in the podcast episode.
  13. Exciting News About New Host (00:24:46) Daniel announces Alexa as the new host of the “On Research” podcast, highlighting her upcoming conversations.
  14. Looking Forward to Future Conversations (00:24:46) Daniel concludes by expressing anticipation for future discussions on tech ethics.

 


Episode Transcript


Daniel Smith: Welcome to On Tech Ethics with CITI Program. I’m joined today by my colleague and guest host Alexa McClellan. Alexa, do you want to tell us a bit about yourself?

Alexa McClellan: Hi, Daniel. Thanks so much for the invitation to join you today. I am the Associate Director of Research Foundations here at CITI Program, where I develop and manage content in core areas such as human subjects research, animal care and use, responsible conduct of research, and conflicts of interest reporting. I joined CITI Program in September, but my career in research integrity spans over 15 years in higher education, where I’ve worked to promote ethical and responsible research practices. I’m passionate about fostering integrity in research and supporting the individuals and institutions that uphold these vital standards.

Daniel Smith: Thanks, Alexa. So on that note, our guest today is Sarah Florini, who is an associate director and associate professor in the Lincoln Center for Applied Ethics at Arizona State University. Sarah’s work focuses on technology, social media, technology ethics, digital ethnography, and Black digital culture. Among other things, Sarah is dedicated to fostering critical AI literacy and ethical engagement with AI and machine learning technologies. She founded the AI and Ethics Workgroup to serve as a catalyst for critical conversations about the role of AI models in higher education. Today we are going to discuss the importance of fostering AI literacy in research and higher education.

Before we get started, I want to quickly note that this podcast is for educational purposes only. It’s not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guests. On that note, welcome to the podcast, Sarah.

Sarah Florini: Hi. Thank you for having me. I’m excited to be here.

Daniel Smith: Yeah, it’s a pleasure to have you on. So just to get started, can you tell us more about yourself and your work at Arizona State University?

Sarah Florini: Sure. I mean you covered a lot of it in that very lovely intro that you gave me, but my work falls into what I might broadly call critical technology studies. So I’m interested in the ways that technology interacts with society and culture. My work has focused on race heavily, but I also do tech ethics, which I come to via research ethics as an ethnographer. So I’ve done a lot of work around research ethics and translating ethnography into digital spaces.

And so, now that we’re in this moment where AI is looming large, I’ve transitioned into what seems like a pivot, but actually isn’t, because a lot of the issues that come up with AI, things about data, algorithms, platforms, ethics, privacy, all of that are pretty much in the purview of my larger discipline.

And so, once AI really became a major issue, particularly here at ASU, where we’re very on the cutting edge of exploring what AI can mean for universities, for higher education, for research, I really became heavily involved in trying to be part of those conversations.

Alexa McClellan: That’s great. Thank you so much, Sarah. I’m curious, how do you define AI literacy?

Sarah Florini: It’s a tricky question. So the first thing I will say is that often when people talk about literacy, they really are referring to skills and competencies. How do we use these things? What are the skills that we need? I would argue that literacy is a much broader concept, that we’d be well-served to think about it as understanding these technologies in their social, cultural, and economic contexts. I say economic because a lot of AI tools are products. They’re part of our economy.

That contextualization is really important for understanding what these tools are, what they aren’t, what they can do, what they can’t do. What are the benefits and the drawbacks? For example, there’s a lot of concern right now about how resource-intensive a lot of AI models are and the contributions to climate change.

But it’s a really tricky question because I should just start by saying that I believe that AI is a little bit of a meaningless term, which makes these conversations difficult. It’s more of a branding term. If you look at it even now, all of these different technologies, everything from generative AI like ChatGPT that’s making language, to AlphaFold, which is Google’s project looking at protein folding of human proteins, all of these things get lumped under the category of AI, and actually a lot of technologies that we used to call machine learning, that we used to call algorithms are now getting called AI. So AI literacy is tricky because when you say AI, a lot of people will think of a lot of different things.

Daniel Smith: So with that in mind, and given that the focus of our conversation today is AI literacy in research and higher education, can you just give us a quick overview of some of the main ways in which AI is currently being used in those areas?

Sarah Florini: Sure. I think I’m probably going to persist in this annoying insistence that AI is a difficult term. But when we’re thinking about research in higher education, again, it depends on what you mean by AI. So there is a long history of using AI, machine learning technologies to process large amounts of data. So researchers are definitely still pursuing that as new AI models come out.

We’re seeing with social sciences the rise of software AI models that will help you label and code. So, for example, social scientists often have a huge amount of data that is taken from whatever social situation that they are studying, and then they code it according to different themes, concepts, ideas, and that is hard to do at scale with just humans. So there are AI models that are allowing for that kind of coding to take place on much larger sets of data.

Then, of course, there’s generative AI, and the uses there are much less clear. In particular, large language models like ChatGPT are being explored at universities. It is quite controversial. People are experimenting with how do we use it in teaching? Can we use it for personalized tutoring? Can we teach students to use it as a writing aid? Some researchers are even using it in writing their own papers, their own research for publication.

All of this is quite controversial, and there’s a wide range of opinions. It is still very much field by field and discipline by discipline on what is considered acceptable, and it’s still very much being negotiated.

Daniel Smith: So do you feel that there’s a greater need for AI literacy in certain fields or among certain groups, or is it just across the board that AI literacy needs are the same?

Sarah Florini: I think that the AI literacy needs are the same depending on what kinds of tools you’re talking about. If we’re talking about generative AI, I think that those needs are pretty much the same, that people need to really understand that these tools have severe limitations.

One of the ones that people often speak about is something called hallucinations. I don’t love that term. I think it anthropomorphizes the technology. I tend to use the term fabrication, but these large language models are really predicting what is the most likely statement to be made in response to the prompt that you give them.

So they’re going for convincing, not necessarily accurate. And so, sometimes they will just make things up that are completely untrue. They are renowned for making up citations. And so, you see again and again that people will deploy these tools, and then there will be factually incorrect information, or papers that don’t exist will be put into the bibliography. I think that there’s a growing awareness of this, but I think that not everybody is necessarily aware.
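To make the prediction-not-verification behavior Sarah describes a little more concrete, here is a minimal, purely hypothetical sketch: a toy “model” that is nothing but a made-up table of next-word probabilities and always picks the most likely continuation. It does not reflect how any specific product is built; it only illustrates why output can read as fluent and confident without ever being checked for accuracy.

```python
# Toy illustration (not a real model): the "language model" here is just a
# lookup table of hypothetical next-word probabilities. Always picking the
# most probable continuation yields fluent-sounding text whether or not it
# is true, which is how plausible but fabricated citations can appear.

toy_model = {
    ("according", "to"): {"smith": 0.4, "the": 0.35, "jones": 0.25},
    ("to", "smith"): {"(2019)": 0.6, "et": 0.4},
}

def most_probable_next(context, model):
    """Return the highest-probability continuation for a two-word context."""
    candidates = model.get(context, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Generate a couple of words greedily from a seed phrase.
words = ["according", "to"]
for _ in range(2):
    nxt = most_probable_next((words[-2], words[-1]), toy_model)
    if nxt is None:
        break
    words.append(nxt)

# Prints "according to smith (2019)" -- convincing-looking, never verified.
print(" ".join(words))
```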

I also think that there needs to be a little bit more clarity on the environmental impact. Large language models are large. It’s there in the title, and that right now, the approach to improving them is to continue to scale, to continue to make them bigger. More data, more computing power, which means more energy and means more water often to cool. So there are real concerns about the ways that these models might be contributing to climate change.

And so, I think that part of AI literacy is people asking themselves, what is the benefit and is it worth how resource-intensive these tools are? I think those are questions that apply to all fields.

Ed Butch: I hope you’re enjoying this episode of On Tech Ethics. If you’re interested in important and diverse topics and the latest trends in the ever-changing landscape of universities, join me, Ed Butch, for CITI Program’s original podcast, On Campus. New episodes released monthly. Now back to your episode.

Alexa McClellan: Sarah, you talked about the conversations and debates surrounding how to utilize AI as a tool. I’m curious what you think are some of the key issues or pitfalls that people should be aware of when they’re using these tools in their work or their studies.

Sarah Florini: Yeah, that’s a great question. I really hate to beat this dead horse, but I think one of the key pitfalls is that people need to realize that AI is not one thing and that this is important because a lot of AI tools are products, and they’re being made by companies who want you to use the product. And so, you see the imprecision of the language around AI, all of these different things being called AI, everything from a kind of janky chatbot to Google’s AlphaFold project, which has been quite successful and made really, really important contributions to science. Both of those things get called AI, and the sophistication and the contributions of models like AlphaFold get imputed to all things that are being labeled AI.

So people really need to understand that they need to ask questions about what they’re getting when somebody offers them an AI tool. I also think that we need to demystify a little bit and embrace the idea that maybe using AI is not going to give you a better result. It might be worse, particularly given the hallucinations of generative AI, but it also might not be that useful.

One of my favorite examples of this is DeepMind. There was a lot of reporting about a year ago that DeepMind created 380,000 new materials for material scientists to look at. It’s like, okay, 380,000 new materials. Now what do you do with a list of 380,000 new materials? Somebody has to go spend time going through that, figuring out what is useful. So have you really helped yourself by using this model?

Then, of course, there was reporting that said, “Oh, two dozen of these materials are completely novel, never heard of before, never known.” Then the chemists got involved and said, “Oh no, actually that’s not true.” And so, now there’s some controversy.

And so, these tools can be profoundly useful, but I think that we also need to really ask ourselves, what does this thing do for me and is it actually going to make my project easier, my life easier?

The other thing that I think is a pitfall that people should be aware of, again, with generative AI, Tressie McMillan Cottom just published an op-ed in the New York Times that I love, where she referred to AI as mid, that everything is just this middling technology. I think that’s a great way of thinking about generative AI because it churns through language and it’s designed to use probabilities to figure out what comes next. You give it a prompt and then it thinks, “Meh. What is the most likely response to this prompt?” So it’s going to be the thing that is most probable, sort of most mid. It’s not going to be particularly innovative, particularly cutting edge. It’s just going to give you a general, okay, average response.

And so, if you’re a researcher using these tools, that’s an implication that you want to think about, along with the hallucinations and, of course, the environmental toll. And so, I think that those are really the four big pitfalls that I try to highlight again and again in my conversations with folks.

Alexa McClellan: That is such a great quote. I love the description of AI as mid. I think that really positions it in a different light. But as a follow-up, what role do you think that AI literacy might play in overcoming some of these issues that you just mentioned?

Sarah Florini: Yeah. And so, of course I favor my definition of literacy, which is moving beyond skills and competencies and really thinking about these things in context. If you take AI literacy to mean that, then suddenly it helps you demystify. It helps you get outside of the hype of the marketing.

Also, just we’re having this cultural moment where everyone is excited about AI. I think thinking about things in context, thinking about the social, cultural, historical, economic implications really helps you demystify and step away from the hype.

I do think, of course, for AI literacy, you do need to have some basic understanding of how these technologies work, that they’re trained on data and you need to think about where the data comes from and how it gets gathered and labeled. You need to think about the way machine learning works. Some basic overview of how these technologies work I think will be really, really useful.

So in addition to demystifying, I also think understanding the way that generative AI works, its limitations, and the way that it relies on making probable guesses about what is the most likely response to a prompt, I think that’s also going to be something that would really help people overcome any kinds of issues around using these technologies.

One of the things that we’re starting to see some data on is homogeneity in outputs. We’re seeing this with large language models. We’re also seeing it with generative AI that creates images.

And so, I think when people start to understand these things beyond just prompt engineering or skills or competencies or how do I use these things, it really empowers them to think about when and how they want to use them and when and how they don’t want to use them. I feel like part of literacy is being empowered to make choices about what is going to be the most effective tool in a situation, and sometimes that is to not use AI.

Daniel Smith: So if I’m approaching this from the perspective of an educator or a learner for that matter, and I’m just getting started in trying to increase my AI literacy, with all of those things that you just mentioned in mind, where would you suggest that I start and then where might I go from there?

Sarah Florini: So some things that maybe you could focus on. I do think that having a basic idea of how different kinds of models work and their limitations is useful, that pretty much everything that we are putting in the category of AI right now is some kind of machine learning or neural network technology that is trained on data.

So you can start thinking about where did that data come from? Whose data is it? What viewpoints are represented there? You can start thinking about the mechanics of how these things create outputs. Are they sorting through data? Are they looking for correlations? Are they making probabilistic determinations? You can start to get an idea of how these things work and how they might be beneficial to you or not.

One of the things that I do is I emphasize that these are products, and so that we need to be careful to think about when we are being marketed to and when we are getting information that is maybe not motivated by selling us a product or getting us to use a product. So I think that that’s a really important thing that educators and learners can start to think about.

Daniel Smith: A follow-up question to that, speaking of products, is that a lot of these AI companies are now rolling out what they call reasoning models, which purport to do like PhD-level analysis and things like that. So do you have any suggestions for how people think about these different levels of AI products that are being marketed to them?

Sarah Florini: Yes, I do, and that recommendation is skepticism. One of the things that happens is a new tool, a new product is rolled out. A lot of the specificity, the specifics, the technical details of that product are not made public obviously for business reasons. OpenAI doesn’t want to show us how they do what they do because then other people could copy them, their competitors.

And so, then a new thing comes out and there’s a lot of claims about it, about what it can do. Then six months to a year later, after the researchers have had a little time to get into it, to audit it, to reverse-engineer it, all of the limitations start coming forward. But by the time that happens, there’s a new product and a new thing that everyone is breathlessly excited about.

And so, as far as … I have seen OpenAI’s o1, the new GPT model that they say is a reasoning model, but I haven’t had a lot of time to dig into that. But the claim of PhD-level education or PhD-level thinking is something that AI researchers have been making for decades. I mean they were saying this about chemistry models in the mid-20th century. So this idea that something is a PhD-level intelligence, this is not a new claim, and most claims that are being made today are not new claims.

Of course, yes, the technology is more advanced now than it was in the 1960s and ’70s, but every time there is a new thing, it is the thing. It’s now we have real intelligence, now we have PhD-level intelligence. We should be skeptical. Before you decide that the new model is just the best thing ever and is as smart as your graduate student, give it six to nine months and let the researchers work and poke at it and see what its benefits and limitations are, because it probably is more advanced than the previous model, but maybe it’s not able to do all of the things in exactly the way that we’re being told.

Alexa McClellan: So, Sarah, if our listeners want to dig a little deeper into this topic, are there any additional resources where our listeners can learn more about AI and the issues that you discussed today?

Sarah Florini: Sure. I mean if you want a basic technical understanding of the things that I mentioned, like data, training, how these models work, that’s pretty easy to find. A lot of organizations have put out really great explainers that you can find quite easily by searching for them. I would just recommend making sure that you’re getting it from a university, a nonprofit, people who are researchers. Don’t look for that information from the companies who are making the technology.

But beyond that, because my investment in literacy goes beyond just the technical aspects, I have a range of resources that I recommend to help people start understanding those social, cultural, historical, and economic intersections with technology.

One is the Distributed AI Research Institute, DAIR, D-A-I-R, and they’re doing a lot of really great work about technology and AI and thinking about it outside of commercial markets. So their website has a lot of great resources, and they’re definitely people to keep your eye on.

They also have this great podcast called Mystery AI Hype Theater 3000. If you or any of your listeners remember Mystery Science Theater 3000 where they showed the old movies and made fun of them a little bit, that’s the inspiration.

And so, it’s a little bit cheeky, it’s a little bit goofy, but they really do a great job. It’s Emily Bender and Alex Hanna. These are both people who have deep technical knowledge in the field of AI, but also have training in linguistics and sociology. And so, they are great at going through the latest tool, the latest story, the latest whatever, and really deconstructing it in a fun way and explaining, “Here’s what the hype is and here’s what it can actually do.”

I would also recommend Paris Marx’s Tech Won’t Save Us, which is another podcast. They’ve covered a lot of topics you can go back through, and you can find topics that you’re interested in, like how intensive data centers are or the economic models of AI companies, and find a lot of good information there.

I recommend these resources because I think leveraging into those broader contextual ideas is a harder thing for people to do. But I think that it is really, really important for literacy, and these are interesting and fun resources. You can just pop on your headphones while you’re doing the dishes.

Alexa McClellan: Those resources sound really great. I’m going to look at the Mystery Science Theater one. I was a fan of Black Drone for sure. So sounds interesting. So thank you so much for those additional resources.

Daniel Smith: Yeah. We’ll absolutely be sure to include links to those in our show notes so that our listeners can check those out. So on that note, I know we’ve covered quite a bit of ground in our short time here today, but do you have any final thoughts that we’ve not touched on?

Sarah Florini: I guess I’ll just end by saying that I do recommend skepticism, and maybe I’ve seemed a little negative on AI throughout this. I do want to say that there are a lot of really impressive advances of technologies that are happening, and we are in an interesting moment around these technologies.

And so, I don’t want to take anything from that and from the people who are making these models. But I do think that we are also in a real profound moment of hype and that it’s important for all of us, but particularly researchers, educators, and students to develop the ability to see through that hype. I find that depending on where you get your information, the folks that are actually making these tools, making these models who are machine learning and AI experts tend to, when you talk to them, have a much more pragmatic view of these technologies.

It really is in more popular culture and in marketing that you’re getting this idea pushed that these are basically miracle everything machines that are going to solve all of your problems and be able to do everything for you. And so, I think that it’s important for us to try and shift who we’re listening to to the people who are a little more measured and have a better understanding of these technologies.

I think if we can do that, then everyone will be much more well-positioned to understand what AI is going to be useful for them and what maybe is going to give you 380,000 materials that you as a poor material scientist now have to go through, and it didn’t necessarily help you. So that’s where I would end.

Daniel Smith: I think that’s a wonderful place to leave our conversation for today. So thank you again, Sarah.

Sarah Florini: Thank you for having me. It’s been a great time.

Daniel Smith: I also invite everyone to visit citiprogram.org to learn more about our courses, webinars, and other podcasts. Of note, you may be interested in our Essentials of Responsible AI course, which covers the principles, governance approaches, practices, and tools for responsible AI development and use.

Before we wrap up, we have some exciting news to share. Alexa, who’s been with us on this episode, will be taking over as the new host of On Research. If you’ve enjoyed hearing from her today, be sure to check out On Research where she’ll be leading insightful conversations on the latest in the research world. With that, I look forward to bringing you all more conversations on all things tech ethics.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when episodes are newly released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.





Meet the Guest


Sarah Florini – Arizona State University

Dr. Sarah Florini is an Associate Professor of Film and Media Studies and the Associate Director of the Lincoln Center for Applied Ethics at Arizona State University.


Meet the Host


Daniel Smith, Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.


Meet the Guest Co-Host


Alexa McClellan, MA, Associate Director, Research Foundations – CITI Program

Alexa McClellan is the Associate Director of Research Foundations at CITI Program. She focuses on developing content related to academic and clinical research compliance, including human subjects research, animal care and use, responsible conduct of research, and conflicts of interest. Alexa has over 17 years of experience working in research administration in higher education. Before joining CITI Program, Alexa served as the Assistant Director of Research Integrity at The University of Tennessee at Chattanooga. Alexa received her MA in English from The University of Tennessee at Chattanooga and her BA in English from Southern Adventist University.