Season 2 – Episode 5 – AI in the Classroom (Part 2)
This episode is part 2 of our conversation about AI in the Classroom. The discussion continues as we examine AI’s ethical and bias implications and consider what the future might look like in the classroom and in research.
Episode Transcript
Ed Butch: Welcome to On Campus with CITI Program, the podcast where we explore the complexities of the campus experience with higher education experts and researchers. I’m your host, Ed Butch, and I’m thrilled to have you with us today. Before we get started, I want to quickly note that this podcast is for educational purposes only and is not designed to provide legal advice or guidance. In addition, the views expressed in this podcast are solely those of our guests.
Today, we continue our conversation with Dr. Mohammad Hosseini and Dr. Michał Wieczorek around AI in the classroom. We pick up with Dr. Wieczorek talking about the ethical and responsible use of AI.
Dr. Michał Wieczorek: The ethical issues, of course, are connected to the challenges that we discussed, because one of the ethical issues is how to balance the needs, duties, and obligations of different stakeholders. So the company’s interests need to be taken into account, but arguably more important is that students need to recognize that AI in education actually benefits them.
For example, education is an interesting case because the whole point of this process is to change the beliefs and knowledge of the students, and we usually have respected and trustworthy professionals engaging in this kind of influence. We have teachers who have gone through years of university, who have years of professional experience, and who have professional codes of ethics, et cetera, that tell them how to exert influence over students in a way that allows them to grow without manipulating them.
How about an AI system, where the people who develop it might not be as aware of the kinds of ethical breaches related to autonomy as teachers are in the context of education? And of course, the influence of an impersonal system such as an AI is different from the influence of a trusted role model such as a teacher. So this is the balancing act that we have to think about.
What is an appropriate influence for a human to have over a child, especially a minor, and what is an appropriate amount of influence for a potentially biased AI system to have over the same child? Because education is also a major engine of social change. If you change what is being taught in school and how, the shape of society years from now will also change drastically.
So we also have to factor in how AI is going to impact civic education and democratic education. Is it going to shape the kinds of citizens who rely on cooperation, for example, which is frankly the kind of society that I would want? Or maybe it’ll promote self-interested, consumer-oriented citizens who might be more beneficial for the bottom line of a select few corporations. And similarly, what about issues related to shared problems that we have to tackle? Climate change is a massive crisis that we’re facing, and the way AI is going to teach about issues surrounding climate is going to change how people respond to it.
At the same time, educational AI is not benign in this context, because there is more and more research on the enormous energy expenditure required to train a model and then later use it. All the data that is needed to train the AI, as Mohammad explained, has to be stored in data centers, on computers that need to be powered up and cooled, and that takes energy. If you want to store that data, if you want to process that data, you need computers, which are built using rare minerals that have to be extracted from somewhere. And they’re very often extracted in developing countries, by people who are not paid fairly and who are facing the most dire consequences of environmental destruction. So this is also the kind of balancing act we have to keep in mind when we’re talking about deploying AI in schools.
It might help a bunch of students in well-developed countries such as the US, such as Ireland, such as Poland, France, and many others, but what are the wider planetary costs of these potentially great benefits for the few? Other issues connect, for example, to the question of transparency and interpretability. AI is an engine for making decisions, so it will, for example, grade a student’s essay according to specific criteria, or it might generate some examples and images according to specific criteria.
But I would say more often than not, an average user of AI tools has no idea how AI arrives at such decisions. And what if you’re being treated wrongly? With a human teacher you can always ask; you say, “Mr. Smith,” or, “Mrs. Johnson, why did I receive an F on this essay? I thought I did everything correctly.” And there’s the question: can you get this kind of feedback from AI? Is it feedback that you’re going to be able to understand? Are the metrics and criteria used ones that will be understandable to an average student, for example?
And more often than not, companies have an incentive to keep this hidden, because they’re competing with many other companies, and if they made all their code publicly available, along with all the basis for the decisions that the AI is making, then the competition would be free to swoop in and just use it or improve on it in their own models. And also, even if we do disclose everything, there is no guarantee that we’ll be able to make sense of it. Because you can see with so many other technologies, or just knowledge in general, that sometimes just laying out the facts is not enough for people to make sense of them. There is an issue of visual literacy and general literacy that is not equally distributed across society. So different people will be able to learn different things from the same kind of information.
And that leads me to, I think, the final challenge that I would like to raise here, the final issue, which is the issue of accountability. I already said that teachers are the first point of contact for students and parents, but if something goes wrong, let’s say an essay is misgraded, or a racist image is generated by an AI textbook and displayed to a student, causing distress, impacting their confidence, and causing all kinds of psychological harm, who’s responsible for that?
Because as a parent, your first instinct might be to go to the teacher and say, “Look, in your class, my child was shown something very disturbing and you should address that.” Well, the teacher might not be able to do it, first of all, but at the same time, the teacher sometimes has no control over what the specific AI tool is going to do. Are you going to accuse the company? Are you going to accuse the developer? What about cases where you are based, let’s say, in Europe, but your AI tool was developed in the US or in China, and you have different legal jurisdictions in play? Just trying to get accountability for even simple, honest mistakes would be a nightmare for anyone involved.
So those are the kinds of issues that we need to address and discuss before AI is fully deployed in classrooms.
Ed Butch: Wow, some fascinating points there. I mean, the environmental and cultural issues are things that I never would’ve even thought about, so that’s great. And honestly, I think you’ve probably made the best case that I’ve heard against a fear many people have, that AI will fully replace people. That’s not going to be the case, because you still need the teachers there to have those conversations and to make those points for the students as well.
Dr. Michał Wieczorek: I think the important thing to remember is that AI is very good at a very specific, narrow set of tasks, and often better than us. Like I already mentioned my graphical inaptitude: it’s much better at generating images than I am, and there are so many other tasks where AI is better.
But I think the general argument in favor of us humans is that we are decent at a wide number of things and we come at all kinds of tasks holistically, which AI definitely cannot do yet, and in my opinion, won’t be able to do in the foreseeable future, just because of how it’s developed.
Ed Butch: Definitely, definitely. Looking ahead, as AI obviously becomes more popular and gets incorporated into classrooms, for all of the educators that are listening out there, what are some of the skills and knowledge areas that they really should be focusing on developing?
Dr. Michał Wieczorek: I think media literacy is going to rise in importance. Like 15 years ago, if someone said that they were doing media studies as a major, they would usually be laughed at: that’s a useless major, you’re just going to watch TV shows and read newspapers, that’s not a real field of study.
But the recent misinformation and disinformation crisis with social media has shown that this kind of media literacy is important, and AI is also a form of media, a form of transferring information. So teachers will need to teach basic media literacy to students as applied to different kinds of media. And they will also themselves have to deal with data of different kinds. They will have to deal with textual data, visual data, video data, and numerous technologies that will be employed in the classroom. And they will have to make sense of how these are all connected and how they impact the students’ experience in the classroom and in the learning process. They will also need to assess how AI works, which, as I already said, is a challenge. And Mohammad also mentioned that they will need to understand why AI might be biased in a specific instance, why it doesn’t work as intended, and how to navigate those issues.
So on that note, technical skills are also going to be important in the skill portfolio of every teacher, because there’ll be a lot of troubleshooting and a lot of maintenance that might become the responsibility of teachers. If you think about schools in so many countries, teachers are underpaid, the schools are underfunded, and there’s a shortage of teachers as well. Most likely, in many situations, it will be some random teacher who’s tasked with the unenviable job of making sure that all the cables are plugged in correctly, but also that all the updates are installed and the data sets are up-to-date, et cetera. It’s going to be just an enormous amount of work for everyone involved.
So I share Mohammad’s hope that some of the tedious parts of the job might be automated, but that doesn’t mean that those tedious tasks won’t be replaced by other tedious tasks.
Ed Butch: Right. I mean, as you were talking there, I come from a liberal arts background, and I can really see how some of these newer technologies and technology courses could become part of the new liberal arts education that all students take. I think that could be really important for universities.
So when I have experts like yourselves on, I always like to look into the future a little bit, and I know you’ve both mentioned some of these things, but what do you see as some future trends in AI and education?
Dr. Mohammad Hosseini: Sure. I think some of the future trends pertain to digitization. This incorporation and integration of AI in education is, for me, just one of the side effects of the further digitization of whatever it is that we are doing as humans. In all aspects of our work, digitization is a major trend, and I think the integration of AI in education is following that trend as well.
Another trend might be further decentralization of education. This was one of the things Michał and I discussed a few weeks ago: the impact of AI on education when it comes to people who homeschool their kids. AI can have a major impact on those people, because it might make them less reliant on mainstream resources. It might help them use AI in creative ways that they think are appropriate. Of course, that comes with certain side effects that, for instance, at the moment we are not able to forecast, but I think it gives them more autonomy and agency in terms of whatever it is they want to educate their kids with.
It can also help people educate themselves. All those people outside formal education, like adult students who decide to pursue a certain topic later in life, can now use these systems to really educate themselves on whatever topic they want. One thing I can foresee is that educational institutions might decide that instead of being both the provider of teaching and the assessor, they might just be the assessor.
For instance, this is one of the things you see with a lot of language-testing institutions like IELTS, which is connected with the British Council: they don’t teach people how to speak or write or whatever, they only assess them. And I think this could also become the case in formal education, where institutions decide, “I don’t care how you learn what you learn, but I know how to assess this certain topic and I know how to accredit your expertise, and that’s what I’m going to do.” So I think that might be another trend that you might see in this domain.
Dr. Michał Wieczorek: But at the same time, this kind of decentralization might lead to more concerning trends in the sense that you will have those big multinational technology companies developing products that are-
Dr. Mohammad Hosseini: Exactly.
Dr. Michał Wieczorek: … being [inaudible 00:14:48] in thousands of different contexts. And on the other side of the equation you have, for example, individual homeschooling parents or those self-learning people who just later want to have accreditation from a specific university, you will encounter in their own power differentials between individual learners and enormous companies that will control what is being taught, how, and why.
And then there’s also, of course, the question of platformization and standardization. Because we’ve seen that with social media: when it started, we had several actors playing. Some of them just dropped out, and suddenly we are stuck with basically Facebook and Twitter as the two main social media companies. There’s a good chance a similar thing will happen with AI in education. Of course, it’ll allow those companies to scale up their products and develop more and more advanced tools, which might benefit everyone, but it will ultimately limit the choice for us as learners, or consumers, depending on how you frame it.
Dr. Mohammad Hosseini: And in a way, this was definitely the case for centuries. Religious institutions had a monopoly over education for centuries. Once governments became stronger and more centralized, they decided to start formal education and take it away from religious institutions. I think this happened in Europe around the 16th century, in certain parts of Austria and present-day Germany under the Habsburg Dynasty. This has always been the case.
Even now, one can reasonably say that governments have a monopoly over the curriculum, over what kids are learning and what they are not learning. There are major debates here in the US about what kinds of books kids should read at school or what kinds of books libraries should lend kids. And one can also say that at the moment it’s the government, this abstract entity somewhere in the capital, that defines what kids in every corner of the country are learning.
So in a way, the world we are living in right now is not completely free, but I do get your concern that once the reins are in the hands of corporations, then we might be in a greater danger that, again, we cannot foresee right now.
Dr. Michał Wieczorek: And to be fair, governments are getting involved, trying to protect, as you say, their monopoly on education. For example, Sweden recently announced that they’re going to ban tablets and laptops in primary schools and go back to handwriting and books. And I remember seeing recently that the French government is also speaking quite skeptically about AI in education.
So I think in the coming years, we’re going to see more of this rivalry between public and private bodies over who’s going to determine what is being taught and how.
Ed Butch: That’s a really interesting development. I’m sure we could do an entire other episode on that in general: going from the religion-versus-government battle of many centuries ago to today’s government-versus-tech-companies battle over what is actually out there. So that’s really intriguing, definitely.
I can’t wrap up the episode without getting some advice from each of you. So for those teachers, faculty, or researchers out there listening who want to get ahead of the game, what advice do you have for them as they start to consider integrating AI into the classroom or into their research?
Dr. Mohammad Hosseini: I guess one suggestion I have for them is to think back 30 years, to when the internet was just becoming a thing, and imagine themselves as a teacher then, knowing what they now know about the capabilities and possibilities the internet has created. Think about it that way. If you were a teacher 30 years ago, knowing how powerful and how important the internet would become, how would you have acted? How would you have educated yourself? How would you have approached it?
My impression at the moment is that there is a generational gap, and this has always been the case in terms of adopting and learning about technology. But this thing is probably going to be as big as the internet for education. Or at least my cautious self wants to take it that way, because if I don’t, then I might be left behind the pack. I might be one of those people who never learned how to use the internet, or who were so late in the game that they were completely behind and lagging. Once everything was digitized and only available via the internet, they were incapable of catching up because of how big the gap had become.
So my advice is: engage with it, use it, but also don’t forget to engage with schools, families, and children. As Michał was saying, if you are a teacher at, I don’t know, an elementary school and you want to use this in your class, it won’t hurt to ask parents, “What do you think about this? Is it okay if I use this?” Of course, don’t input students’ data into it, and don’t create profiles for students using generative AI just yet, but start small. Start with really small experiments and see how they go. And also involve the parents and the kids: “Hey, kids, we used this new system in class. How do you feel about it? Was it better than what I used to do until yesterday?” These kinds of engagements, keeping everybody who has a stake involved in those decisions, are definitely what I would suggest.
The other thing is educating themselves. There are all kinds of trainings and workshops. The library in the little town I live in, for instance, has all kinds of programs for teaching AI to anyone who’s interested. There are all kinds of workshops and online trainings at different levels about the use of AI in different topics. Just a simple Google search is going to take you down so many rabbit holes about AI in your own very specific context. If you’re a math teacher, if you’re, I don’t know, a French teacher, whatever teacher you are, there is content about it. You just need to search for it and educate yourself.
I also think having conversations about it with peers is very helpful, and forming communities around it. If you are a French teacher, you are much more likely to have a fruitful conversation about the use of AI in your own context with another French teacher, not with me. I am a researcher in ethics and integrity, and I’m much more likely to have a very fruitful conversation about the use of AI in my own context with a peer. So again, peer conversation, I think, is super helpful.
And just think about it like the internet: you don’t want to be left behind. So join the bandwagon, even if it seems tacky or, I don’t know, whatever. I know that it’s now a cliché; everybody wants to talk about AI. Yes, because it is important, it is big, and it is going to change the future. So if you don’t want to be left behind, join the bandwagon.
Dr. Michał Wieczorek: To give a counterpoint and to [inaudible 00:22:06] the discussion a little bit, I would say that my advice would be that there’s nothing wrong with taking it slow and being cautious, and it’s completely fine not to use AI if you’ve looked at the advantages and disadvantages and you’re not convinced. There are different products that do different things, and not every single one of them might be for you as a teacher. And I’m not advocating for going to the server room in your school and burning it down, just for not adopting tools that you’re not sure are doing the thing that you want them to do.
I always tell my students, and when I do public engagement, that you should ask yourself about the goals and priorities you have for incorporating specific tools. Basically: what do you want to accomplish, how do you want to accomplish it, and why do you want to do it? And you can ask similar questions about the AI tool you want to incorporate into your practice: what will it actually accomplish, how will it do so, and why? And you will often learn that the answers to those questions are not exactly the same. The crucial skill in navigating the landscape of new technologies is trying to bring those answers closer together, so that AI, or any other kind of tool, will actually accomplish what you want it to do, and will do so in a way and for reasons that you can stand behind.
So sometimes take it slow, but not in the sense of, “I’m not going to do it. I’m skeptical,” but rather, “I’ll wait and see. I will actually look at those tools and examine them, and I will make a decision when I’m better informed, instead of rushing head-first into the situation.”
Ed Butch: Yeah, great. Thank you both. And Mohammad, I love the reference to the internet, thinking about that from 30 years ago. And Michał, a lot of that really boils down to training yourself, teaching yourself, seeing how it works, and getting comfortable with it, rather than just jumping in and putting it right into your classroom teaching. I think both are fantastic points.
So thank you, both. This has been fantastic information. And before we wrap up, I just want to give you each some time if you have some final thoughts that you want to provide for our listeners.
Dr. Mohammad Hosseini: To go back to what we were talking about, I’m of the view that technology cannot be stopped, especially now that AI is available to anyone. It’s like when desktop computers and laptops entered homes. For a long time, computers existed as mainframes owned by institutions and so on, but there was a tipping point where they became affordable and anyone could have them. We are at that point in the evolution of AI technology. It is in our homes. It is at our fingertips. People with zero programming skills can use AI to program. That is phenomenal. I cannot fathom this technology being stopped.
And on that note, I think it is in our best interest to educate ourselves about it and to try to use it, again, small scale, cautiously, being aware of its limitations and all of that, but to keep using it, because if I don’t use it, I will be behind the pack, and I don’t want to be behind the pack. I don’t want to be that person who’s unable to even talk about generative AI in three or four years’ time. I mean, the Luddites were the losers in the end, and I don’t want to be a Luddite, and I don’t want teachers to be Luddites, because they are the ones nurturing and nourishing the future generation.
That’s my take on it. Again, educate yourselves about it, use it cautiously, and be aware of its limitations and shortcomings, because there are many, there’s a long list of them, and we are hoping to mention some of them in our forthcoming systematic literature review. And again, thank you so much.
Ed Butch: Great.
Dr. Michał Wieczorek: Thank you as well. For my final point, I would like to tie back to education, because I tried to hint at it earlier: philosophy of education is also social philosophy and political philosophy. You cannot think about education in general without thinking about the shape of society, about political issues, and about ethical issues. Any change introduced into schools is going to have ripple effects. It will change how we ultimately relate to each other, how we think about our participation, and how we think about what’s important and valuable to us.
We have to remember that because of this, the use of AI in schools is a special case where perhaps more caution, and definitely more attention, is required. Educators over the years have been great at responding to challenges and educating themselves first before passing on the knowledge to others. It is important that we do not treat the deployment of AI in education as a merely technical challenge, as if we just need to develop better tools. We have to ask ourselves what we want to achieve through schooling and what kind of citizens and persons we ultimately want to create, and not just what is the most efficient way to make sure that primary schoolers know their multiplication tables.
My goal for the next series is also to make sure that the other part of learning, the social, personal, and moral development, is not put to the side just for the sake of greater efficiency in acquiring knowledge.
Ed Butch: Wonderful, wonderful. Well, that concludes our conversation for today. I’d like to once again thank Drs. Hosseini and Wieczorek for sharing their expertise with us.
Dr. Michał Wieczorek: Thank you again. Thank you so much. It was a lovely conversation.
Ed Butch: I invite all of our listeners to visit citiprogram.org to learn more about our courses and webinars on research, ethics, compliance, and higher education.
How to Listen and Subscribe to the Podcast
You can find On Campus with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/1896915.rss” into your podcast app.
Recent Episodes
- Season 2 Episode 4: AI in the Classroom (Part 1)
- Season 2 Episode 3: RealResponse: Anonymous Reporting on College Campuses
- Season 2 Episode 2: Faculty Connections in Online Learning
- Season 2 Episode 1: Student Mentorship
Meet the Guests
Mohammad Hosseini, PhD – Northwestern University
Mohammad Hosseini is an assistant professor in the Department of Preventive Medicine at Northwestern University Feinberg School of Medicine. Born in Tehran (Iran), he holds a BA in business management (Eindhoven, 2013), an MA in Applied Ethics (Utrecht, 2016), and a PhD in Research Ethics and Integrity (Dublin, 2021).
Michał Wieczorek, PhD – Dublin City University
Michał Wieczorek is an IRC Government of Ireland Fellow at Dublin City University. His project entitled “AI in Primary and Secondary Education: An Anticipatory Ethical Analysis” deals with prospective developments in the use of artificial intelligence in education and their ethical impact.
Meet the Host
Ed Butch, Host, On Campus Podcast – CITI Program
Ed Butch is the host of the CITI Program’s higher education podcast and the Assistant Director of Content and Education at CITI Program. He focuses on developing content related to higher education policy, compliance, research, and student affairs.