
On Campus Podcast – AI in the Classroom (Part 1)

Season 2 – Episode 4 – AI in the Classroom (Part 1)

This episode is Part 1 of 2 of our conversation about AI in the Classroom. Part 1 discusses the definition of artificial intelligence in an educational context, current utilization, and the opportunities and challenges for educators.

 


Episode Transcript


Ed Butch: Welcome to On Campus with CITI Program, the podcast where we explore the complexities of the campus experience with higher education experts and researchers. I’m your host, Ed Butch, and I’m thrilled to have you with us today.

Before we get started, I want to quickly note that this podcast is for educational purposes only, and is not designed to provide legal advice or guidance. In addition, the views expressed in this podcast are solely those of our guests.

Today’s guests are Dr. Mohammad Hosseini and Dr. Michał Wieczorek. This episode is the first part of a two-episode release. Dr. Hosseini is an assistant professor at the Feinberg School of Medicine at Northwestern University. And Dr. Wieczorek is an IRC Government of Ireland Fellow at Dublin City University.

Welcome to the podcast.

Dr. Michał Wieczorek: Thanks for having me.

Dr. Mohammad Hosseini: Thanks for having us.

Ed Butch: Of course. And today our discussion is going to center around AI in the classroom. But, before we jump into the topic, can you both provide our listeners with an overview of your backgrounds and your experience with education and technology?

Dr. Michał Wieczorek: My research currently is funded by the Irish Research Council, and I have a two-year postdoctoral project on the ethics of using AI in education, and specifically in primary and secondary education.

I am a philosopher, and philosophy of education has always been very close to my heart. I wrote my PhD dissertation on a different topic, but I extensively used the philosophy of John Dewey, who was a great philosopher of education. So this was just an exciting opportunity to work more on the philosophical aspects of teaching and learning, as well as on how contemporary technologies are changing practices and standards.

And, of course, I previously worked with Mohammad, who was a great collaborator and a fellow PhD student at Dublin City University. Since then, he has been doing great work on academic integrity and AI, so it was just a pleasure to invite him to join me on this project and work together again.

Ed Butch: Wonderful. Thank you.

Dr. Mohammad Hosseini: Yeah. So, I’m Mohammad Hosseini. I’m an Assistant Professor at the Department of Preventive Medicine at Northwestern University in Chicago.

I’m trained in applied ethics, which basically means I apply principles, values, and virtues to real-life situations. I specialize in the ethics of research and research integrity. Within this small field, I’m primarily focused on the ethics of scientific collaborations and publications, attribution of credit and responsibilities, citation ethics, and open science, but my most recent work has focused primarily on the ethics of using AI in research.

In terms of education, I teach. I lecture on research ethics and integrity, and I’ve been doing that for more than eight years. I teach students from diverse backgrounds, from engineers and social scientists to biomedical scientists and clinical students at different academic levels, but also professional researchers at the NIH and other institutions. And that’s my experience with education.

Ed Butch: Fantastic. Well, thank you both. Obviously you both have great backgrounds that fit well with CITI Program and our focus on trainings around research ethics and compliance. So, I think that’s great.

But I want to jump right into it. AI is an area that is still constantly evolving, obviously. So I want to ask each of you: how would you specifically define artificial intelligence in the context of education?

Dr. Michał Wieczorek: So it seems that you’re starting with the most difficult questions, actually, because this is a problem that any AI researcher is going to tackle eventually: how do you define AI in general? And it gets even more problematic now that this technology is developing. Take Google Classroom, for example: their website will tell you that they use AI in Google Search and YouTube, and that, as such, they are incorporating AI in Google Classroom. But this is not exactly what we’re talking about.

We are talking about digital tools that use contemporary techniques such as machine learning, expert systems, large language models, generative AI, et cetera, and that are specifically designed and deployed to facilitate teaching and learning. So they might, for example, take over part of the teacher’s job, let’s say assessment. Or they might provide personalized feedback and examples to students who are trying to learn specific subjects.

Ed Butch: Right, thank you. I think we sometimes treat a lot of these as more theoretical ideas. But what are some ways that you’re really seeing AI currently being used in a classroom setting?

Dr. Michał Wieczorek: So probably the most common example will be ChatGPT, of course, because there’s been a great debate about how large language models such as this one can be used in the classroom. And this is just one of them. But it’s important to remember that tools like ChatGPT were not designed as education tools.

At the same time, there are already companies that have technologies, or are developing technologies, specifically for the classroom. So, for example, the publishing company Pearson is aiming to extend its pilot program of AI textbooks: textbooks with a digital copy that uses generative AI to generate new examples for students to study, or new problems to solve, for example when learning maths.

You also have tools that are used to track and monitor students’ behavior or engagement or even emotions, the latter being particularly problematic now, because the newly adopted EU AI Act bans emotion tracking in the classroom. But those kinds of systems, for example, monitor students’ behavior or facial expressions to flag which students are following what the teacher is saying, or how well they’re mastering a specific problem, based on their emotional makeup.

There are also tools that might offer automated feedback for teachers, for example by monitoring their in-class activity and looking for specific context clues, or maybe giving them feedback on their pacing.

And of course there are so many others. There are tools that are meant to provide individual tutoring outside of classrooms. Several years ago, Squirrel AI from China made a lot of noise in the media. Right now Khan Academy is also developing its own one-on-one tutor called Khanmigo. So there are many, many different applications, and it’s really difficult to collect them all under the one umbrella term of AI in the classroom.

Ed Butch: Definitely. I mean, those are all extremely interesting. The textbook example especially intrigues me. So it would basically be a live version of a textbook that is being updated regularly as the AI learns?

Dr. Michał Wieczorek: That’s the idea. And different companies, of course, would have different versions. But you already see textbooks being updated, let’s say, every year, with a new edition coming every year. Now the idea is that the textbook is supposed to adapt to particular students.

Ed Butch: Interesting. Wow, that’s fantastic. All right, so you’ve obviously covered a little bit of this already, but a lot of times we’re scared of new technology. So when you’re talking to others, what do you tell them are some of the primary benefits of integrating AI in the classroom?

Dr. Mohammad Hosseini: I’ll take that.

So probably the primary benefit is that they can augment teachers’ abilities and automate some tedious or time-consuming tasks like, I don’t know, admin. Everyone who has been involved in teaching knows how much administrative work might be involved: a lot of emails, a lot of back and forth with students. And a lot of AI systems can really help with those kinds of tedious and time-consuming tasks.

But also other tasks, like formulating exam questions. That is not something that teachers really look forward to for every exam, especially now with a lot of push to make oral examination a little more prominent. You cannot really ask the exact same question of 50 students who come to a session at different times. You have to have a range of questions. You really need to prepare a diverse set of questions, and AI can really help with those kinds of tasks.

Preparing presentations and slides is another thing. Teachers who use PowerPoint slides might always need images, they might always need new ideas for conveying certain concepts, and AI is extremely useful in those contexts.

It can also be helpful for grading, especially in cases where students submit their assignments digitally. AI can be super helpful for grading.

And all of these augmentations can help teachers do their tasks more efficiently and free up some time. Of course, just as in any other context, the question is, “Okay, if they don’t have to spend so much time on grading or this and that, what are they going to do?” At the moment, the speculation is that, “Well, they will have more time to spend with the kids,” or, “They will have more time for one-on-one.” But given that this trend of digitization might actually deter students from being enthusiastic about that one-on-one interaction, we’re still not sure exactly how this is going to pan out.

There are also speculations about how AI can improve assessments by reducing biases that teachers might have. There are reports that certain teachers might be biased against students from certain backgrounds, students who might not come from a well-off background, or who are of a certain race, or something like that. It is assumed that AI would not have any of these biases.

AI can also improve teaching skills. Teachers who are just starting, novice teachers, can use AI as a sparring partner. One can even create a dummy cohort with ChatGPT and say, “If I said this in class, what could be 10 possible questions students of that grade could ask about this specific issue that I just raised?”

Things like that can really help teachers think more broadly, and in a more versatile way, about the classroom and class management, and troubleshoot their own curriculum.

It can also help in classroom management, like Michał was talking about. We can also think about the other side of it, which is almost creating a panopticon situation where everybody constantly feels they are being watched, and how that’s going to affect… I mean, this is something that we’ll talk about later. But there are many, many areas where AI can help, and I think we are all excited about it. We are all afraid of it too, but we’re all excited about the possibility.

Ed Butch: Yeah, for sure. It seems like there are just endless possibilities.

Then there’s always the negative side. I think bias is important to think about, and I know we’re going to address that a little bit later.

But especially when we’re talking about technologies like this and utilizing them in a classroom, I always love to hear some personal experiences from our experts. So do both of you, or either of you, have a success story or a positive outcome that you’ve seen from using AI, either in your own teaching or research?

Dr. Mohammad Hosseini: In my teaching, when I teach research ethics and integrity, I demo ChatGPT and Gemini. And that is a great exercise in the sense that it allows me to show students and researchers how not to use these tools. And it creates an interesting interaction between me and the class, in the sense that we have a live example that we can all say something about, and it creates room for engagement, for me anyway.

And it then helps problem solving and critical thinking, because no real student is there being named and shamed. It’s an AI, a non-human agent, and so no one would take offense. Whereas if you bring a live example from, I don’t know, a student assignment into the class and say, “Oh, look at that. That part is problematic,” or, “That part is wrong,” or, “That part is off,” then that person might feel really bad about being used as a bad example.

But when you use AI as that bad example, then you really remove shame from being at fault or from making mistakes. And in that sense, I think it’s been helpful in my classes.

Ed Butch: Right.

Dr. Michał Wieczorek: For me, I’m not a very visually gifted person when it comes to preparing PowerPoint presentations, for example. I’m hopeless in terms of layout. But there are already existing AI tools that can help you with that, and I’ve been depending heavily on them. So in that sense, I can second Mohammad’s argument that they can save teachers a lot of time that they would otherwise spend on tasks that they might not be particularly good at or might not want to do.

Ed Butch: I mean, I completely understand that. I am not a graphic designer by any means, and so I always use the design elements in PowerPoint to help me make things look pretty. I totally get that.

You brought up some of these, Mohammad, especially in terms of challenges. So what are some of the concerns that you see educators face when trying to implement AI?

Dr. Mohammad Hosseini: That’s a major part of my day-to-day job, just trying to talk about these challenges and identify them. And you can see some patterns from the past: technologies just appear in the classroom. They are either brought in by teachers, or by some third party; a local government might invest in a specific tool, or the minister or department of education might push a specific tool. But very often those tools arrive in the classroom first, and then teachers are supposed to adapt to them without necessarily having any teacher training in advance.

So this is a major challenge. How should teachers change their practices, especially since many of them have been teaching for 20, maybe 30 years, maybe even more? They’ve been doing very well with the tools available to them, and suddenly this new thing expects them to adapt on the fly and change the way that they teach, often without getting adequate training. So that’s definitely a major challenge that needs to be addressed.

At the same time, teachers will have to, and unfortunately this task might often fall on teachers, deal with data management and privacy issues, because you have all kinds of data being generated by the students, and someone will have to be responsible for that. For parents and for students themselves, the first point of contact will be the teacher, and not a nameless company headquartered in Northern California, for example.

Speaking of tech companies: teachers are already exposed to a number of different influences and pressures. For example, as I said, from the local government that might have its own agenda for education, or maybe the federal or national government. At the same time, parents are involved. There are different political or ideological concerns at stake. And now we will have powerful technology companies getting more and more involved in education. It will be a challenge for teachers to navigate their own values and their own best practices while also trying to benefit from tools that will, of course, come with specific assumptions and specific values embedded in them.

And on that note, there’s also the issue of equal access, because this is precisely the question that we have to address: who do we think AI should be targeting, and is every student able to benefit from it equally? We have, of course, socio-economic divides and digital divides that are going to change how different students from different demographics will interact with those tools, or whether they will be able to interact with them at all, because they might not be able to afford their own personal device, let’s say, that can run such a program.

And there are, of course, many other concerns, and we need to be mindful of them because this is still an emerging technology. So an important point is that we do not yet know the full extent of the challenges that teachers are going to face.

Ed Butch: Yeah, definitely. The list just seems to be snowballing for sure. I mean, that seems like a lot to put on faculty and teachers. So how can we address these types of issues to make sure there’s a positive learning environment, both for the students and really for the faculty and teachers as well?

Dr. Michał Wieczorek: I have two responses to the question, and I think the first is the most important one: we need to involve the teachers, the parents, and the students. If those kinds of technologies are just developed in some kind of startup, or maybe a bigger company, and deployed in the classroom as they are, it is, of course, possible that they will happen to respond to the needs of the people who will actually be using them later on. But by involving the teachers, the parents, and the students themselves, asking them what they want out of AI and how they would like to use digital tools, there is a greater chance that the decisions that are made will benefit those most intimately involved with those technologies.

Because currently many decisions surrounding the uptake of AI are being made without teachers themselves. They might be made by the designers, of course, who will just develop something, but they might also be made by principals, or even at a higher level, at the ministry or department level. And teachers, parents, and students need to have their say, they need to express their concerns, and they need a chance to make sure that their hopes are also taken into account.

And another, broader conversation I think we should be having is just the conversation about pedagogy, because I think many people do not stop to consider those kinds of questions about what teaching is and what we want to achieve through AI in the context of teaching and learning. Sometimes teaching is framed as just transmitting information, and learning is framed as just acquiring some amount of knowledge that will make you a knowledgeable, learned person.

But there are so many other things at stake, such as social development, personal development, character development, moral education, and democratic education. Those are the things that we all need to account for, because those are the functions that schools are performing. And we need to remember that AI should also accommodate them and leave space for them, even if it might focus on other aspects of education in a given situation.

Ed Butch: Definitely. That’s a great point. Thank you for that.

You’ve both given some examples of how AI can really enhance the student experience, personalize it, and make that learning experience better for them. A question I have is: are there any particular AI tools currently that you find useful for this?

Dr. Mohammad Hosseini: Yeah, so I think in terms of content generation, generative AI systems have been really good at creating content that better suits different cohorts and groups of students. We know that they can be very useful in creating images, text, and now, more recently, video that can better resonate with students with certain abilities or from certain cohorts. And that is the core of what Michał was talking about in terms of adaptability: making sure that content is adaptable, and easily adapted to what specific cohorts might need to better learn and engage with it.

And this kind of adaptive engagement is ultimately going to improve the outcomes of education. If a student is better able to see themselves in an example, is better able to relate to an example provided by a teacher, be it, I don’t know, an image, a video, or a story, they’re much more likely to learn what the teacher is ultimately hoping they would learn, whether it’s a concept, an issue, or a topic. And in that sense, I think adaptability and engagement will ultimately improve outcomes.

Ed Butch: Great, thank you. And I just had a quick follow-up there as well, because it’s honestly something that I don’t always understand. I use AI as this broader term, but you, of course, specifically mentioned generative AI. Can you just give us a quick rundown of what that broader term really means and the different types of AI that might be out there?

Dr. Mohammad Hosseini: Right. So generative AI basically refers to systems that are equipped with large language models and can generate content. They are pre-trained on large amounts of data, be it text, images, or videos, using machine learning techniques, and they can produce content appropriate to the input provided by a user.

When it comes to education, there are a lot of plugins. The paid version of ChatGPT, for instance, offers users thousands of plugins that are developed and deposited by other users. And some of those are extremely helpful for education. For instance, you can find a plugin that is basically a physics tutor, and you can ask it to explain, for instance, I don’t know, the law of gravity as you would teach it to a five-year-old. Or you could ask it to explain the law of gravity as you would teach it in a graduate class at the highest level in, I don’t know, a country in the middle of Europe.

So depending on what you ask, you will get content that is adapted and adjusted accordingly. And this is the case for a lot of topics now. Just yesterday I saw a new plugin that was developed for teaching astrophysics. You can imagine that for someone who doesn’t have access to the most up-to-date resources on the content, or doesn’t have access to a tutor, this will be an amazing tool for self-education. Now, whether the motivation is there, and whether the right tools and hardware that might be required to complement and augment this are there, that’s a whole different question. But in principle, this opportunity is provided to a large number of students, or a large number of learners who might not necessarily be students; maybe someone in their 50s wants to learn about astrophysics. In that sense, this is a great tool, because it gives them a chance to ask questions.

If I read a textbook, I cannot ask the textbook questions after reading a difficult paragraph. But if you’re dealing with a generative AI system that is equipped with a plugin specifically trained on teaching astrophysics, you can ask 50 questions if you don’t understand something. And that is the core added value of using these systems, in my opinion, for one-on-one education: it allows you to ask questions on the spot. It’s almost like having a tutor that you can ask as many questions as you want, except in this case the tutor doesn’t get annoyed, it doesn’t get tired, and it’s there for as long as you want it to be there.

Ed Butch: Thank you for that clarification. That helps a lot. Talking about training the system, I think, really leads well into this next part in terms of AI and ethics. And so what do you all see as some of the ethical considerations that educators should take into account when incorporating AI into teaching?

Dr. Mohammad Hosseini: I’ll start with the biases and I leave the rest to Michał.

So one thing educators, and anyone who uses these systems, should be aware of is that these systems can be biased. That is partly because they are trained on content that might have been biased, and partly because they use algorithms that might be biased. And because these systems are trained on such large amounts of data, we don’t exactly know the sources of bias, which makes it difficult to trace those sources, or to remove the biased sources from the corpus to make the system less biased.

And when it comes to education, this is extremely important, because if certain biases have existed in the corpus and are therefore propagated by the machine, then a whole generation could learn biased content. The Mercator projection, I think, is a great example: the biased and flawed image of the continents that we see propagated over and over. It was developed in the 16th century, and there are still people who don’t know that Africa is about 14 times larger than Greenland. When you look at the map, you don’t really see that, because of that bias that has been propagated through generations and generations who never really understood that this is actually a biased image. And so, in that sense, these biases can be extremely dangerous, because they can define what future generations would believe to be ground truth, would believe to be an actual fact.

The other thing is that, and I recently wrote about this in a blog post that I’m happy to share with you and your listeners, some biases are so blatant that they can easily be identified, but there are some biases that are so subtle that we cannot identify them. And those are the ones that we should be even more scared of.

For example, I recently used an AI system to generate an image of a Muslim researcher in a lab using artificial intelligence. What I received from the AI machine was the face of a bearded man with a mustache on the head and body of a hijab-wearing woman. So this is disrespectful.

Ed Butch: Wow.

Dr. Mohammad Hosseini: And it is both erroneous and biased. It’s erroneous because the system has grafted the face of a man onto the head and body of a woman. It is biased because the system believes that if it’s a Muslim, it must have a beard and mustache. That’s the bias.

Now, this is a blatant bias, one that we can see because the content was visual. But a lot of the time, if it’s in text or in a video, we might not necessarily see it. And those are the biases that we should be extremely cautious of.

And so this brings me to the point that if you want to use a system like this, you really have to make sure that you are a content expert. If you’re an educator, you really want to make sure that you’re a content expert, to the extent that you can identify these subtle biases. That is, I think, key. Now, I’ve briefly discussed bias. I leave the rest to Michał.

Ed Butch: That concludes part one of our conversation. Tune in next month to hear Doctors Hosseini and Wieczorek finish our discussion around the ethics of AI in the classroom and future trends that they see on the horizon.

I invite all of our listeners to visit citiprogram.org to learn more about our courses and webinars on research ethics, compliance, and higher education.

 


How to Listen and Subscribe to the Podcast

You can find On Campus with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/1896915.rss” into your podcast app.





Meet the Guests

Mohammad Hosseini, PhD – Northwestern University

Mohammad Hosseini is an assistant professor in the Department of Preventive Medicine at Northwestern University Feinberg School of Medicine. Born in Tehran (Iran), he holds a BA in Business Management (Eindhoven, 2013), an MA in Applied Ethics (Utrecht, 2016), and a PhD in Research Ethics and Integrity (Dublin, 2021).

Michał Wieczorek, PhD – Dublin City University

Michał Wieczorek is an IRC Government of Ireland Fellow at Dublin City University. His project entitled “AI in Primary and Secondary Education: An Anticipatory Ethical Analysis” deals with prospective developments in the use of artificial intelligence in education and their ethical impact.

 


Meet the Host


Ed Butch, Host, On Campus Podcast – CITI Program

Ed Butch is the host of the CITI Program’s higher education podcast and the Assistant Director of Content and Education at CITI Program. He focuses on developing content related to higher education policy, compliance, research, and student affairs.