
On Campus Podcast – Generative AI in Higher Education

Season 1 – Episode 55 – Generative AI in Higher Education

Generative AI is reshaping higher education: tailoring learning materials, streamlining administrative tasks, and supporting research. It can personalize content for each student and provide rapid feedback on assignments. Yet challenges remain, including privacy concerns around student data, content quality and bias issues, costly implementation, and the need for faculty training. Balancing automation with human interaction is vital. Despite these hurdles, generative AI promises to revolutionize higher education, offering more efficient, personalized, and data-driven learning.


Episode Transcript


Darren Gaddis: From CITI Program, I’m Darren Gaddis, and this is On Campus Today. Today I spoke with Sukrit Venkatagiri, an assistant professor of computer science at Swarthmore College. As a reminder, this podcast is for educational and entertainment purposes only. It is not intended to provide legal advice or guidance. You should consult with your organization’s attorneys if you have questions or concerns about relevant laws and regulations discussed in this podcast. Additionally, the views expressed in this podcast are solely those of the guest and do not represent the views of their employer. Hi, Sukrit. Thank you for joining me today.

Sukrit Venkatagiri: You’re welcome, Darren. I’m glad to be here.

Darren Gaddis: To get us started today, what is your educational and professional background?

Sukrit Venkatagiri: So I completed a PhD and a Master’s in Computer Science at Virginia Tech with a focus in human-computer interaction. So essentially I studied how to design technology to support large-scale collaborations. More specifically, my dissertation research focused on effectively and ethically leveraging online crowds to scale up investigations in journalism and human rights activism. I’ve also worked at Facebook and Microsoft Research, and I most recently completed a postdoctoral fellowship at the University of Washington Center for an Informed Public.

Darren Gaddis: Our conversation today is going to be about all things artificial intelligence. To help ground our conversation, what is artificial intelligence, or AI, and how does AI differ from generative AI?

Sukrit Venkatagiri: Yeah, I am happy to talk about that briefly. So AI is a field of computer science that is concerned with simulating human intelligence, specifically tasks such as reasoning, learning, interacting, and oftentimes predicting. So you might have a language model that learns specific patterns in text and can be used to predict the next word in a sentence. That’s autocomplete, or today’s more sophisticated version, ChatGPT. Or you might have a computer vision model that’s trained on images of animals and can distinguish between different types of animals, or, as a more sophisticated example, self-driving cars and the cameras they have built in.
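To make the “predict the next word” idea concrete, here is a minimal, illustrative sketch: a toy bigram model that counts which word tends to follow which in a tiny corpus, then predicts the most frequent follower. This is not how ChatGPT works internally (modern models use neural networks trained on vastly larger corpora), but the predict-what-comes-next framing is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the training text a real model learns from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' most often here)
```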

Generative AI, on the other hand, is artificial intelligence that’s capable of creating or generating new content instead of predicting what comes next in a sequence or distinguishing between given pieces of content. Generative AI can generate text, images, video, even video games. And so it uses similar underlying techniques, that is, learning the patterns and structures of input training data. But what’s different is that it then generates new data that has similar characteristics to the input data.
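Continuing the toy sketch above, the generative step reuses the same learned bigram counts but samples from them repeatedly, producing new word sequences that share the statistical characteristics of the training text without simply replaying it:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the same bigram statistics as in the prediction example.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Generate new text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length - 1):
        counts = following.get(words[-1])
        if not counts:  # dead end: nothing ever followed this word
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. 'the cat ate the mat the' -- new, similar text
```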

Darren Gaddis: I’m familiar with how students might engage with AI, but how might administrators, faculty, and staff engage with AI in ways which might potentially be different or similar to that of students?

Sukrit Venkatagiri: So I’m a computer science faculty member, and so I use it to help me with my coding. Sometimes I try to use ChatGPT for brainstorming and ideating. I don’t necessarily use it for writing, which might be a difference: I value the creativity of my writing, whereas students, if they’re on a deadline or something, may try to use ChatGPT to fill in the blanks on certain assignments.

Darren Gaddis: Recently there has been some concern within higher education regarding third party vendors on college campuses and specifically generative AI. Could you briefly explain what’s been going on over the past several months?

Sukrit Venkatagiri: Yeah, I’m happy to. So maybe taking a step back in time from generative AI, I think there have always been concerns with third-party vendors on college campuses, from concerns over the cost of Google Workspace and the Microsoft Office suite to Piazza selling student data without consent. So maybe to explain the earlier concerns briefly: Google and Microsoft often provided free or extremely low-cost services, such as Google Drive or Microsoft OneDrive, to college campuses to get students to use their software. And the companies always knew, or at least I think they knew, that it was an untenable business model. And so in the last couple of years, they’ve actually increased the prices on these software suites, which has led some universities to decide not to use them.

More closely related to what we’re talking about today would be Piazza, which was actually taking the data students submitted on its online Q&A forum and selling it, without students’ consent and without even faculty members’ consent.

And so coming back to generative AI, I think there are similar concerns around privacy and consent. Universities are unable to control how these vendors store, use, and transform the data that’s generated by using these third-party services. So for example, Zoom recently updated its privacy policy to indicate that it could use data such as conversations, transcripts, and perhaps even video content as training data for generative AI models, and that the company could sell that data to third-party companies or use it in its own models. But after criticism, they quickly walked this back.

Darren Gaddis: With this information in mind, are there any benefits to higher education institutions utilizing generative AI from third party vendors?

Sukrit Venkatagiri: Yeah, I think there are several benefits to being able to use AI. I think primarily the first one is that generative AI tools, and AI tools more broadly, are here to stay. They can increase people’s productivity, they can increase people’s creativity, and so the faster students can learn to use them ethically and effectively, the better it will be for their personal and professional development.

One example that I use is GitHub Copilot. So it uses generative AI techniques to help you write code faster. And oftentimes if you’re a software engineering or a computer science major and you want to become a software engineer, if you can write code faster, that’s actually better for you in the workplace. ChatGPT could be used in a similar manner for writers. So a lot of people think ChatGPT may be used to replace writers, but it can also be used to augment writing or writers themselves.

Darren Gaddis: On the flip side of that, are there any potential concerns for colleges and universities as it relates to utilizing generative AI?

Sukrit Venkatagiri: I think the primary concern that I’ve seen faculty have, and that I’ve had myself, is that we often lose control over how our data and our students’ data is stored and used. This could potentially be a violation of FERPA, because the data that exists between a faculty member and a student, or even a student’s own data, is often considered private to them. It may also violate each student’s individual right to privacy. So we often think that we might be using the software to communicate with someone and that nothing is being stored, but what if in reality every second is being recorded and sold to another company? That’d be a huge privacy violation, to say the least.

So returning to the Zoom example, or any third-party video conferencing software: hypothetically, they could sell this data to third-party vendors or use it to develop new features within their product, like creating a digital twin of yourself to attend meetings. And coming back to Zoom specifically, I think after a bit of public outcry, they actually clarified in their policy that they would only use and retain metadata, not content, for their AI models. So think meeting times and durations, not actual content like audio and video.

Darren Gaddis: With this Zoom example and other third-party vendors in mind, what other concerns should we be aware of as AI continues to be an evolving and ever more important space within higher education?

Sukrit Venkatagiri: So I think another concern with generative AI technology is that we might see it as a replacement for human labor, or as a crutch that we lean on too heavily. And so I think it’s important to remember, as students but also as people in the workplace, that the goal may not necessarily be to replace our own labor, but rather to help us do what we want to do better, and maybe a little bit faster. And especially when it comes to college campuses and educators, it’s really important to understand how these tools affect students’ learning, because if the goal of a writing assignment or a programming assignment is to teach someone how to write or how to program, and they end up using a generative AI tool instead, they may never learn that valuable skill in the first place. And then when they go into the workplace, that actually becomes detrimental when the generative AI model doesn’t work as well, or isn’t trained on data that can help with the particular task they’re trying to do.

Darren Gaddis: As AI continues to evolve and become even more important within the field of higher education, what can institutions do to ensure confidential and sensitive information for not only research participants but also students and faculty members, administrators and staff is kept secure?

Sukrit Venkatagiri: I think, at least I feel like, there are only two ways to ensure that happens, maybe three. The first one is not using third-party services: you can refuse to use these services, and therefore all the data that you have is kept secure because it never leaves campus. A more moderate solution might be to require these third-party vendors to store the data on site at the university, or in a way that the university has control over. A looser model, if you don’t necessarily want to, or aren’t able to, maintain the infrastructure to store the data yourself, is to rely on the legal protections that are in place: having contractual obligations, and making sure that FERPA is actually implemented and that these contracts are in compliance with FERPA. And I think universities themselves maybe should have the right to audit these third-party vendors, and maybe even the government should play a role in this, making sure that data that is meant to be secure is actually kept secure.

Darren Gaddis: What else should we know about utilizing third party vendors and generative AI within higher education?

Sukrit Venkatagiri: I think, coming back to what I was saying earlier about the concerns and the benefits, we shouldn’t approach generative AI from a position of fear, but rather from a position of curiosity. “Oh, this is cool, but actually, before I use it, let me take a look under the hood. How is my data being used and stored, for how long, and by whom? Where can I delete my data? Do I really need to use these tools in the first place? And what are the pros and cons for my own professional development, but also what are the ethics around how these tools are being used outside the workplace? Maybe the people who were involved in the creation of data for generative AI weren’t paid fairly for their labor.” So I think it’s really important to ask these questions before you start to use a tool, before it becomes too late and you realize your data is actually being stored halfway around the world and being sold to a third-party company.

Darren Gaddis: Thank you for joining me today.

Sukrit Venkatagiri: Thank you, Darren.

Darren Gaddis: Thank you for listening to today’s episode. And be sure to follow, like, and subscribe to On Campus with CITI Program to stay in the know. If you enjoyed this podcast, you may also be interested in other podcasts from CITI Program, including On Research and On Tech Ethics. Please visit CITI Program’s website to learn more about all of our offerings at www.citiprogram.org.

I also invite you to review our content offerings regularly as we are continually adding new courses, subscriptions, and webinars that may be of interest to you, like CITI Program’s AI and Higher Education webinar. All of our content is available to you anytime through organizational and individual subscriptions.

 


How to Listen and Subscribe to the Podcast

You can find On Campus with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/1896915.rss” into your podcast app.





Meet the Guest


Sukrit Venkatagiri, PhD – Swarthmore College

Sukrit Venkatagiri is an Assistant Professor in the Department of Computer Science at Swarthmore College. His research interests are in social computing and mis/disinformation studies, where he explores the ethical design of technology.

 


Meet the Host


Darren Gaddis, Host, On Campus Podcast – CITI Program

He is the host of CITI Program’s higher education podcast. Mr. Gaddis received his BA from the University of North Florida, his MA from The George Washington University, and is currently a doctoral student at Florida State University.