Back To Blog

On Tech Ethics Podcast – Recent Developments in AI Regulation

Season 1 – Episode 14 – Recent Developments in AI Regulation

Discusses recent developments in the regulation of artificial intelligence.

 


Episode Transcript

Click to expand/collapse

 

Daniel Smith: Welcome to On Tech Ethics with CITI Program. Our guest today is Brenda Leong, who is a partner at Luminos.Law and an adjunct faculty member teaching privacy and information security at George Mason University. Today we are going to discuss the recent developments in the regulation of artificial intelligence.

Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have any questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in the podcast are solely those of our guests. And on that note, welcome to the podcast, Brenda.

Brenda Leong: Thank you very much. Great to be here.

Daniel Smith:  It’s great to have you and I look forward to learning more about what is going on in the regulation of AI. But first, can you tell us more about yourself and your work at Luminos.Law?

Brenda Leong: Yes, I’d be happy to. Thanks. Luminos.Law was established about four years ago. It used to be called BNH.AI. We changed our name, no other changes involved. As a partnership between computer scientists and lawyers specifically to address AI, products, feature, services as they were being developed and used in the industry today. So the idea is to sort of bridge the gap between the governance policy side of a company and the technology engineering programming design teams that are either building or procuring models that the companies are going to be using.

And the reason for that is, I think probably fairly obvious because of the challenges around figuring out how to take into context the compliance requirements, the governance requirements of these really complex models as they are being implemented into whatever particular company’s products and services. And just to sort of emphasize that a little bit, our clients are doing things with AI today.

So this is not just about generative AI, which I know we’ll talk more about soon, but this is what we’ll call traditional machine learning, predictive machine learning that’s been around for a long time, but most specifically for about the last 10 years or so, since we’ve had big data availability and advanced processing capability. And these are companies using these systems now in real time.

And I’m just emphasizing that for the idea that this is not just futuristic or coming, our goal is to help companies do this responsibly now, because even though people sort of talk about a Wild West of AI or there’s no AI law or things like that, in fact, all of the laws still apply. And so companies that are working, for example, in a regulated industry like finance or healthcare or something like that, already have a very significant compliance burden.

And just because they’re using AI systems now to do those tasks doesn’t mean that they are not still required to demonstrate their compliance with existing regulations. And so that’s been the challenge in the last several years, which is just now getting more and more attention and very rightly so, I think from the president, from congress, from regulatory authorities around the world. But the companies have been aware of these challenges for a while and seeking responsible ways to do that.

Daniel Smith: You mentioned a few terms. You mentioned AI, machine learning and generative AI. So before we get further into the regulation of AI and talking more about the recent developments that are going on, can you briefly define those terms?

Brenda Leong: Sure. So artificial intelligence is obviously an area of computer and scientific development that’s been around for a long time going on close to a century, really now. 70, 80 years for sure. And it basically doesn’t have one consensus agreed on definition, but in general, it sort of means anything that would take active human intelligence to do otherwise. So it can cover a lot of ground.

And in the past, many things that have been considered AI, if we look back at them now, might not feel like AI because they feel like basic or minimal. But in fact, they were pretty progressive advances for computers to be able to do certain kinds of functions. And so there are a lot of different kinds of AI. And most recently, as I mentioned, is machine learning, which is programs that can essentially edit and adapt themselves based on their performance with training data, testing data, and then in the real world deployed for live data.

And these systems are different from past computer programs in the sense that instead of a rules-based set of steps where you do the same set of rules over and over to get the outputs or following a recipe, the machine learning models can adapt and change based on their success at meeting. And so they are in fact not churning out the rules for something, but they are churning out predictions, scores, rankings, percentage, likelihoods basically of something matching or being categorized in a certain way, meeting a defined set of goals.

Like a cat is a cat, a picture of a cat is a picture of a cat kind of example that we hear a lot. So this is a new and developing kind of computer technology. And because it’s changing as it operates, it makes it more challenging to understand, to replicate, to see down into the details. That’s why people use terms like black box and things like that, even though there are ways to test and evaluate these systems. So that’s machine learning.

And then within that, we most recently have the bright shiny toy in the room of generative AI, which generates new content on its own as opposed to generating predictions or rankings or categorical matching percentages of the data that it’s being fed. It’s actually creating new data. So at this point in time, that’s primarily text, images or code are the three main ways that that’s being used and has its own pros and cons. Can be a lot of fun to play with, but we can talk more about it as we go.

Daniel Smith: I think that that’s a really helpful overview for everybody. So I think arguably generative AI has kind of brought the debate or the initiative to regulate AI to the forefront in the past year or so. So when it comes to regulating AI, I know there’s some new regulatory initiatives in the US such as the Biden Administration’s recent executive order on safe, secure and trustworthy AI. Can you tell us more about this executive order and its current status, where it’s at?

Brenda Leong: So I do want to emphasize that a lot of the initiatives we’ve seen over the last few years, including the executive order, are not just about generative AI. They are in fact about AI writ large. So the EU AI Act came out as a draft a couple years ago and has been updated to more explicitly address generative AI. The NIST AI Risk Management Framework, which was published about a year ago, was in design for a couple years and addresses foundation models or generative models.

And the executive order also includes AI writ large or machine learning writ large as well as generative AI. So we definitely need this sort of guidance and regulation for the whole gamut of these kinds of systems because they impact us in many very important ways, but. And I think we’re finally seeing sort of the regulatory world catch up with that with some of the examples that I just gave over the last couple of years, and most recently, the executive order, which particularly focuses on creating a risk-based or adopting a risk-based approach to how to identify the systems that most need oversight and control of some kind.

And then delineating who is responsible for developing some of the standards and controls that we think we’re going to need. So it involves assigning responsibilities to a number of different federal agencies, Homeland Security, Department of Justice, and others. Most particularly the Department of Commerce and NIST are singled out for a lot of the very detailed guidance development. NIST is not a regulatory agency. It’s a standard setting body.

So this will not be regulatory controls. They would be implemented as regulatory mandates by some other agency or by Congress potentially or by other aspects of the executive order. But they’re assigned the task of developing guidelines and best practices, working with industry to sort of get consensus on those kinds of approaches to develop benchmarks for how to evaluate an audit an AI system, which is to say, assess whether it’s operating properly and fairly across whatever particular measure might be appropriate, to develop other kinds of guidelines for systems that might impact national security.

And then of course, as we’ve talked about with generative AI, to develop ways of testing and evaluating generative AI systems because they operate and need to be assessed slightly differently than machine learning systems as a whole. And that includes a lot of call for things like red teaming, which is testing the filters that they’re built with to make sure that they hold and that they can’t be used hopefully to generate bad information or harmful information or spread misinformation or things like that.

So a lot of assignment of responsibilities, a lot of timelines for all different aspects of this. But in just reflecting, I think that it’s a very thorough and attempting to be comprehensive approach to we’re recognizing that these things are challenges and we need to really get moving on how to figure out safe ways for those systems that particularly impact us in very high risk ways.

So again, regulated areas already such as finance and healthcare and energy and communications, as well as national security and things like that. So a lot to be done based on it, but it certainly lays out a path for that and calls on Congress, of course, to do their part as one of the actors here, and then a lot of federal agency involvement as well.

Daniel Smith: Now you mentioned the EU AI Act, which reached a provisional agreement in the European Union last week. I know this act also takes a risk-based approach to the regulation of AI. So can you tell us more about the act and when it may be finalized?

Brenda Leong: Yeah. So obviously the EU AI Act was initially proposed a couple years ago, so has really been leading the conversation on this even though it’s just now coming into its final form. And we don’t actually know the final language that was agreed upon in this most recent round, but we do know that they have reached agreement in principle and in some of the press release and clips that people have released can tell a lot of the key points such as scope that’s being created by what the definitions are of what kind of systems are going to be addressed as high risk.

And they’ve created this sort of label of high risk systems with systemic impact, which is things like energy grids and other very impactful levels of programming or models and systems that are integrated into very systemic operations. So we can tell that they’ve put a lot of time into the nuance and composition of what’s going to be high risk and what protections are going to go into place for each of those, which I think sets a great example for how legislation is going to occur elsewhere, potentially, whether it’s in further carrying out the directives under the executive order or in other countries.

They do take a sort of directive approach in the sense of the EU has clearly identified certain categories of systems which will be banned, which are considered so risky that they cannot be done at all. And there may be some very minor exceptions to some of those bans for things like law enforcement. They’ve also created the high risk tiers for where the greatest layer of protections are going to be applied.

And then most importantly is that over the last few months since the summer parliamentarian version and then the agreement just reached, they have intentionally incorporated protections around foundation models, which would be the generative AI kind of models like the GPT system that powers ChatGPT and the other versions and competitors for that.

So those were not really contemplated in the original EU Act that was initially drafted in 2021, I think, but clearly are a high focus now and a very great importance to the regulators in terms of how those are going to be addressed because they obviously have to be tested in their initial development and design and approach. But then also the point is that they can be incorporated and used into many different larger systems in many different ways across many different contexts.

And so how to put appropriate controls and where to put the burden for those controls, whether it’s on the initial developer of, in this case, large language models or whether it’s on the operator who’s using a system that it’s incorporated into. And clearly there has to be a share of that for both in most cases. And also directing clear communication of risk and details about the operational performance of the system from the developer to an enterprise customer who’s going to incorporate it and operate it.

So all of those are things that the EU is taking, I think a very thoughtful and holistic approach to. People of course can disagree about the particular decisions they’ve made over which systems fall into which categories. And again, we don’t have the actual completely final language available, but I believe that will be coming out within the next few weeks. And then there are still stages that I believe have to pass for final passage of the act and then a certain amount of time for different parts of it before it goes into effect.

I think some of the bans and prohibitions might go into effect as soon as six months or so. And then the act in its entirety, I believe goes into effect within two years. So lots of steps still to take, lots of interpretation and understanding of whatever language is going to come out for us to consider, but again, I think setting a really good example of a thoughtful approach to how to legislate protections in using these different systems.

Daniel Smith: So I want to get back to the executive order in the EU AI Act and kind of talk through some of the key takeaways. But still, there’s some other recent developments that are shaping how people think about AI ethics and regulation such as the Bletchley Declaration on AI safety and the G7 guiding principles. So can you quickly just talk through how developments like these affect the regulation of AI or could affect the regulation of AI?

Brenda Leong: Yeah, I think that those two examples in particular reflect the sort of global nature of the attention and concern around figuring out how to regulate and control AI. So we’re focused on what’s going to happen in the US and we are following what’s going to happen in the EU, but this is of equal concern and intention elsewhere in the world as well.

And so the countries that participated or signed on to the Bletchley Declaration and that participated in the G7 meetings or have come out in support of the principles from codes there, again, are sort of reflecting where are we finding common ground globally and how protections around these systems are going to be operationalized. So they all sort of espouse the same big picture, principles and ideals, human rights, transparency, explainability, fairness, accountability, ethics, bias mitigation, privacy, all the same ideas.

Obviously where it gets tricky is when you start to have to narrow that down to what does that mean? What kind of accountability? What kind of testing is done? Who performs it? Who reviews it? Where is it reported? Who says what standards it has to meet? That gets very, very tricky. But all of these are taking the same approach that those are the kind of controls we need, both the Bletchley and the G7 also took similarly a risk-based approach.

What are the high risk systems. The G7 guiding principles, the first principle is doing risk assessments over the life cycle of the model of the system. So that’s sort of the foundational step. And then starting to identify harms and how to account for model operations once it’s in play. So they’re all kind of taking that same high-level approach.

And like I said, it’s just going to come down to what the EU AI Act says. It’s different from what standards and controls the Department of Commerce might come out with in the US. This might be different from Singapore, which has a very advanced AI guiding principles or guiding practices and protections, as well as other areas in the world. India, Brazil, China, Japan are all developing these same kinds of controls and standards.

Daniel Smith: Before we hear more from Brenda, I want to quickly tell you about CITI Program’s Data Management for Social Behavioral and Educational Research course. This course has been designed to help researchers understand the basics of planning for and managing data generated through social, behavioral and educational research involving human subjects. You can learn more about this course and others at CITIprogram.org.

And now back to the conversation with Brenda. I know you just touched on the spirit of all of these different initiatives, but can you talk a bit more about what the regulation of AI might look like in the near future? For example, you mentioned earlier the term red teaming. Things like that, do you think they’ll become more commonplace globally for people working on AI systems?

Brenda Leong: Yeah, I do think so. I hope so. I don’t know that the standards will be the same everywhere, probably will not be the same everywhere, but hopefully with enough in common to make them usable. So red teaming uses a term that grows out of a history of use in the security world, and before that, maybe even in the national security realm with concerns back in the Cold War and things like that.

But it’s been used for a long time in the security context about challenging the protections on a network, network security controls, and how secure are they and how easy are the access points to break or to compromise the protections that are in place. So the term is mostly applied in the current context around generative AI. And the idea is these generative AI systems are trained on, of course, mammoth amounts of basically the internet, for lack of a more precise way to say it.

And so just almost an infinite amount of information that might have been collected, or at least an infinite variety of information that might’ve gone into the training sets for some of these. And a lot of that is things we don’t want perpetuated or we don’t want easily available. And so let’s just use ChatGPT as the example because that’s the name that most people know, but this is not in any way reflective of them, particularly as opposed to others.

But for example, if ChatGPT was a program that was trained on the internet and then just sort of opened up to people to use, you would be able to ask it and generate and get some very objectionable things, a lot of wrong information, which you can get anyway, but we’ll get to that in a minute. But it would just sort of be a Wild West in a real sense.

And so there are a lot of controls and filters and rules that are applied around it before it’s open up even in a public facing version. And then more controls and filters can be put on for the enterprise version that companies may use themselves.

And the idea of that is to provide protection of, for example, teenagers trying to access how to commit suicide or for someone to try to figure out how to create chemical weapons or for someone who just wants to generate really offensive content. And so there’s just some sort of content that we don’t want to have it contribute to or lead people into. And so red teaming is trying to test those limits.

It’s also just trying to test and see that it’s working generally, but at the most foundational level or at the core, it’s trying to make sure that those protections are holding. Can I get it to generate bad outputs, bad profanity filled things in a letter that I’m going to write to my boss? Or can I get it to generate misleading information about scientific theories or findings about various things like vaccines or essential oils or things that might be controversial under some various interpretations?

And so red teaming is testing all of that. There are a lot of techniques for it to sort of trick as it were, the program. And of course, we’ve seen the headlines of if you spend long enough in an iterative cycle with it, I won’t really call it a conversation, but it’s an iterative set of prompts and responses, and it ends up that the program tries to convince the reporter to leave his wife and marry the program. So you get some kind of crazy stuff. And red teaming is a way of approaching that.

But there are a lot of other ways to monitor these systems as well. Audits are one of the key points that are coming up in a lot of proposed regulation and legislation. New York City put in the first audit requirement law into place this last year that went into effect over the summer for companies that use AI in an employment context for anyone within the jurisdiction of New York City. Had to do an audit to demonstrate that it was not unfairly biased along protected classes of race and gender.

And so that was the first definitely, I’m sure not the last, of a very particularized way of who is going to do the audit, what’s it going to look like, what’s the output going to be and how is it going to be measured. And so I think it was a learning experience with a lot of information that has been done on things like that.

My firm has been doing audits for several years now to provide statistical analysis and measures of audits of model performance. And so that I think is the other main tool in addition to red teaming that we’re going to see expensively in coming guidance on trying to create some standards and commonality across what audits will look like or how they will be performed.

Daniel Smith: With those things in mind, red teaming and audits, do you have any advice for organizations when it comes to preparing for this?

Brenda Leong: So this is a really fast moving place both technologically and in the marketplace and in all kinds of ways. But yes, there are resources out there for companies. Obviously the first thing to do is to give the time and attention within the company that this is something that needs resources put against it and some kind of evaluation. So an AI inventory. Where and how are we using AI in our business?

Are we using it to run our HR systems for hiring and promotions? Are we using it as part of our actual products and services out to our customers? Are we using it in some other form, maybe in our payment systems and applications? Are we using it with our enterprise business partners or customers in some sort of way that we might have contractual agreements around? So how are we using AI?

And then documenting that and then figuring out which of those particular models are high risk. So there’s a risk assessment step that can be done, and figuring out which ones of those might carry the most risk, identify what the potential harms are, figure out how to measure or identify and categorize those harms, and then figure out what are the things that we can do internally to address those either within say our procurement process or in our customer feedback forms, or in our general oversight and internal auditing or internal measurement processes.

And so to do all those things relative to an AI system, there are some tools. There’s the NIST AI Risk Management Framework, which is a very excellent, very detailed guide to how to build that sort of oversight. It isn’t an oversight itself because it’s meant to be industry and business agnostic, but it’s the guide to how to do that. And the NIST has published a very detailed playbook that goes along with it that provides a lot of very granular examples and questions and context for how to govern this internally to a company.

There are businesses, our firm does this. There are consultants, big consulting companies who provide this kind of guidance and services. There are very small and particularized companies who have started up providing AI dashboard sort of services to help a company identify and track their use of their AI models internally that are growing. So there are a lot of different sources of help in this regard.

But obviously the first and foremost important thing is that the company decide that this is going to be a priority and put the time and resources, put somebody in charge of it and put some attention to this so that they can develop some sort of governance oversight, some sort of documentation trail, determine what they’re going to measure and how they’re going to measure to whatever particular standard, and then actually carry that out and be able to do that.

So we know that the FTC has already demonstrated interest in being able to see that sort of record reflected in the financial services industry and banking industry. They have experience with this from the last 10 years or so of something called model risk management, which is not specific to AI, but which is about management of detailed and advanced statistical analysis.

And they are building that out now to encompass their efforts around AI because they sort of already have a lot of that governance infrastructure, risk assessment infrastructure in place. And so they’re able to make that additional step pretty easily. So those are some of the things generally, and then obviously every company or industry is going to have to make it work for them and the way that applies based on either what regulatory restrictions they’re under, what jurisdictions they have to comply with, and just their own internal values and priorities for the services.

Daniel Smith: That’s all really helpful. And I’ll include links to some of those resources that you mentioned in our show notes so that our listeners can learn more. On that note, do you have any final thoughts that you’d like to share that we did not touch on today?

Brenda Leong: I guess I think I’d maybe just like to re-emphasize that these technologies hold a lot of promise, a lot of efficiencies, a lot of opportunity. Some of them are new and faster and better in many cases, ways of doing things that we already did. And in those cases, we may already have a level of oversight and regulation that’s if not sufficient, at least usable to get us moving in the right direction. But then of course, they’re also creating opportunities to do new things, to do new features and functions that we couldn’t do before.

And so being aware of the risks that are coming along with that, both individual risks of the technology as it’s being used in the moment to the person, but also sort of the bigger scale risks. What are we doing? What decisions are we sort of making socially? And this goes to things like the recent writer strike and screen actor skilled strike where they won concessions about how AI was going to be limited and how it could be used in future entertainment applications.

And that’s not a technology-generated decision. That’s a social and political and employment-based or labor market decision. And so I think we have to keep in mind that we have these bigger picture things. AI has a very significant impact on the environment. And so we need to keep environmental impacts in view as we make decisions about what we’re going to automate using some of these systems.

That’s one of the costs, that’s one of the impacts, that’s one of the potential harms. So that needs to be factored in to some of the risk analysis. And then we also just need to make sure that we’re sort of generating a general educational understanding in people who are using these to at least maybe have, know what questions to ask. Kind of like people had to learn way back in the early days of the internet and email what spam email was and what phishing was and how not to trust potential Nigerian princes and things like that.

And then also over the last decade plus, people have had to learn what data privacy is and that maybe they need to be very careful about what information about themselves they’re willing to exchange in return for services on the internet or how they might want to ask questions about the organization that they’re giving it to. What protections does it have in place? What commitments does it make to use their data responsibly? Now, they also need to learn about the risk of some of these systems.

So for example, generative AI is known to create what are sort of colloquially called hallucinations, but in fact, it’s just errors. It’s just bad information as part of its output. It will just make up things that are not real or are not true and include that in a very confident and real and smooth sounding answer. And it’s very difficult for people, especially if they’re interacting with it about things that they don’t already know a lot about, to identify that it. It’s basically impossible to identify that without doing additional research.

So we see it being challenging for lawyers in my own profession who have gotten in trouble for using it and had it make up cases, cite to cases that don’t exist. We see it impacting our educational system where students may use it in various ways. And there are probably good ways and bad ways to use it. I don’t think just prohibitively banning it entirely is realistic, but there certainly have to be some controls around it.

And we see it being used in many other contexts as well, and people understanding the limitations and being able to sort of critically and thoughtfully challenge the answers to make sure that they’re using it in ways that benefit them and not sort of getting suckered so to speak, is going to be a big learning curve I think.

We’re going to see probably, unfortunately, more and more headlines of bad examples of that in ways like the asking the reporter to marry it, but maybe less humorous and more fundamentally problematic outputs that we’ll see. So it’s a challenge. There’s a lot of great systems and great features that are coming along with these technologies, but we all have to sort of keep our wits about us as we start to use them.

Daniel Smith: Certainly. And I think all those points really underscore the need for these forthcoming regulations and also just people becoming more familiar with these tools and educating themselves on how they work and some of their flaws. So I think that is a wonderful place to leave our conversation for today. So thank you again, Brenda.

Brenda Leong: Thanks very much for having me. I appreciate the chance to talk about it.

Daniel Smith: And I also invite everyone to visit citiprogram.org to learn more about our courses and webinars on research, ethics and compliance. You may be interested in our Essentials of Responsible AI course, which covers the principles, governance approaches, practices, and tools for responsible AI development and use. And with that, I look forward to bringing you all more conversations on all things tech ethics.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when episodes are newly released. You can also subscribe to this podcast, by pasting “https://feeds.buzzsprout.com/2120643.rss” into your your podcast apps.

apple podcast logo spotify podcast logo amazon podcast logo


Recent Episodes

 


Meet the Guest

content contributor brenda leong

Brenda Leong, JD, CIPP/US – Luminos.Law

Brenda Leong is a Partner at Luminos.Law and an adjunct faculty member teaching privacy and information security at George Mason University.


Meet the Host

Team Member Daniel Smith

Daniel Smith, Associate Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Associate Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.