
On Tech Ethics Podcast – Understanding the Research Security Training Requirements

Season 1 – Episode 20 – Understanding the Research Security Training Requirements

Examines the requirements and various considerations for research security training.

 


Episode Transcript


 

Daniel Smith: Welcome to On Tech Ethics with CITI Program. Today we are going to discuss the requirements and various considerations for research security training. You’ll first hear from Mike Steele, who is an expert in the Office of the Chief of Research Security Strategy and Policy at the National Science Foundation. Mike will share some information on the federal research security training requirements and the National Science Foundation’s Research Security Training Modules. Then you’ll hear from Emily Bradford, who is the assistant director of research compliance at the University of Kentucky and a colleague of mine at CITI Program who has worked on the development of our research security training content. Emily will discuss CITI Program’s research security training options, which include the National Science Foundation’s modules and two other courses that are intended to help people meet the federal research security training requirements.

Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, views expressed in this podcast are solely those of our guests. On that note, let’s hear from Mike about the federal research security training requirements and the National Science Foundation’s research security training modules. Welcome to the podcast, Mike.

Mike Steele: Thanks so much for having me.

Daniel Smith: I look forward to learning more about the research security training requirements. But first, can you tell us more about yourself and your work at the National Science Foundation?

Mike Steele: As you mentioned, I’ve been a part of NSF since 2018. I started as a rotating program officer in the Division of Research on Learning in Formal and Informal Settings, the full name of the division, which is situated in the Directorate for STEM Education. My background is actually as a mathematics teacher educator originally. In 2022, in part because of this project related to the research security training modules and my expertise in pedagogy, I came into the Office of the Chief of Research Security Strategy and Policy, which was relatively new at the time.

I’d been helping out with some of the agency’s international outreach on research integrity and research security through a group called the Global Research Council, which is, as a colleague of mine has put it, like a science United Nations. It’s heads of research funding agencies from all over the world that get together and talk about global issues. My background in pedagogy made me a really good fit for helping out with the new Research Security Office and with the creation of the training modules that we’re going to talk about today.

Daniel Smith: Now, before we get into the specifics of those modules, can you briefly describe the importance of research security and the related requirements in general?

Mike Steele: This is a really important question, and if you’d asked me a handful of years ago about the importance of research security, as someone whose area of research is teacher learning, I would’ve thought research security had nothing to do with me: clearly, research security relates to people who are working in high-risk technological fields, and that’s not what I do, so I don’t need to worry about that. Well, in an increasingly changing and more global world, understanding research security is important for everybody who’s involved in the US research enterprise. Just like we have conversations with colleagues, and collaborators, and grad students, and other trainees about responsible conduct of research, we also need to promote a culture in which research security is a regular topic of conversation as we do our work.

We should be asking and answering questions like: who has access to our data and our systems? How are we promoting international collaboration? Then this next question is one that I don’t think would’ve ever occurred to me, but is increasingly important: when we travel to disseminate our findings, what data are we sharing and with whom? Who’s in the room? Who has access to those data? This idea of research security is really about how we safeguard the research ecosystem against the misappropriation of research and development to the detriment of national or economic security, and how we protect it against related violations of research integrity and foreign government interference. What it really comes down to is making sure that researchers have protection and that the intellectual ideas and results they’re producing are used in ways that have integrity, that they’re proud of, and that are not a misappropriation or misinterpretation of the research. That really gets at the underlying issue of public trust in science.

Daniel Smith: You alluded to how these requirements have evolved a bit over the years, and I recognize that there may be other folks out there who currently feel unsure about how these requirements apply to them. I want to talk a bit more about who these requirements apply to. Can you talk about who needs to adhere to the federal research security requirements?

Mike Steele: The requirements that were codified in the CHIPS and Science Act cover PIs, Co-PIs, and senior personnel who are applying for and receiving federal funding. The requirements written into that law follow National Security Presidential Memorandum 33 (NSPM-33), which was issued in January of 2021. That memo identified research security training as one of four elements of a research security program required of institutions meeting a particular threshold of federal funding. The CHIPS and Science Act then adds onto that and says, “Just like responsible conduct of research, this is training that we want everybody who’s involved in the research ecosystem and receiving federal funding to have.” That includes folks at universities, and it includes folks at national labs. For me, if I’ve got graduate students who are working on federally funded projects, or undergrads who are working on federally funded projects, they’re subject to these requirements as well.

Daniel Smith: That’s really helpful context. Since research security is a new and evolving area, what are the specific training requirements for those individuals?

Mike Steele: Institutions have the flexibility to determine what counts as adequate training in research security for their own researchers, just like we do with responsible conduct of research. We’ve got some institutions that have homegrown programs that they engage in, some that are online, some that are a mix of online and in-person workshops. Research security training is intended to work much the same way. Like responsible and ethical conduct of research, some institutions actually already have research security trainings for their affiliated researchers, and that’s great. Others, and I’d say many others, do not. We commissioned these modules that we’re talking about today to support institutions in having access to high-quality research security training.

Daniel Smith: I know a few groups were involved in the process of developing those modules. Can you share some background on the process and the different groups who were involved?

Mike Steele: This was a really unique program for the National Science Foundation to administer. Those listeners who are familiar with NSF know that NSF puts out solicitations for proposals. It’s a field-driven enterprise. We receive proposals, they go through a merit review process, and we award grants for people to do the work. It may surprise some people to learn that the research security training modules actually went very much the same way. There was a solicitation that was developed, and proposals were received in the four areas that were described in the legislation. You could submit a proposal for one of the four areas, or you could submit proposals for all of them. All of those proposals went through a merit review process. They were evaluated on their intellectual merit and their broader impacts. We made four awards to four different teams who responded to that solicitation and went through the merit review process.

While NSF led the work, it was co-funded by a host of U.S. government agencies, including the National Institutes of Health, the Department of Energy, and the Department of Defense. The four awarded groups, and I’ll name them in a moment, were also advised by a content expert group with representatives from all of those agencies, plus individuals from the FBI and the National Counterintelligence Task Force. There was really a powerhouse of research security expertise from the U.S. government that was brought to bear and made available to these four teams. In addition to that dream team of government experts, the four teams that we awarded had serious content expertise in their own right.

The first module, which focuses on what research security is and why it’s important, was developed at the University of Alabama in Huntsville; the lead PI there was Dr. Tommy Morris, who in particular is an expert on cybersecurity. The second module focused on disclosure. That one was led by Dr. Kevin Gamache at Texas A&M University. Texas A&M has a really strong reputation in the field as a university system that’s been incredibly thoughtful and proactive about research security. Kevin brings both an academic perspective and, prior to arriving at the university, service as an officer in the military. He’s got a wide range of expertise.

The third module focused on managing and mitigating risk. That one came from a broad consortium of universities led by Dr. Lisa Nichols, who’s currently at the University of Michigan, with a wide variety of institutions involved, including the University of Pittsburgh, Penn, and a handful of others as well. The fourth module was interesting in that it was developed by a different sort of organization, Associated Universities, Inc., which has strong ties to our national laboratory infrastructure. That brought a really important perspective to the work. That fourth module, on international collaboration, was led by Dr. Kevin Doran. Those were the four teams that, over the course of about 12 months, developed, tested, and ultimately created these modules that are available to the community.

Daniel Smith: That’s great to hear about all of the different subject matter experts who are involved in the development of these modules. Going back to what you were talking about previously regarding the flexibilities that the federal government provides for research security training, do learners need to complete all four of the NSF modules to fulfill the research security training requirements? Or is it up to institutions to determine how they utilize them?

Mike Steele: It is absolutely up to institutions. We consider the set of four modules to be the most comprehensive and thorough way to meet the requirement; that’s really what we put forward as the gold standard. But certainly institutions can make determinations about what’s needed. They may even differentiate across different researchers. For example, the module on international collaboration may not be necessary for people who aren’t actively engaged in international collaboration. An institution may say, “Okay, if you’re doing that sort of work, make sure you hit module four. If not, just the other three will be fine.”

I will also mention that the content is differentiated in different spots for researchers. There’s special content available for graduate students and undergraduate students, but also for sponsored program offices and vice provosts for research and administrative officials to be thinking about at that higher level, those sorts of things. I think everybody in a university sponsored program office or a vice provost for research office should be exposed to all four of these. In terms of what you do for your rank and file researchers, that’s an institutional decision.

Daniel Smith: Then on top of that, how often and when do personnel need to complete research security training?

Mike Steele: CHIPS and Science doesn’t specify the interval for retraining, but we’re recommending revisiting the trainings each year like we do with most of our other refresher trainings. Again, if institutions have their own training modules or want to augment those in any way or want to interleave those with these modules at different frequencies, that’s something that they’re free to do as well.

Daniel Smith: Now, in terms of access, can you talk a bit about the different options for where people can take these modules?

Mike Steele: We are really committed to making these freely available to the community. You can go and find everything you need for the modules at rst.nsf.gov, RST for research security training. There you can find the four modules in a form that you can take interactively online right within your browser and receive a PDF certificate of completion at the end. But what we think most institutions will want to do is to integrate them into their existing training courses through a learning management system, or LMS. This would allow university and lab personnel to authenticate using their local credentials so that institutions can track completion in a way that works for them.

For that purpose, on that same page, there are common file packages available. They’re SCORM files that you can download and integrate within your institution’s LMS ecosystem. Institutions can even customize the modules to add links to local policies, whether at the state level or for their university system, or other local content that’s related to research security. We have contracted with a tech team. Listeners, if you’re in there trying to figure out how to do that work and you run into any roadblocks, please send a note to the contact noted on the page and they’ll get right back in touch with you and support you in integrating that work into your systems.

Daniel Smith: Wonderful. We’ve gone over a lot of really helpful information, but I’m sure listeners may have additional questions. Are there any additional resources where people can learn more about research security and the modules that we discussed today?

Mike Steele: Yes, and if you go to those module pages, rst.nsf.gov, there’s a link there to NSF’s main research security page, which has a wealth of resources to support this idea of building a culture of research security.

Daniel Smith: I’ll be sure to include a link to that website in our show notes. Before we turn it over to Emily, do you have any final thoughts you would like to share that we have not already touched on?

Mike Steele: As I mentioned, when I started out early in my career, I thought research security didn’t have anything to do with me. I understand now, as I’ve learned more about the system and read some of these reports that have been produced, that the dynamic ecosystem our universities and labs exist in is constantly changing. For example, in my capacity as a university professor, I receive emails all the time from students who are interested in coming to work with me from overseas. They often say that they’re fully funded and just need space and some of my time to collaborate. I’ve been receiving these emails for years, almost for as long as I’ve been a university professor.

In the past, I may have just said yes to a collaboration if I had the time and space to do so. I would’ve provided the student with a workspace. I would’ve helped them get credentials and access to our university systems. In most cases, they were there to do the work as advertised. But there were questions I wasn’t asking. Who’s funding them? Were there any requirements that they share the research data they’ve gained from my team back to the funding program or government that sent them to me? How, if at all, did we vet their access to university systems? I didn’t think about those questions. But now that we’ve got documented cases of data theft from these vectors, I think very differently about the work. We still want to promote collaborations, and particularly international collaborations. That’s how the important science that’s going to change the world gets done. But just as we’ve built cultures of responsible conduct of research and cultures of non-discrimination over the years, we also need to build a culture of attending to research security and research integrity. That’s what these modules really aim to do.

Daniel Smith: Thank you, Mike. I greatly appreciate you coming on the podcast to talk about this important topic.

Mike Steele: Thanks. It’s been a pleasure and I look forward to hearing from the community how these modules are serving them.

Daniel Smith: Now that we have a better understanding of research security, the associated training requirements and NSF’s research security training modules, let’s hear from Emily regarding the CITI Program’s training options and some further considerations for how you can comply with these training requirements. Welcome to the podcast, Emily.

Emily Bradford: Thanks for having me.

Daniel Smith: Before we talk more about research security training, can you tell us more about yourself and your work at the University of Kentucky and CITI Program?

Emily Bradford: I got into research compliance about five years ago, starting with clinical trials compliance, and then I moved into conflicts of interest. More recently, I acquired some responsibility for research security and export controls. As part of that, I’m working with a really great group of people who are trying to build a research security program that’s required by NSPM-33 and the CHIPS and Science Act. In my copious free time, I’m a subject matter expert for CITI, which means that as CITI builds their training modules related to conflicts of interest and research security, I’m in the background fact checking. I’ll say that’s a really curious position to be in right now, because the federal regulations for both COI and research security are still being developed. We don’t know what the final versions of any of that will look like yet, and yet we still have to implement this training.

Daniel Smith: We just heard from Mike Steele at the National Science Foundation regarding the research security training requirements and NSF’s research security modules, which are available via CITI Program and elsewhere. But I want to hear more about CITI Program’s Research Security Advanced Refresher course and Undue Foreign Influence course. First, can you tell us more about how the NSF’s research security training modules and CITI Programs courses complement one another?

Emily Bradford: Sure. I like to think of these courses in the order in which they were developed. CITI’s Undue Foreign Influence course came first, and it really came out at the beginning of all of the concerns about foreign interference, and it preceded the release of NSPM-33 and the CHIPS and Science Act, which of course are what’s driving the current research security training requirements. It’s a really good foundational course, I think, for anyone who’s engaging in the international research space. Now, following the release of NSPM-33 and CHIPS, both the NSF and CITI began developing their own training. Just as we heard from Mike, the NSF developed four modules including an introduction to research security, the importance of disclosure, risk mitigation and management, and the importance of international collaboration. All of those modules are easily digestible and are a pretty high-level overview of the topics.

CITI’s approach was a little bit different, and they created eight modules that followed point by point the recommendations from the federal guidance, and they go into more depth and more detail on each of the topics. It’s really convenient that CITI is offering the NSF modules as a basic course and then offers their own content as the advanced refresher modules. Because keep in mind that this training needs to be done annually, which means that this configuration really allows covered individuals to see new in-depth training each year if it’s used as a refresher. For people who are already using CITI or might be considering it, this provides a convenient way of being able to assign specific courses and also to certify completion.

Daniel Smith: Can you tell us more about the topics covered in the Research Security Advanced Refresher modules? I know you mentioned that they take a deeper dive into those eight areas, but can you share a little bit about what those areas are?

Emily Bradford: CITI modeled their research security course off of the specific risk areas called out in CHIPS, and those are cybersecurity, international collaboration, international travel, foreign interference, proper use of funds, foreign gifts and contracts, and disclosure and transparency. I’ll give you some examples. In the cybersecurity module, for example, learners will be exposed to the concept of controlled unclassified information, and they’ll also get a crash course on the Cybersecurity Maturity Model Certification, or CMMC, requirements that are coming down the pike. Additionally, the foreign interference module goes into some depth on the differences between foreign entities of concern versus countries of concern and gives some practical steps for how institutions can do some of that screening on their own.

Daniel Smith: To talk some more about the general recommendations: I know you already talked about the sequencing of the courses, which makes sense, but can you say more about whether learners should complete all of the modules, or is it really up to institutions to determine which modules they require learners to complete for CITI’s course, similar to the flexibilities Mike talked about earlier that NSF provides with their training requirements?

Emily Bradford: Right now the requirements are a little bit ambiguous. Institutions have quite a bit of flexibility at this point to determine what needs to be covered under a research security course, how to deliver that, and who needs to take it. The CHIPS and Science Act is pretty clear about what topics need to be covered. But again, there hasn’t been a lot of guidance beyond that about whether all of those topics need to be covered at once or whether you can take one topic each year for eight years; that really just hasn’t been articulated clearly yet.

Daniel Smith: Then to talk a bit more about who these courses are geared towards: we talked earlier about how they’re obviously targeted towards covered personnel and researchers, as well as research administrators. But could other people, such as students, benefit from taking these research security trainings?

Emily Bradford: As you said, they’re geared towards covered individuals, which are the people who are conducting the research. That’s really anyone who’s contributing in a substantive, meaningful way to any federally funded project. I also think they’re valuable for compliance officers. This topic is my bread and butter, but I learned a lot of new things taking the training courses, and I think that other people in administration would as well. It never hurts to learn more about this topic. Students are another potential group who I think would really benefit from the training. Our trainees are often exposed to light versions of these topics because they’re viewed as not having the ability to independently impact projects or independently share data or form international collaborations on their own. First of all, that’s just not true.

Second of all, educating students in this space is really how we set the culture for the next generation of scientists and academic leaders. I think we all want that culture to be informed, to be inclusive, to advocate for open and transparent research, and, here’s the key, to do it safely. We need to be teaching our students how to make research thrive in these shifting geopolitical climates, with a different set of rules than their mentors grew up with. Their mentors didn’t have to deal with research security, and these students will. Training them, I think, is a good idea.

Daniel Smith: From your perspective, what are some of the key considerations that should go into establishing a research security training approach at an institution?

Emily Bradford: Institutions are really going to need to look at their own culture and their own risk assessments to decide what training approaches are right for them. As an institution, you may need to ask if there is an appetite to have your faculty take four to six hours of training. That can be a hard sell. If an institution wants to create their own shorter training modules, which is allowed, they’re going to need to assess whether they can meaningfully cover the required material in a shorter format. Can you teach research security in a one-hour webinar? I don’t know the answer to that, but that speaks to your institutional risk and risk tolerance as well. If an institution is doing classified or let’s say controlled unclassified research routinely, they may feel that advanced training is more appropriate. If the research is in low-risk areas with minimal possibility of undue foreign influence, an institution may be able to meet the federal training requirements at a much lighter level.

If it’s going to be lighter, maybe they can package research security training with RCR or with conflict of interest training or any of the other mandatory training that their researchers already have to do. Another consideration that institutions need to grapple with is whether to include institution-specific material in their training. For example, federal guidance says that we need to have training on foreign travel security. Does that training need to be tailored to your own institutional policies and processes? Does your institution have a process for travel authorizations and prior approvals? Do they require the use of loaner laptops when you travel? Do they have a screening program if you’re visiting a foreign country of concern? All of those are institution-specific factors. How do you educate your faculty about that?

Daniel Smith: I think those are very helpful considerations. Building off of those a little bit, given your experience, do you have any advice for others who are navigating these federal research security requirements?

Emily Bradford: The best advice I can offer is, honestly, to network and find out what other institutions are doing and why. There are universities that are well ahead of the curve on this and have been offering training and running a functional research security program for a couple of years now. Nobody should have to reinvent the wheel. But what you do need to do is take some time to think about your institutional culture, conduct a risk analysis, consider who you have at your institution that needs to be involved in this, and then start building your processes and your policy off of that. The goal of all of these initiatives, including the training, is to advance your institution’s research as safely as possible in the midst of shifting risks. But those risks are very much specific to each institution, to each project, and to each covered individual. There’s not going to be a one-size-fits-all approach to any of it. It really just makes sense to look at what other people have done and figure out what you need to take, what you need to discard, and how to make it work for you.

Daniel Smith: In addition to the ongoing collaboration, are you aware of any additional resources that may be helpful to people?

Emily Bradford: I think there are a lot of resources available right now to people who are working in the research security space. The best information I’ve seen coming out is from the JASON reports (JASON is an independent science advisory group) and from the Council on Governmental Relations (COGR), which really does a good job of disseminating information to universities as recommendations and deadlines change. Because remember, this is still a shifting landscape. We don’t know the final versions of the regulations and we don’t have final deadlines. The NSF is also taking the lead on a lot of the initiatives, has a good website, and is really good about communicating expectations. My advice is to really keep an eye on the NSF, which, I should also say, disseminates the JASON reports as well. NSF and COGR are key here.

Daniel Smith: Wonderful. I’ll be sure to include links to those JASON reports and the NSF website and COGR in our show notes so that our listeners can check those out and continue to monitor them for more information. On that note, do you have any final thoughts you would like to share that we have not already touched on today?

Emily Bradford: At the risk of waxing philosophical here, I think my final thought is just to encourage people to remember that NSPM-33 came out to protect U.S. R&D efforts, and the CHIPS and Science Act was designed to strengthen science and technology efforts. They’re both designed to advance U.S. research. As we add layers of administration, create research security programs, increase paperwork, and require ever more training, we need to make sure that those are always in the service of advancing the research, that they’re proportional to the risk, and that we’re not creating so much fear and administrative burden that we actually end up inhibiting research. The goal here is to do research safely, and that needs to be the cornerstone of anything that we implement, including this training.

Daniel Smith: I think that’s a great place to leave our conversation for today. Thank you again, Emily.

Emily Bradford: Thank you.

Daniel Smith: Thank you all for tuning in. Be sure to check out the show notes, as I’ve included many resources with more information on research security and the training options Mike and Emily discussed today. With that, I look forward to bringing you all more conversations on all things tech ethics.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when episodes are newly released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.





Meet the Guests


Mike Steele, EdD – National Science Foundation

Mike Steele is an expert in the Office of the Chief of Research Security Strategy and Policy at the National Science Foundation, where he supports the development of research security training and the Office’s global outreach efforts.


Emily Bradford, PhD – University of Kentucky

Dr. Bradford is currently the Assistant Director of Research Compliance at the University of Kentucky.


Meet the Host


Daniel Smith, Associate Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Associate Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.