Season 2 – Episode 9 – Practical Uses of AI in Research
This episode dives into the world of AI and its practical applications in the world of research operations.
Podcast Chapters
To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.
- Podcast Introduction (00:00:26) Justin Osborne introduces the podcast and the guest, Charlie Fremont, discussing the episode’s focus on AI in research.
- Charlie’s Journey into Research (00:02:20) Charlie shares his unconventional path into clinical research and his current role at Washington University.
- Defining AI Terms (00:04:14) Charlie explains generative AI and its capabilities compared to other AI types.
- Generative AI vs. Predictive AI (00:06:09) Discussion of the differences between generative AI, which creates new content, and predictive AI, which recognizes patterns.
- AI Adoption in Research (00:07:20) Charlie reflects on the slower adoption of AI in research compared to its integration in healthcare.
- Real-Life Applications of Generative AI (00:08:35) Charlie shares his personal experience using generative AI to enhance productivity and efficiency in his work.
- Generative AI in Research Building (00:09:30) Charlie discusses audience poll results about the willingness to use generative AI in research roles.
- Challenges of AI Adoption (00:09:57) The need for compliance and legal reviews slows down AI implementation in research settings.
- Using Generative AI for Coding (00:10:19) Charlie describes using ChatGPT to assist with coding and building billing calendars.
- Bridging IT and Research (00:12:40) Discussion on how generative AI helps bridge the gap between IT and research staff.
- Everyday Applications of AI (00:14:01) Exploration of how generative AI can improve daily tasks and enhance productivity in research roles.
- Understanding ChatGPT (00:16:02) Charlie explains the functionality of ChatGPT as a mix of a search engine and a knowledge repository.
- Verifying Information from AI (00:16:30) Discussion on the importance of verifying AI-generated information and the concept of “trust but verify.”
- Caution with AI Use (00:19:08) Charlie highlights the need for careful verification of AI outputs, citing a case of misinformation.
- AI’s Cultural Integration (00:20:04) Discussion on how AI will eventually become ingrained in research culture, similar to Google and Wikipedia.
- Limitations of AI in Research (00:21:08) Charlie addresses various limitations and challenges of using AI in research settings.
- The Importance of AI Adoption (00:21:31) Discussion on the necessity of embracing AI in research to avoid being left behind.
- Risks of Using Generative AI (00:22:27) Concerns about the compliance and data security risks associated with generative AI tools.
- Data Confidentiality in AI Tools (00:23:09) Emphasis on the importance of not inputting confidential data into generative AI.
- Institutional Policy and Compliance (00:24:28) Exploration of the disconnect between corporate compliance policies and ground-level practices.
- Using AI for Secure Data Handling (00:25:56) Suggestions for using AI tools without compromising sensitive data through mock data.
- Generative AI and Cognitive Dependency (00:28:26) Discussion on whether generative AI will make users less capable or promote deeper reflection.
- Shifting Roles with AI Integration (00:30:26) How AI will change job roles rather than eliminate them in the workplace.
- Collaboration Between Experts and AI (00:32:04) The necessity of expert validation when using AI-generated outputs.
- Finding Balance in AI Capabilities (00:35:12) Discussion on achieving a balance between leveraging AI and maintaining compliance.
- Deep Work vs. Shallow Work (00:38:23) Understanding the difference between deep, shallow, and medium work in the context of AI.
- Preventing Burnout with AI Tools (00:40:25) AI’s potential to help maintain work-life balance and reduce burnout in high-demand jobs.
- The Challenges of Clinical Research (00:41:22) Discussion on burnout and high turnover in clinical research roles, emphasizing the need for support and training.
- AI’s Role in Reducing Workload (00:42:05) Exploration of how AI can streamline tasks and reduce stress for clinical research coordinators.
- Real-Life Applications of AI (00:43:58) Example of using AI to automate data entry tasks, potentially saving significant time.
- Same Team Segment: Connecting with Purpose in Research (00:44:40) Charlie shares experiences illustrating teamwork and dedication during the COVID-19 pandemic and in research challenges.
- Closing Remarks and Podcast Promotion (00:46:53) Thanking the guest and promoting the podcast, encouraging listeners to subscribe and explore more content.
Episode Transcript
Charlie Fremont: Instead of seeing it as completely replacing people, this fear always comes up with new technology hype cycles. And I was thinking you could kind of see generative AI as not just straight-up eliminating people, but giving yourself extra arms like Doc Ock from Marvel. It’s increasing your reach or your output, your ability to gain more volume. And you could see that on an institutional level as well, I think.
Justin Osborne: Welcome to On Research with CITI Program, your favorite podcast about the research world, where we dive into different aspects of the industry with top experts in our field. I’m your host, Justin Osborne, and I appreciate you joining. Before we jump in, as a reminder, this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guests.
At the beginning, you heard a clip from Charlie Fremont, our guest this episode. Charlie is an EHR application analyst at Washington University specializing in research billing. Charlie has been in healthcare for 15 years, with 10 of those focusing on research. He has spent the majority of his time on the operation side of research, with extensive experience in IT. Recently, Charlie has leveraged technologies like ChatGPT, Python, and Excel VBA formulas to successfully implement practical solutions to optimize research operations. I sat down with Charlie to talk about how the use of AI in research has led to these practical solutions.
We discuss everything from what generative AI means to practical applications and also limitations and risks inherent to these AI tools. If you’re a novice and not sure where to start with AI, I think this discussion can help remove some of the mystery around this seemingly overwhelming topic. And if you’re an expert already using AI in your current workflows, Charlie provides some practical examples that may ignite further uses in your own organization. So, without further ado, I hope you enjoy my conversation with Charlie. Hey, Charlie. Thank you so much for joining. Thanks for coming on the podcast.
Charlie Fremont: Thanks for having me, Justin. I always enjoy talking with you, and I’m excited to talk about this today.
Justin Osborne: Awesome. Well, to get started, tell us how you got into this research industry and then a little bit about what you’re doing now.
Charlie Fremont: Sure. I think, like a lot of people, it was kind of a convoluted path for me to get into clinical research. It actually started with me moving back home from Savannah. I had been working in mental health facilities, and just one day, I was working out in this powerlifting gym, and these guys approached me and said that they would love to have me work on their psychiatric unit. And it was the best option I had at the time. So I did it, and just by chance, I met someone there who started telling me about clinical research.
Justin Osborne: Interesting.
Charlie Fremont: And it sounded really interesting to me, and it had a lot of promise. So I pretty much just jumped into a post-bacc certificate program for clinical research. And right at the tail end of that, I met a couple of science recruiters. And shortly after, I was able to get hired on in the bone marrow transplantation division as a clinical research coordinator.
So that’s really how I got started in research. And then from there, this really great doctor, his name is Dr. Dandoy, Christopher Dandoy, he helped me get into some different process improvement projects. There was one on eConsent.
Justin Osborne: Okay.
Charlie Fremont: And doing that kind of stuff, it really felt almost magical to me. So I really fell in love with that kind of work. So, from there, I pitched some of the stuff I’d worked on in the process improvement project to the Office of Clinical Research. And there, I got certified in Epic Research Billing, thanks to the wonderful and talented team that worked there.
I also continued doing some process improvement projects there, in part thanks to COVID. From there, I got hired on at Washington University in St. Louis as a 100% IT-side application analyst. And that’s where I am today. And I also do some consulting work in my free time.
Justin Osborne: That’s awesome. So from powerlifting to operations and research billing. That’s fantastic.
Charlie Fremont: Yeah.
Justin Osborne: Well, that’s really interesting. And the topic again that we’re going to sort of dive into is AI, and that’s so broad and it’s so… such a general term used these days.
So I guess to sort of set the stage for our conversation here, I was wondering if you could kind of help us… from the sort of tech operation side, define some of these AI terms. So I do want to focus on what the kids are calling generative AI, but can you kind of help us understand what that is versus the other broad AI terms that are used?
Charlie Fremont: Yeah. First off, I’m really glad you asked this question. I’ve mostly just been using it, so I hadn’t really thought about articulating or delineating it before. You could separate it by saying generative AI focuses on creating new content. My specific experience with generative AI is with ChatGPT.
Justin Osborne: Okay.
Charlie Fremont: So that’s a program that’s owned by OpenAI. It’s a large language model, a type of neural network. And what it does is it generates human-like text based on prompts or requests that you send it. And then the most interesting and most useful thing to me is that generative AI can empower subject matter experts to create their own programs if they put the time into it. So I see it as serving as the translator between human language and computer language, also known as coding or computer programming language.
Justin Osborne: Okay.
Charlie Fremont: And also, yeah, it can help reduce time on tasks that are pretty time-intensive but not necessarily a deep level kind of work. So it can save people time.
Justin Osborne: So that’s generative AI, right. So what are some of the other AI, I guess, types?
Charlie Fremont: Yeah. Yeah. The other type is predictive AI, which recognizes patterns and then builds upon those patterns. So it’s less about creating new things. The other thing that generative AI can do that’s pretty exciting but not quite as pertinent to my work is actual image creation, movie creation, art creation.
Justin Osborne: Oh, that’s interesting.
Charlie Fremont: So that would definitely separate it from older types of AI.
Justin Osborne: Okay. Okay. So I guess knowing that, and I want to get… dig in more to a lot of what you just said with the specifics, but before we go deeper, I’m interested in your thoughts on this in terms of the adoption of AI within research, right.
Over the past few years, AI has blown up everywhere, but in research, we always think of ourselves as being innovative by nature. Yet if you look at the adoption curve, I feel like most of us working in research are still in the sort of late majority phase. What’s your experience been with AI and its use?
Charlie Fremont: So yeah, that’s really interesting. I feel like the lag time that you touched on, I feel like I’ve noticed that a little bit in other areas too. So I definitely would agree with that sentiment.
However, I presented on this the other day at a Research Billing Compliance Summit, a virtual summit. And in preparing for that, one of the things that I read on was Deloitte had done a poll, and surprisingly, 75% of leading healthcare companies have already started incorporating generative AI in their operations.
Justin Osborne: Interesting.
Charlie Fremont: When I asked the audience that, most thought… if I’m recalling correctly, most thought that it was around 30%.
Justin Osborne: Okay.
Charlie Fremont: So yeah, it’s actually-
Justin Osborne: So there’s a disconnect?
Charlie Fremont: Well, that’s healthcare. Healthcare in general.
Justin Osborne: Okay. Yeah. Not specific to research, but more on the clinical side, healthcare is utilizing AI a lot more. Yeah, I feel like AI is sort of coming at research from the top down, right. It’s coming into organizations at this theoretical highest level: where’s the most impact that we can see from it? But then it has to drill down to the people that are actually doing the jobs and doing the work itself. So can you walk us through some of these real-life examples and use cases that you’ve developed in your role?
Charlie Fremont: Personally, I have used generative AI to help me create a program that was able to double my… I build billing calendars in Epic; that’s one of the things I do. It helped me to conservatively double my output. So-
Justin Osborne: Wow.
Charlie Fremont: … it’s kind of given me the reach probably of maybe two to three builders, depending on their experience level, like maybe three brand new builders or something. And then I also wanted to add. At the end of the presentation I gave yesterday, I asked the audience… my second poll question was, “Do you see yourselves…”
This was research billing compliance. So most of the people there were… are involved in research. I asked them if they saw themselves using generative AI in their own work in the next year, and that was answered by 150 people or so. And the result was 70% said yes.
Justin Osborne: Oh, that’s good.
Charlie Fremont: So I thought that was really interesting.
Justin Osborne: That is interesting.
Charlie Fremont: It is worth noting. I think one of the reasons for the slower adoption curve in research is that we really do need to run a lot of things by legal and compliance before we do any kind of stuff with this technology. So I think that might be one of the reasons it’s a bit slower, and a necessary one. It’s a pretty regulated industry.
Justin Osborne: That’s true. That’s true. Yeah. And I think that that’s a… The cautious approach, I think, makes sense. That’s how we approach almost everything in this industry. So that’s a good point. I know that you said that you have been… you’ve used generative AI to help you build these calendars. Can you talk us through a little bit more detail about some of these real-life examples or case studies that you’ve seen in your role?
Charlie Fremont: What I’ve done with it with generative AI is, specifically, with ChatGPT, I use it as a translator, as I kind of touched on before. Essentially, I could take logical statements, which are… can be considered pseudocode or quasi-code, and I don’t know Python personally.
So what ChatGPT would do is make the code for me that I could then run on my local computer safely. That program’s not communicating with the internet or anything. And that allowed me to build these billing calendars at an accelerated rate with fewer errors.
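To make the workflow concrete, here is a minimal sketch of the kind of locally run Python script Charlie describes ChatGPT producing: it pivots flat schedule-of-events rows into a billing calendar grid. The column names, mock rows, and layout are illustrative assumptions, not Washington University’s actual Epic build.

```python
from collections import defaultdict

# Mock rows standing in for a protocol's schedule of events --
# no real study or patient data is involved.
rows = [
    {"procedure": "ECG",       "visit": "Screening", "bill_to": "Study"},
    {"procedure": "ECG",       "visit": "Week 4",    "bill_to": "Insurance"},
    {"procedure": "CBC panel", "visit": "Screening", "bill_to": "Study"},
    {"procedure": "CBC panel", "visit": "Week 4",    "bill_to": "Study"},
]

visits = sorted({r["visit"] for r in rows})

# Pivot the flat rows into a grid: procedures down the side,
# visits across the top, billing designation in each cell.
grid = defaultdict(dict)
for r in rows:
    grid[r["procedure"]][r["visit"]] = r["bill_to"]

header = ["Procedure"] + visits
print(" | ".join(f"{h:<12}" for h in header))
for procedure, cells in sorted(grid.items()):
    line = [procedure] + [cells.get(v, "-") for v in visits]
    print(" | ".join(f"{c:<12}" for c in line))
```

Because a script like this runs entirely on the local machine, nothing in the spreadsheet leaves the computer; only the pseudocode describing the transformation is ever shared with ChatGPT.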
Justin Osborne: Okay.
Charlie Fremont: So it really… it happened at the perfect time. It really helped me be more productive in a time where we had an outage, and I was able to more than cover that. I think a prior record was maybe 20 builds or something in a 14-day period, and I was able to go past 60 in that timeframe.
Justin Osborne: Wow. Wow. And that’s because of this tool. There’s really no other. So just to help define terms here again because I’m not the techie myself. So you mentioned Python.
Charlie Fremont: Yeah.
Justin Osborne: That’s not a Harry Potter reference. What are you talking about?
Charlie Fremont: Oh, yeah. No, sorry. Python is a popular computer programming language, especially in the fields of AI, data analysis, and apparently cloud programming, though I don’t know much about that one, but…
Justin Osborne: Okay.
Charlie Fremont: Yeah, so it’s a really popular language for that kind of stuff. And yeah, I’m not fluent in that language, so that’s what ChatGPT helped me with. It was able to give me the actual code. I could take my idea, turn it into code with ChatGPT, and then run that code, run that Python code on my own computer safely without communicating any data out to the internet.
Justin Osborne: So to kind of stick on this Python thing, at least in my experience working at different institutions and organizations in research, there’s always been a disconnect between the IT folks, the people who know the code, and the research staff, because there’s a language barrier, there’s a…
Charlie Fremont: Yeah, yeah.
Justin Osborne: The IT folks aren’t necessarily researchers obviously. And so this, to me, seems like, at the very least, this tool is building bridges between the two. Not that it’s pushing any side out to say this isn’t necessary anymore, but the fact that you can use this tool to kind of develop this coding, I mean, that seems like a big deal.
Charlie Fremont: Yeah, I agree 100%. I think that’s the most powerful thing because like the game of telephone, there’s always-
Justin Osborne: Yes.
Charlie Fremont: … so much lost in translation, and people on both sides of the work are brilliant, but it’s like the person in computer programming doesn’t… you’re probably describing an alien world to them and vice versa.
Justin Osborne: Right.
Charlie Fremont: So I think it can really cut down the time delay between having ideas and executing them. So beyond helping subject matter experts make programs, it’s really useful in things like summarizing transcript files, assuming that transcript file is public access. So any kind of work that is time intensive but not necessarily extremely complex or deep work, I would say.
Justin Osborne: Okay. Well, that’s good, and I mean, this is really what I want to get to is to sort of highlight all the different uses of this tool because I feel like right now we’re seeing a lot about AI. It’s obviously everywhere, and it’s the number one topic I feel like in research that people are talking about all the time, but it’s so often just talked about in such high theoretical terms like I said before, and I feel like it’s talking about large data set reporting or emulating clinical trials, which is fine, and that’s all important stuff.
But I’m more interested, personally, in the sort of day-to-day stuff, right. Everything that you’ve described so far. For the people listening to this, who all work in different roles in research, what are some ways generative AI could not just remove barriers but, like you said, let them do things more efficiently and therefore get more work done? That’s the goal, right. We’re all trying to do more work in less time.
Charlie Fremont: Absolutely. Yeah. I think kind of one way to think about it is, you know how people were when Wikipedia first came out?
Justin Osborne: Yeah.
Charlie Fremont: People were like, “Oh yeah, you absolutely can’t trust that at all. You might as well not even use it.” But now, most people think it’s a fine place to start, right.
Justin Osborne: Oh, absolutely.
Charlie Fremont: You can almost see ChatGPT as sort of a mix between a search engine and Wikipedia that’s actionable. So I think it’s a great place to get started, and a great place where you’re not breaking your workflow. It’s able to continuously help you.
For example, I didn’t know anything about setting up Python on my computer or anything like that, and then I forgot a couple of things about Excel. It’s great for helping with all that kind of stuff. So I think it can really help people stay on track and not get lost in little trip-ups.
Justin Osborne: Well, and so-
Charlie Fremont: I think it’s great for that.
Justin Osborne: … help us understand a little bit. I mean, again, because you’ve been using this, and you’ve been in this world for a while now and created a couple of things that have been… had a meaningful impact on, especially the clinical research billing stuff.
Help us understand, for those of us that don’t use this tool, what does it mean when you say that it’s sort of a search engine mixed with Wikipedia and that it’s… Where does the information come from? I mean, I know that they… that what do they call this? A black box for…
Charlie Fremont: Yeah.
Justin Osborne: They like… Explain that to us.
Charlie Fremont: Anything… Yeah, you could… Okay. You could think of it sort of as it’s something that can scrape the web from anything that’s published and start to form these networks or libraries. So it has vast amounts of data in there. It learns from every interaction. So anything you put into it’s learning from. So, in that regard, say yes.
Say I wanted to look up how to set up Python properly. I could do that from a search engine and looking through articles carefully or watching a video. Say the video is 15 minutes long. I could get the same answer likely from ChatGPT in 30 seconds. So, in that regard, I can kind of see it as almost a souped-up search engine, and it can compile all the potentially useful pieces of information and help you act on it.
Justin Osborne: So then… So as I understand the black box side of this, nobody can really see on OpenAI’s side how their algorithms are built. You don’t know where they’re pulling information. So I guess-
Charlie Fremont: Right.
Justin Osborne: … to your point, how do I feel comfortable knowing that when I search on ChatGPT, for example, how to set up Python on my computer, that it’s actually pulling the right information. Because again, if you do a Google search, you’re bound to find articles or information that’s not necessarily 100% accurate, right. So if that’s all built into the algorithm, what is your… what’s the method to, I guess, verify?
Charlie Fremont: So, with creating computer programs, my method was to do extensive testing and break testing. Break testing is when you intentionally use input data that would not be expected by the program and see how it reacts.
Justin Osborne: Oh, okay.
Charlie Fremont: So really lots of time testing. So I think, in general, the way to do it would be if you’re using generative AI to help you get started with something, you’re inherently going to be spending more time verifying. So it’s a great way to get started.
But yeah, I wouldn’t tell people to run with it wholesale. But to stay on that Python example, the way I would verify it is, if it told me to do something and it wasn’t there or it didn’t work, I know that it doesn’t work.
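As an illustration of the break testing Charlie describes above, here is a small sketch: a hypothetical helper of the kind ChatGPT might generate, deliberately fed inputs it was never designed for to see how it reacts. The function and its inputs are invented for illustration, not taken from his actual program.

```python
def parse_visit_day(label: str) -> int:
    """Extract the day number from a visit label like 'Day 28'.

    A hypothetical stand-in for a ChatGPT-generated helper.
    """
    prefix = "Day "
    if not isinstance(label, str) or not label.startswith(prefix):
        raise ValueError(f"unexpected visit label: {label!r}")
    return int(label[len(prefix):])

# The expected input works...
assert parse_visit_day("Day 28") == 28

# ...and break tests feed in inputs the program would not expect,
# confirming it fails loudly instead of silently producing a wrong
# number in a billing calendar.
for bad in ["", "day 28", "Day twenty-eight", None]:
    try:
        parse_visit_day(bad)
        print(f"BUG: accepted {bad!r}")
    except ValueError:
        print(f"ok: rejected {bad!r}")
```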
Justin Osborne: Yeah, yeah.
Charlie Fremont: But if I ask it a data question, I wouldn’t want to just run with it. There was a lawyer who recently got into hot water for that. He cited a case that didn’t actually exist. ChatGPT had fabricated it.
Justin Osborne: Wow.
Charlie Fremont: Yeah, it’s a great concern.
Justin Osborne: Oh, yeah, yeah. I’m sure all the compliance people listening to this are like, “See, that’s why we don’t use it.”
Charlie Fremont: Yeah.
Justin Osborne: So trust but verify. I mean, that makes sense because even… I feel like even at the end of the day, if you’re Googling something and you’re doing this kind of work on your own, you have to do the same thing, right.
Charlie Fremont: Yeah. Even playing devil’s advocate, most people aren’t really that research literate. They’ll read a research summary, but most of us don’t know enough about statistics to know whether the proper method was used.
Justin Osborne: That’s true. That’s true. That’s a good point.
Charlie Fremont: So just an article on the internet isn’t super validated either. Just a kind of devil’s advocate argument there.
Justin Osborne: That’s a good point. Well, and I think to that point, Google has become ingrained in our culture now, so that’s an acceptable thing. Kind of like you’re talking about the sort of-
Charlie Fremont: Yeah, Wikipedia.
Justin Osborne: … evolution of Wikipedia, right. And I feel, eventually, we’ll get to the point where AI is the same thing. It’s just part of how you look stuff up and how we gain knowledge, and that sort of trust behind it is always going to be a little iffy, I feel like. But we’re just not there yet. We’re not to the point where you can say Google it, and you’ll get that information.
Charlie Fremont: Yeah. And it’s extra tough like you touched on. I believe OpenAI’s algorithms are proprietary.
Justin Osborne: Of course. Right.
Charlie Fremont: So there’s… Yeah.
Justin Osborne: So we’ll never… Yeah. Yeah. Yeah.
Charlie Fremont: Yeah. So that is… it is tough. Yeah.
Justin Osborne: Yeah. That’s interesting. Well, okay, so I do want to… we’ve kind of mentioned some of the, I guess, downsides. I did want to ask you because, again, like we said, everyone in this industry has been obsessed with AI recently, right. And if you want to present at a conference, all you have to do is mention something about AI in your topic, and they’ll say, “Come on in.”
Charlie Fremont: Yeah. That is pretty much exactly how it happened for me.
Justin Osborne: Well, yeah, it’s a hot topic. But I also feel like we’re at the same time seeing a little bit of a deflation in the hype of AI. I think we’re sort of leveling out a little bit. So I wanted you to talk a little bit about some of the limitations. You’ve hit on some of them, but can you talk through some of the limitations specifically in a research setting?
Charlie Fremont: Yeah, I think we’re kind of in a unique spot where, based on that 75% use figure from Deloitte, if we don’t embrace AI, people could get left behind in the dust, right. An analogy I was thinking of: say we had a company and we refused to use semis and kept using horses to transport goods; it’s not going to go too well for us.
But on the same note, if you kind of view generative AI as a rocket, if that starts to go a little bit off course, you can end up way further off than a horse would in that time. So it’s kind of one of those things where we can’t really afford to not use it, but we do have to be careful. So I think it’s really a question of institutional policy and safer use, like coming up with safer use guidelines, I think.
Justin Osborne: Yeah. No, that’s a good point. And I mean, it is inevitable, but at the same time, I know that, especially from the compliance side, understandably, there is some hesitation with this tool. And I think some of that, from what I’ve heard, goes back to your point about not necessarily knowing where the information is coming from and that it’s…
Charlie Fremont: Yeah, yeah.
Justin Osborne: Talk about when you put something into, say… Just, again, using ChatGPT as the example here. When you throw something in there, as far as I know, there’s a free version or whatever, and it’s learning from all the inputs that go in there.
Charlie Fremont: Right.
Justin Osborne: So if I put data in there that shouldn’t go in there, you can’t get it out.
Charlie Fremont: Right. That, to me, and I’m glad that you said that, I think that is the absolute biggest risk when using ChatGPT. I have heard some people call them walled garden instances or one-way gate setups with their API configuration. So that would be another answer. The walled garden setup is kind of what it sounds like. You can pull information in, but whatever you’re putting in should not go out.
So I think that’s kind of the answer to that biggest risk. From an individual perspective, the biggest thing would be: do not put any data in there that’s confidential or proprietary. Of course, PHI should never go in there. If you look at ChatGPT as, like you touched on, a black box or a hive mind, or, for anybody really into Star Trek, the Borg Collective, anything you put in there is going to be assimilated into that collective. So I see that as the biggest risk. Data leaks would be the concern.
Justin Osborne: Okay.
Charlie Fremont: So essentially, don’t put any of that stuff in there or use an API instance that your institution would likely have to pay for and set up.
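As a rough illustration of the “one-way gate” idea, here is a sketch of a thin wrapper that screens prompts for identifier-like patterns before they reach an institution-hosted API instance. The gateway URL, endpoint, and patterns below are hypothetical; a real institutional setup would involve far more thorough controls than two regular expressions.

```python
import re
import requests  # third-party; assumed available

# Hypothetical institution-hosted endpoint -- not a real service.
GATEWAY_URL = "https://ai-gateway.example.edu/v1/chat"

# Crude identifier-like patterns; a real setup would be far stricter.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-shaped
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),  # MRN-shaped
]

def send_prompt(prompt: str) -> str:
    """Refuse to send anything that looks like identifiable data."""
    for pattern in PHI_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt appears to contain PHI; refusing to send")
    response = requests.post(GATEWAY_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    return response.json()["text"]
```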
Justin Osborne: Yeah. Well, and I can see where the risk comes in because, again, there’s such a disconnect, I feel like, from the institutional policy level. We’ll use a healthcare setting, right. So the corporate compliance office has a policy, maybe, for example, that you obviously shouldn’t put PHI data into this kind of tool, but when you get down to the ground, say your team, the IT folks, are like, “Hey, we need to build this report. Can you run this report for me?”
And you ask one of the staff to do it, that’s a long way from the corporate level policies that people are reading. So then they’re like, “Hey, I’ve used this in personal uses, and I know that it’s helpful. I’m just going to use this tool to run this data real quick.” And they plug in the report and they get an efficient result. But then, now that data’s out there.
Charlie Fremont: Yeah. So definitely refer back to that disclaimer I said before. But also, with my experience, the way around that too would be, you can just… That’s why I was asking ChatGPT to make Python code for me. And that way, I could just tell it where the information was in my document and to help me make a program that I could run on my computer even if my computer was disconnected from the internet.
Justin Osborne: Got it.
Charlie Fremont: So none of that’s going out. I told it where it lived in the Excel sheet, but I’m not giving it any of that data, right.
Justin Osborne: Got it. Got it. Yeah, that makes sense.
Charlie Fremont: Or even using mock data to begin with, you know, mock data in a certain format.
Justin Osborne: Yeah.
Charlie Fremont: So there are different ways of going about it to be sure that you’re secure. But yeah, I think that’s a good example that I can run this program without it even being connected to the internet. So in no way is anything that I’m handling going out.
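Charlie’s mock-data approach can be sketched in a few lines: generate fake rows that match the real spreadsheet’s shape, so the layout can be described to ChatGPT and any generated code can be tested without real records ever being involved. The column names and values here are illustrative assumptions.

```python
import csv
import random

random.seed(0)  # reproducible fake data

visits = ["Screening", "Week 4", "Week 8"]
procedures = ["ECG", "CBC panel", "MRI"]
charge_codes = ["A100", "B200", "C300"]

# Write rows that mirror the real sheet's columns but contain
# only obviously fake values.
with open("mock_schedule.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["subject_id", "visit", "procedure", "charge_code"])
    for i in range(1, 6):
        for visit in visits:
            writer.writerow([
                f"MOCK-{i:03d}",
                visit,
                random.choice(procedures),
                random.choice(charge_codes),
            ])

print("wrote mock_schedule.csv -- the structure can be shared; the data is fake")
```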
Justin Osborne: Well, I like that. I mean, I feel like an important piece that is sort of going without being said here is the intentionality behind it, right. This is a very powerful tool that we’re talking about, obviously, and its capabilities seem endless at this time, but unless you’re actually, like you’re doing, taking the time to think through the considerations of this-
Charlie Fremont: I really do.
Justin Osborne: Yeah. I feel like that’s important for people to understand and take out of this, that you… this isn’t just something that you just kind of…
Charlie Fremont: Yeah, it’s easy. I mean, I think I was thinking of a couple of different examples.
Justin Osborne: Yeah.
Charlie Fremont: One would be, I think weapons are necessary to maintain peace, but they can also contribute greatly to violence.
Justin Osborne: Right.
Charlie Fremont: So it’s sort of, like you said, intentionality. The other example would be, okay, you know about HIPAA violations?
Justin Osborne: Yeah.
Charlie Fremont: You could do a HIPAA violation on yourself if you looked yourself up in your Epic chart, right.
Justin Osborne: Yes.
Charlie Fremont: And you could do that in two seconds now. Think about how much more intentional you’d have to be back when we used paper records to find your own chart-
Justin Osborne: Oh, yeah.
Charlie Fremont: … and open it up and look at it, right.
Justin Osborne: That’s true.
Charlie Fremont: So yeah, the ease and speed of use is kind of what makes it a bit scarier in that regard. But yeah.
Justin Osborne: That’s a good point.
Charlie Fremont: If you slow down before you do anything and really think about it, I think that’s how you mitigate that.
Justin Osborne: No, that’s a great point. And actually, that kind of makes me think of this… another question along these same lines of, people have been talking about the AI capabilities and whatnot and how easy it is to summarize documents, like you said, right-
Charlie Fremont: Yeah.
Justin Osborne: … and learn from things, whatever. And a lot of people argue that this generative AI stuff is going to dumb us down even more. But it does, to your point, offer another opportunity to be more thoughtful and have more personal reflection if you’re taking your time and being intentional. So can you kind of give me your thoughts on that? Do you think that this tool is going to make us all dumber, lazier, or is it going to have the opposite effect, or somewhere in the middle?
Charlie Fremont: I absolutely love this question for a few reasons, and I’m really glad that you mentioned it. And also, in my mind, it is the number two risk. So I think that it’s great that you asked it. One reason I love the question is the over-dependency thing reminds me of the movie Idiocracy. If you’ve ever seen it, or for anybody-
Justin Osborne: Yeah, there’s the two of us that have seen that movie. Yes.
Charlie Fremont: Yeah. So this guy gets transported to the future, where everyone has collectively become dumber, partially because of technology, and they’re watering their plants with Gatorade. So they’ve overextended the idea that, “Oh, well, since it’s good for us, it must be great for the plants.” And I think in that regard, as I touched on before, generative AI is a great springboard or something to get started with quickly.
But yeah, it’s not necessarily a great finished product. So kind of, as I said before, I think it’s great to use, but then you do have to spend more time checking it. Or say in the case of making a program, the ideal setup would be for a team to have a computer programmer that could then really look it over and make sure it’s okay. To have that expert do the validation would be ideal I think.
Justin Osborne: I love that. And I actually think that that is… that goes again to the disruptive nature of this tool, I feel like, is not… Because, again, a lot of people are concerned about the… how it could eliminate roles-
Charlie Fremont: Oh.
Justin Osborne: … and eliminate things.
Charlie Fremont: Yeah.
Justin Osborne: I don’t necessarily see that. I think your answer just highlighted that it just changes the roles, right. You still need the IT folks to validate. You still need their expertise to help with that. It’s not going to replace those things. It just might… The roles and responsibilities might shift a little bit because we have this tool now, right.
Charlie Fremont: Yeah. Instead of seeing it as completely replacing people, this fear always comes up with new technology hype cycles. Even with the advent of conveyor belt systems and different automated building processes, yeah, there are fewer manual builders, but there are now people that have to maintain and run the machines.
And I was thinking you could kind of see generative AI as not just straight-up eliminating people, but giving yourself extra arms like Doc Ock from Marvel. It’s increasing your reach or your output, your ability to gain more volume. And you could see that on an institutional level as well, I think.
Justin Osborne: I like that. I like that analogy. Well, and I think that that really does highlight, again, how this can be a game changer and shift things, as long as people understand not just the capabilities but also the limitations, like we talked about, right.
Charlie Fremont: Yes.
Justin Osborne: I think to go back to some of your examples, if I had, for example… I’m just thinking about a real use case, not to keep pointing out the sort of concerning options here. Say I’m in an educational role, and I’m charged with creating education around something. As long as the proper guardrails were up and whatnot, I would love to have a tool like this where I could upload a protocol and say, “Give me the points so I can create a training document or a-”
Charlie Fremont: Oh, yeah. Totally.
Justin Osborne: “… Training program based on this protocol.” Is that out of reach for a program like this or a tool like this?
Charlie Fremont: I don’t think so. For example, I was able to transcribe a large… well, it started out as a large audio file, into a written document, and then take that written document, the summary, and turn it into a PowerPoint in a couple of minutes. So I don’t think that’s out of reach at all. It wasn’t the absolute best PowerPoint, but it was a start in a very short amount of time. So, at least for me personally, and I think for a lot of people, the hardest part’s getting started. So-
Justin Osborne: Yes.
Charlie Fremont: … to get that out of the way and then focus your time on refining it and making it better or validating it, I think, is a much better use of time than sitting around with writer’s block.
Justin Osborne: Absolutely. No, I like that. Well, and I think to sort of jump back to your Epic examples that you’ve used in your own work, you created this program for Washington University, right?
Charlie Fremont: Yeah. Yeah. Correct.
Justin Osborne: But when you first used it for that, you did not just say, “Okay, this is what I’ve used, and now you’re super efficient, and you look great at your job.” You actually shared this. You shared the tool that you created, and now you’ve talked at a conference about it. And I will add on here, too, that this is sort of what I like about our research industry: when somebody does come up with something that works that we think could help somebody else, we’re very willing to share that information, because there’s no reason for people not to take advantage of it.
Charlie Fremont: Yeah, absolutely. Yeah.
Justin Osborne: So talk to me about your mindset and just what’s sort of driving you to get out here and share this kind of information with people about how you’ve been successfully using AI.
Charlie Fremont: Yeah, I think that’s a great question. And it is funny because somebody did ask me yesterday when I presented, “Is this patentable?” And it might be. It’s just, when you think about Epic, and you think about research setups in general, it’s so specific to the institution, their exact setup. I don’t know if my specific program is universal enough to really be pertinent to that many institutions, but the process by which I made it is.
Justin Osborne: Okay. Yeah.
Charlie Fremont: So that’s kind of my thoughts on it, and I don’t want to gatekeep that information. I think, like you said, in the spirit of healthcare, it’s great to be able to give back and try to help the field if I can.
Justin Osborne: Yeah. No, I love that. I think that’s exactly what sort of drives people to move forward in this stuff anyway, right. And this is how innovations happen. So to jump back, I know we’re going back and forth about the compliance risks and also the amazing uses of this tool, but help us understand, I guess, do you think that we’ll get to sort of a proper balance between the amazing capabilities of this tool and also remaining compliant?
Charlie Fremont: I really do. I really, really do. So in the field of computer programming, I learned that hacking can actually refer to hastily putting together programs as a sort of proof of concept, versus what you see in movies. There are also hackers that infiltrate systems, right.
Justin Osborne: Yeah, yeah.
Charlie Fremont: But this kind just means hacking things together. Get it together, make it work as quickly as possible, right.
Justin Osborne: Okay.
Charlie Fremont: But I think that’s really where… From a perspective of making programs, I think that’s really where generative AI is useful as opposed to displacing computer programmers. So that kind of brings me to the balance question.
In the near future, I can really envision ultra-lean teams, where there are subject matter experts on the team that have been trained on how to do this kind of thing with generative AI, and then maybe a few project managers and a few computer programmers on the team to refine the proof-of-concept, hacked-together programs, the ideas.
Justin Osborne: Yeah.
Charlie Fremont: The ideas in action. And I wouldn’t be surprised if small ultra-lean teams like that could outperform larger, more unwieldy teams not using this new technology. I wouldn’t be surprised at all. And I think that’s where the balance would come into play, coupled with compliance and legal oversight.
Justin Osborne: Yes. No, that last caveat, man, that’s what caught everybody’s attention, I think. I think the compliance and legal people were like, “Am I on this lean team?”
Charlie Fremont: Yeah, yeah.
Justin Osborne: No, I…
Charlie Fremont: Maybe put that first. Put them in front of all of that. Yeah.
Justin Osborne: Yes. No, but I think-
Charlie Fremont: Absolutely.
Justin Osborne: … you’re right. I think that that is… that sounds like the direction that we’re heading anyway, to go back to shifting roles and responsibilities. But it does sound like you said, you’ve given a lot of examples here just from any other industry, really, and innovation that things are constantly changing, and this is just one more example of it. There has to be compliance oversight and guardrails around using a tool that’s this powerful and has this many unknowns. But I do… it does sound like the benefits outweigh the potential risks as long as we’re being intentional.
Charlie Fremont: Yeah. Yeah. I mean, I feel like we have to based on just how much of an advance it can be, but then we absolutely have to do it as safely as possible as well.
Justin Osborne: Yeah. Okay. So, Charlie, I wanted to ask you too, you mentioned something at the very beginning, I think, of this discussion about deep work, you called it.
So before we kind of dig into that piece, I wanted to step back, talked a lot about the practical uses and examples of how you can maybe use this tool to be more efficient and ways that it could shift roles and responsibilities and whatnot, also remaining compliant.
But to kind of step outside of the functional aspects of this tool, I want to talk about the efficiencies gained from more of a work-mentality standpoint. So you talk about deep work; can you kind of elaborate on that piece?
Charlie Fremont: Yeah. So there absolutely are studies on this, but some of this, in summary, is my opinion.
Justin Osborne: Okay.
Charlie Fremont: But there’s absolutely studies that indicate that people are generally able to do around four hours of deep work a day, right.
Justin Osborne: Okay. And deep work meaning…
Charlie Fremont: Work that requires a lot of concentration, a lot of intentionality, creative thinking, that kind of thing. And then there’s also shallow work, which is work that you could listen to any kind of music to and not really think much about, that you do on autopilot. If you’re copying and pasting a bunch of things from one consistent place to another, that could be considered shallow work, right.
Justin Osborne: Okay.
Charlie Fremont: There’s also work that falls in between, and the work that falls in between, that sort of breaks up your workflow, could be considered medium work, and that draws on your resources, especially if there’s a lot of it. So I would kind of consider the old way that I was building to be medium work.
I had to switch back and forth between different documents and do a lot of scrolling and a lot of making sure my visual acuity was correct and lining up numbers. So my opinion is that in creating this program, I move that work from being medium work to shallow work for me, and now I have more energy in the day to do more human, more deep work.
Justin Osborne: That’s interesting.
Charlie Fremont: Yeah-
Justin Osborne: That’s really interesting.
Charlie Fremont: … a lot of use. Yeah.
Justin Osborne: Well, and I feel like that’s really important because, again, outside of just listing off examples of the uses of the tool and things that people can do in their certain roles, this seems like a way to describe the benefits of AI in a way that obviously applies to anybody in any role.
Charlie Fremont: Yeah, I agree. And also, I should tack on there: with that amount of medium work, before I made the program, especially with an outage, I would’ve been doing more than 40 hours a week. So, in that regard, it could be a tool that helps to prevent burnout and also can keep people having a better work-life balance. If you’re working more than 40 hours a week, these tools might be able to get you back into a healthier situation.
Justin Osborne: No, that’s really important. And having a healthy work-life balance is, I think, crucial, especially in research. I’m sure it’s true in a lot of other industries, but there’s a lot of burnout in this industry, right. It’s a high-demand, high-intensity job that most of us are doing. So I think finding ways to balance that, and finding tools like this that can kind of help us navigate that, is super important. So I appreciate you going into some of those descriptions.
Charlie Fremont: Yeah. When you talked about burnout, it made me think about when I was a clinical research coordinator.
Justin Osborne: Absolutely.
Charlie Fremont: Just some of the responsibility and workload placed on that role. I have a huge amount of respect for anybody involved in clinical research, period. But yeah, that was a really tough job. And it’s ironic. I think one of the challenges is the high turnover, and one of the reasons for high turnover is almost the turnover itself, because it’s harder to get trained; it’s harder to have enough time to get trained. So we could use AI, especially those walled garden instances, to create trained chatbots that might be able to help new coordinators feel like they have more support on some of the processes and stuff.
Justin Osborne: That’s a great… That’s a great point. That’s a great idea. But I feel like this is interesting too, because if you think about this industry and you just brought up coordinators, it’s a high-stress field.
Charlie Fremont: Yeah.
Justin Osborne: And almost everything that we currently do is, just to use coordinators as the example, a lot of what they do is going to fall into, of course, the medium or deep work because you have to… I mean, it’s…
Charlie Fremont: It absolutely is.
Justin Osborne: Yeah. People’s lives depend on it sometimes, and this is very important work and all that stuff. So, from that aspect of it, the system is not built to necessarily manage those three levels of work that you were talking about.
And now we have this tool being introduced, and like you said, there are studies out there that have shown this impact, moving work down that line a little bit more towards the shallow stuff. That, to me, is a huge game changer in this field that could have a huge effect on research.
Charlie Fremont: So, often we can get pretty siloed, and with the high turnover, people don’t always have a lot of time to think about the process that they’re doing. So you might find yourself doing a lot of redundant data entry in Excel documents that don’t communicate with each other, that aren’t using formulas, that aren’t leveraging efficiencies that are already available, right.
Justin Osborne: Yeah.
Charlie Fremont: So that could be a ton of hours. Some of the things I did were learning INDEX and MATCH in Excel and different things like that. And it was really exciting, because that and data exports and stuff like that might’ve saved me hundreds of hours.
Justin Osborne: Wow.
Charlie Fremont: So that’s kind of where I see ChatGPT could probably suggest those kinds of things. If you said, “Hey, I am doing five hours a day of data entry in 10 different sheets, what could I do? What could help with that?”
Justin Osborne: Yeah.
Charlie Fremont: And it could probably lead you down to making your own Python program-
Justin Osborne: Wow.
Charlie Fremont: … and getting that sorted out. So that would be a real-life example.
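For readers who, like Charlie, find themselves re-keying the same values across sheets, here is a sketch of the Excel INDEX/MATCH-style lookup he mentions, done with pandas: join two tables on a shared key so each value is entered only once. The sheet and column names are illustrative assumptions, and pandas is assumed to be installed.

```python
import pandas as pd

# Two small tables standing in for sheets that don't talk to each other.
visits = pd.DataFrame({
    "subject_id": ["MOCK-001", "MOCK-002", "MOCK-003"],
    "visit":      ["Week 4",   "Week 4",   "Week 8"],
})
charges = pd.DataFrame({
    "visit":       ["Week 4", "Week 8"],
    "charge_code": ["B200",   "C300"],
})

# Roughly the per-row equivalent of
# =INDEX(charge_code_column, MATCH(visit, visit_column, 0)) in Excel.
merged = visits.merge(charges, on="visit", how="left")
print(merged)
```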
Justin Osborne: Oh, that’s fantastic. No, I like that a lot. Again, I feel like there are… there’s so many possibilities to this tool and just the future of what’s going to happen. Well, listen, I really appreciate your time talking through all these specifics and kind of helping us understand a little bit more about the sort of on-the-ground practical uses of AI and some of the risks too. So thank you for talking through this.
Charlie Fremont: You’re welcome. And thank you for having me, Justin. It’s really been a lot of fun getting to talk about this stuff, so I really appreciate it.
Justin Osborne: So we’re back. I’m still sitting here with Charlie, and we just talked about AI for a while, and now I’m going to shift to the Same Team segment. So, Charlie, it’s easy to get bogged down in the weeds of our jobs and specific roles in the research industry, but I do think that most of us stay in this research field because we genuinely believe in doing our part to help move healthcare forward for our friends and family, kind of like you mentioned earlier. So can you tell me a story or an experience from your career that helps connect you with this idea that we’re all in the industry to help people?
Charlie Fremont: Yeah. I kind of have three different instances I was thinking about. One, when we had the whole COVID-19 lockdown, I was in the Office of Clinical Research at UC Health, and you could just see everybody buckle down and prioritize those studies. It was really a great thing to see.
So that’s definitely a time when I saw everybody working together and reprioritizing and putting in more hours and having a faster turnaround and that kind of thing. And also, pretty much, it seems like anytime there’s any kind of research billing challenge that just inherently involves a lot of different disciplines. So I’ll see a lot of people group together and pitch in and troubleshoot and try to figure out those challenges.
And then the third example would be I was pretty concerned when I was offered the chance to do a presentation on this topic. I was worried that maybe there wouldn’t really be much support. There could have been a lot of red tape to get through, but everybody on my team and Epic even got back to me real quickly. And nobody had any hangups about the topic, and they were even encouraging about it. So that was a really cool thing to me.
Justin Osborne: That’s great. That’s great. Well, those are all three fantastic examples, and I feel like it’s just a nice reminder for people to hear that even though we feel like we’re in silos sometimes, none of us really are; we all have teams that we’re working on, and for the greater good, right. We’re all here for the same reason.
Charlie Fremont: Yeah, definitely.
Justin Osborne: Well, Charlie, again, thank you so much for your time. Thanks for coming on the podcast.
Charlie Fremont: You are very welcome. And thank you for having me.
Justin Osborne: Be sure to follow, like, and subscribe to On Research with CITI Program. If you enjoyed this episode, you may be interested in other podcasts in the CITI Program universe, including On Campus and On Tech Ethics. You can listen to all our podcasts on Apple Podcasts, Spotify, and other streaming services. You should also review our content offerings regularly as we continually add new courses, subscriptions, and webinars. Thanks for listening.
How to Listen and Subscribe to the Podcast
You can find On Research with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when episodes are newly released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2112707.rss” into your podcast app.
Recent Episodes
- Season 2 – Episode 8: Politics and Research: Transferable Skills
- Season 2 – Episode 7: Quality Improvement vs. Research: What’s the Difference?
- Season 2 – Episode 6: The Evolution of IRBs: Navigating Ethical Considerations in Research
- Season 2 – Episode 5: Improving Access to Research Studies
Meet the Guest
Charles Fremont, BA – Washington University
Charles Fremont is an Epic EHR Application Analyst and independent consultant with extensive experience in IT, research, and research billing. He’s passionate about driving innovation and improving workflows through process improvement and technology.
Meet the Host
Justin Osborne, Host, On Research Podcast – HRP Consulting Group
Justin is the host of CITI Program’s On Research Podcast. He has over 16 years of experience in the human subject research field. Justin began his career working for a local IRB and then a commercial IRB. After spending time on the industry side doing business development, he transitioned to research operations as the Director of Clinical Research at an Academic Medical Center and later a community hospital.