Season 1 – Episode 33 – Integrating AI into Healthcare Delivery
Discusses the responsible integration of AI into healthcare delivery.
Podcast Chapters
To easily navigate through our podcast, simply click on the ☰ icon on the player. This will take you straight to the chapter timestamps, allowing you to jump to specific segments and enjoy the parts you’re most interested in.
- Introduction and Guest Background (00:00:03) Host introduces the episode, speakers, and provides background on Andra Popa and Dr. Yauheni Solad.
- Dr. Solad’s Career and Focus (00:01:29) Dr. Solad discusses his career trajectory, early telehealth work, and current focus on AI in healthcare.
- Evolution from Telehealth to Generative AI (00:01:52) Explains the shift from telehealth to integrating generative AI, and the new challenges and opportunities it brings.
- Defining Generative AI and Advanced Technologies (00:04:14) Clarifies what generative AI is, how it works, and its difference from traditional models.
- Bias and Model Alignment Challenges (00:05:24) Discusses the risks of bias in generative AI models and the importance of alignment and vigilance.
- Specialized Models and Black Box Concerns (00:07:11) Covers the use of specialized AI models, the need for context, and transparency in black box systems.
- Risks of Public AI Tools in Healthcare (00:09:00) Warns against using public AI tools for sensitive data and stresses enterprise-level compliance.
- Enterprise AI Compliance and Training (00:09:42) Outlines best practices for compliance, data security, staff training, and permissible AI use cases.
- Ensuring Reliability and Human Oversight (00:13:06) Emphasizes the need for human review, transparency, and guidance on AI-generated outputs.
- Monitoring and Compliance in AI Use (00:14:08) Describes technical and domain-specific oversight, validation, and the importance of expert review.
- Fostering Innovation in Healthcare AI (00:15:21) Discusses encouraging innovation, problem-first culture, and accessible channels for staff ideas.
- Collaboration and Makerspaces for AI Solutions (00:17:03) Highlights cross-disciplinary collaboration, internal makerspaces, and support for project development.
- Integrating AI into Clinical Workflows and Ethical Concerns (00:19:29) Explores integration of AI, data support, and ethical issues like bias, transparency, and accountability.
- AI’s Potential and Human Oversight in Clinical Decisions (00:25:27) Describes AI’s potential for administrative tasks, decision support, and the necessity of human oversight.
- Validation, Protocols, and Patient Disclosure (00:27:26) Stresses local validation, clear protocols, and transparent patient disclosures regarding AI use.
- Effective Communication of AI Use to Patients (00:30:40) Advocates for clear, plain-language disclosures about AI’s role in care without overwhelming patients.
- Explanation of Large Language Models (00:31:45) Defines large language models (LLMs) and their capabilities in healthcare applications.
- AI in Prior Authorization and Workflow Automation (00:32:44) Discusses using AI for prior authorizations, workflow automation, and the need for rigorous validation.
- Continuous Monitoring and Real-World Validation (00:35:22) Explains the importance of ongoing testing, monitoring, and real-world validation of AI tools.
- Reporting Methods and Targeted Auditing (00:39:01) Highlights the need for reporting mechanisms, targeted audits, and proactive risk identification.
- Ethical Foundations in Healthcare AI (00:41:12) Discusses sources of ethics, patient-centered care, and the importance of using AI for good.
- Recommended Resources and Further Reading (00:42:29) Provides books, articles, and online resources for learning about AI and ethics in healthcare.
- The Human Element in AI Integration (00:47:36) Emphasizes maintaining human connection, empathy, and trust as AI is integrated into healthcare.
- Conclusion and Closing Remarks (00:49:14) Wraps up the conversation, thanks the guest, and provides information on further learning opportunities.
Episode Transcript
Daniel Smith: Welcome to On Tech Ethics with CITI Program. Today, you're going to hear from my colleague Andra Popa and Dr. Yauheni Solad about the responsible integration of AI tools into healthcare delivery. Andra is the Assistant Director of Healthcare Compliance at CITI Program. Prior to joining CITI Program, Andra led a consulting firm that worked with over 40 healthcare entities to create, assess, audit, and monitor compliance programs. And with that, I'm going to turn it over to Andra.
Andra Popa: Our guest today is Dr. Yauheni Solad, managing partner at Dalos Partners, where he leads healthcare advanced technology strategy and validation. He is a research affiliate at Yale University and a physician board certified in clinical informatics and internal medicine. He formerly led digital health innovation at UC Davis Health and Yale New Haven Health, advancing FHIR interoperability, telemedicine, and responsible AI standards. Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization's attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guest and myself. And on that note, welcome to the podcast, Dr. Solad.
Dr. Yauheni Solad: Thank you for having me. It's a privilege to discuss this particular topic, especially now, when the intersection of advanced technology and patient care requires constant attention to our ethical compass and to ensuring that innovation truly serves humanity. So, I'm looking forward to our conversation.
Andra Popa: Can you tell us more about yourself and what you’re currently focused on?
Dr. Yauheni Solad: So, I'm currently working at the intersection of technology strategy and healthcare delivery. Historically, my early telehealth work was fundamentally about connection and access. We focused on using technology, video conferencing, remote monitoring, to overcome geography, to bring care closer to the patient, and to help patients get specialty care where they otherwise could not. It was all about replicating an in-person interaction as best as possible, just virtually. And when generative AI arrived, we started to realize more and more that it's not just about bringing people together at the right time for a visit; we are not just connecting, we are augmenting and automating our care delivery. AI introduces a layer of intelligence, interpretation, prediction, and task execution into those digital interactions.
So, around that time I started to focus more and more on how we can reliably and sustainably deploy AI in our care delivery, what we need to learn, and what questions we need to ask to ensure that we're not just bringing another technical tool into the system, but actually doing it in a way that helps both clinicians and patients get the best out of these tools. I believe this evolution dramatically expands the potential benefits, from efficiency gains and deeper insights from data to personalized innovation, but it also profoundly deepens some of the challenges and ethical considerations we'll need to discuss. Because we're not just transforming information anymore the way we used to in the EHR; we're actually generating it, interpreting it, and potentially acting on this information via new AI. And this demands a much more sophisticated approach to ensuring fairness, transparency, accountability, and safety, something that we always had in mind but never deployed at this particular scale. So, through my career, my perspective evolved from focusing on the pipeline, interoperability, and telehealth infrastructure to focusing now more on intelligence and AI strategy.
Andra Popa: Could you, just for our listeners who might not be familiar with generative AI or the other advanced technologies you use in your telehealth practice, explain further?
Dr. Yauheni Solad: Yeah. So, generative AI is a new form of deep learning, which is a branch of machine learning that helps us learn from vast amounts of data, predict the next word for the model, it's called the [inaudible 00:04:44], and learn and generate new data from previously learned data. I know it sounds a little bit complicated, but conceptually it's not. Before, we would try to create a very focused data set, select parameters, and ensure that we were learning from that particular slice of data. What generative AI does differently is learn from all of the internet, from structured and unstructured data, from text, from video, from everything, and create a foundational model based on an architecture called a transformer. And GPT is nothing more than a transformer that now helps us predict new things across multiple domains.
So, foundational models are not healthcare specific or law specific by themselves. The models we're dealing with in ChatGPT, or from Meta, for example, are trained to serve a lot of different use cases. What that allowed us to do, for the first time, is have models that do relatively well under uncertainty, because before, if a system had never seen an example, it would likely fail or predict incorrectly. Now there are other things the system can rely on when trying to predict. The challenge is, if you've ever spent time on the internet, you realize how unfortunately biased and unfair the internet is. And a lot of the knowledge spread across the model is actually the knowledge of the whole internet. So, researchers try very hard to ensure we have alignment of the models with expectations, so that we do not project whatever signal may be in the data pulled from the internet into biased outputs, but it's not a given. It's hard work that requires a lot of vigilance and alignment from all of us.
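To make the "predict the next word" idea above concrete, here is a minimal, illustrative Python sketch. It assumes the open-source Hugging Face transformers library and the small public gpt2 model, neither of which is mentioned in the episode; it simply prints the model's most likely next tokens, with their probabilities, for a short prompt.

```python
# Illustrative sketch only: next-token prediction with a small public model.
# Assumes: pip install torch transformers. gpt2 is used purely as an example;
# it is not a healthcare model and its output is not medical advice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient's fasting glucose was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every vocabulary token

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)        # five most likely continuations
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Everything the conversation goes on to describe, chat answers, summaries, draft notes, is built on this same repeated next-token prediction, which is why the training data matters so much.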
Andra Popa: So, the information is not just coming from medical devices such as glucose monitors or cardiac information. It’s also coming from this generative AI which is possibly predicting or diagnosing or offering treatment plans. I’ve seen models where they can categorize their own accuracy.
Dr. Yauheni Solad: So, again, we have a wide variety of models, and some teams build specialized models or take foundational models and then fine-tune them with particular examples to fit more particular domains. There is a lot of variety, and the models are getting smarter every day. But at a higher level, when you're dealing with generative AI, let's imagine it was not trained specifically on glucose data. Then the model is simply taking on the role of an expert, and I would say "expert" in quotes, and trying to predict, step by step, based on your instructions, what it sees.
And that's why it's important to provide the model the right context, like a set of instructions: how to look, what to look for, and what the edge scenarios are. And that's why, especially for the big black-box systems or the systems you may be getting from a vendor, it's so important to actually have traceability and understand what's going on inside the black box. Because sometimes you may be getting an answer, but in our current state, it's possibly even more important to understand how the system arrived at this answer than just to get a right answer.
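As a rough illustration of "providing the model the right context," the sketch below shows one way instructions, edge scenarios, and the data might be packaged before calling a chat-style model. The instruction text, field names, and the commented-out API call are assumptions for illustration, not a method endorsed in the episode.

```python
# Hedged sketch: explicit instructions and edge cases assembled as context
# for a chat-style model. All wording and names here are illustrative assumptions.
SYSTEM_INSTRUCTIONS = """
You summarize home glucose logs for a clinician's review.
- Use only the readings provided; never infer values that are not present.
- Flag readings below 70 mg/dL or above 250 mg/dL as edge cases.
- If the log is empty, ambiguous, or not about glucose, say so and stop.
- List the readings you used, so the output can be traced and checked.
"""

def build_messages(deidentified_log: str) -> list[dict]:
    """Package instructions plus data in the role-based format most chat APIs accept."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": f"Glucose log:\n{deidentified_log}"},
    ]

messages = build_messages("07:30 62 mg/dL\n12:10 145 mg/dL\n18:05 301 mg/dL")
for message in messages:
    print(message["role"], "->", message["content"][:60], "...")

# A real deployment would send `messages` to an enterprise-approved endpoint
# (for example, client.chat.completions.create(model=..., messages=messages))
# and log both the prompt and the response for traceability.
```

The structure, not the specific wording, is the point: the scope, the edge cases, and the traceability requirement are stated explicitly rather than left for the model to guess.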
Andra Popa: Right. And my background is compliance, so one of the first things I would advise is to analyze exactly what types of advanced technology are being used in a healthcare setting and then map the risks posed by each type, to really understand how it's processing the information and identify how it could go wrong. But when you say vendors or black box, you are referring to developed systems, not just the public ChatGPT. Correct?
Dr. Yauheni Solad: Yeah. And just for full transparency, public models or public tools have no place outside of casual personal use. In the enterprise they're risky, and enterprises need to ensure there is zero PHI tolerance for a lot of those tools.
Andra Popa: Yes. One thing that concerns me is that people might not realize that even if they think they've de-identified the information before entering it, it could somehow still be recombined into identifiable information at a later point. And also, correct me if I'm wrong, but it's stored on servers even if you delete the history.
Dr. Yauheni Solad: So, at the point when those tools first arrived, it was relatively challenging for enterprises to quickly give their power users access to them. The tools were new, and people didn't really know how to properly deploy them. That's not the case anymore. Almost all of the major cloud vendors, and even startups, already have a set of tools that allow you to achieve enterprise-level compliance. What that means is the data is stored in an environment where ChatGPT will not be using your information as future training data, so you don't run the risk of some of your answers leaking into the public domain. The second part is to ensure that your data stays encrypted and that you have a proper BAA, a business associate agreement, in place with the vendor, because the fact that you're using a new tool doesn't mean you're exempt from a BAA.
So, pretty much, you need to have zero tolerance for PHI in public tools and an unambiguous, absolute prohibition on entering any identifiable patient information or sensitive institutional data into a public AI platform. ChatGPT, Bard, Gemini, it doesn't really matter. And you have to explain why: the loss of control, the potential HIPAA violations, data being used for model training. At the same time, you need to define permissible use cases. Clearly outline any extremely limited, approved use cases for public AI. You don't have to block it, that may be your enterprise's decision, but define where it can be used and ensure it involves no sensitive data or sensitive information; that may include brainstorming generic ideas, summarizing publicly available documents, or researching studies that are in the open domain. But you need to stress that any time you're doing anything sensitive, you need data anonymization and security training.
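As a very rough illustration of enforcing "zero tolerance for PHI in public tools," the sketch below screens a draft for a few obvious identifiers before anything is sent to an external tool. The regex patterns and the blocking policy are assumptions; this is nowhere near a complete PHI detector and does not replace the training, BAAs, and enterprise controls described here.

```python
# Naive, illustrative guard only: a few regex patterns that catch obvious identifiers
# before text leaves the organization. Real de-identification requires much more;
# the patterns and the block-on-hit policy here are assumptions for illustration.
import re

OBVIOUS_IDENTIFIER_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn_label": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "date_of_birth": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def flag_obvious_identifiers(text: str) -> list[str]:
    """Return labels of obvious identifier patterns found in the text."""
    return [label for label, pattern in OBVIOUS_IDENTIFIER_PATTERNS.items()
            if pattern.search(text)]

draft = "Pt MRN: 0048213, DOB: 4/12/1961, needs prior auth for an insulin pump."
hits = flag_obvious_identifiers(draft)
if hits:
    print("Blocked before sending; possible identifiers found:", ", ".join(hits))
else:
    print("No obvious identifiers found; still subject to institutional policy and review.")
```

A check like this only catches the easy cases; as the conversation notes, even text that looks de-identified can sometimes be re-identified, so policy and training still carry most of the weight.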
And then your enterprise needs to have mandatory role-specific training. So, explain to all of your staff members what constitutes PHI, what the risks of data disclosure are, and the specific institutional policy. Because what's happening is that a lot of the workforce is being exposed to these newer tools for the first time, and even acting in the best interests of the enterprise, sometimes they can do something that's not aligned with policy. And most importantly, because we just talked about what you shouldn't do, what you should do is promote secure alternatives. We are in 2025; you cannot go to your enterprise and say, "We blocked everything. You have no options." You need institutionally vetted, secure internal tools, platforms that strongly align with your policy, operate under appropriate supervision and under the right agreements, including a BAA, and then redirect your team to those, maybe away from ChatGPT.
And then, last but not least, have clear consequences for violations. You're not just talking about it; the same way we capture phishing attacks and the users who still click on those emails, outline what the disciplinary actions will be and what additional education will follow. And that's just the safety part, because there is an additional level around reliability: errors compound, and hallucination is real. You need to give your staff guidance on AI output and create transparent communication about what is human-generated content, a member of your team answering the question, and what is AI generated. Because unreliability and potential bias in output from any AI, private enterprise or public, is real, and we stress the need for human review and critical judgment. Otherwise, you may be creating an additional hole in your Swiss cheese model.
Andra Popa: The policy and procedure to follow for using this new technology should always be in writing. You touched on a lot of the elements of an effective compliance program: education, policy and procedure, monitoring the use. Should someone with a medical background be overseeing the use?
Dr. Yauheni Solad: So, I'll answer in several steps. One, someone definitely should be overseeing that. And the first level of your monitoring and compliance is definitely technical: are we using the right tool? Is the data properly protected? Is it encrypted in transit and at rest? Are we following the right guidelines? And that's frankly just the technical layer, because as we started to touch on in the last question, the output, and the confusion that may be inherently adopted by your system from unsupervised output you didn't catch, can create additional errors, operational, administrative, or clinical. So, that's where domain expertise, medical, legal, or policy, comes into place. Because if we are dealing with AI-generated content, unfortunately right now we have no better way to truly vet it than subject matter expert review. So, you absolutely need to ensure that before you put a lot of those tools into mass adoption, you create a robust plan for testing, validation, and capturing the tails.
Andra Popa: We discussed earlier how there are ways that you can encourage innovation among clinicians, physicians, administrators, but in this space, if someone had an idea for example, could they say, “I think this would make my life much easier. I would like to create a tool.” Should there be a process or an office that’s in charge of things like that?
Dr. Yauheni Solad: Absolutely. Innovation is a mindset, and especially with a new breed of technology coming into the market, what has become critical is not necessarily the technical skill that has led a lot of those innovation processes for decades now, but a clear understanding of the problem you're trying to solve and the steps you need to take to solve it. So, now possibly more than ever, it's important to be the domain expert for the things you're trying to solve. So, what you need to start with in your system is to cultivate a problem-first culture. You encourage your staff to identify and articulate operational or clinical problems clearly, not just "it's not great, please fix it," but "explain to me why." And frame innovation as problem-solving, not just technology deployment. There is no technology that will come and solve all of this. Let's work on this together to solve it and make it happen.
And then create an accessible idea submission channel. You mentioned the makerspace, but frankly it can be a simple, visible pathway for anyone to share their ideas without requiring a fully formed technical solution, so we can actually see the signal of what's going on. A lot of innovation is born from better collaboration, and collaboration not just inside your team but across cross-disciplinary teams, so people understand how to make things happen. So, this cross-disciplinary collaboration is critical. And here you can create spaces, you can create workshops, you can allow project teams and informal mixers, clinical, administrative, technical. But the goal is to ensure people connect, talk freely without judgment or bias, and brainstorm together. And the critical step after that, and it's something you alluded to before, is that in the makerspace you have all the appropriate tools.
In the newer era of generative AI tools, we're talking more about internal or virtual makerspaces that offer user-friendly tools for experimentation; low-code, no-code platforms are fantastic. Platforms that allow non-programmers to build simple applications, automate workflows, and create data dashboards, turning their ideas into prototypes and letting the whole community test and play with them, visualize the data, visualize the outcomes. And that's liberating for a lot of clinicians, because they don't have to wait for weeks or months. It's also liberating for people in administrative roles, because very often a lot of this gets stuck somewhere between the clinical and IT teams. But after that, it's important to ensure these are not just one-off projects, so you offer support and resources to actually help those projects grow and flourish.
You support it from an informatics perspective and from an IT staff perspective, and you know what you're learning from it and what your outcome is, so if you're experimenting with low-fidelity workflows or maybe something done with a no-code [inaudible 00:18:59], you have very clear KPIs and you know where it's going. It's not just run as a one-off and then forgotten somewhere. You actually recognize and celebrate the efforts of the team who participated and allow people to make it more than just a casual technical activity. Allow them to pick it up as a long-term redesign project, something more uniquely aligned with their function.
Andra Popa: I used to be a consultant, and in my auditing work I've seen the great number of errors that can occur if you separate the compliance people, such as medical coders, from the people who design and develop the software. How do you integrate generative and agentic AI into clinical workflows? You can support it through data if it's successful, but what are some ethical concerns that it might pose?
Dr. Yauheni Solad: Great questions. Possibly more than one question in one. I'll start with the concerns first because it's such an important and hot topic right now. I would say bias amplification is possibly at the top. AI trained on biased data, we know, can perpetuate even worse health disparities across different populations. So, we need to approach almost every output not just with a beginner's mind, but with a clear understanding that even though the outputs may currently be aligned, the model, especially a foundation model, was ultimately trained on a very biased data set. Then there's lack of transparency, because it's a black box; we still have some open models, but not all of them are totally open source, and for a lot of them it's pretty difficult to understand exactly how the AI reaches a conclusion. And when you have an error, it erodes trust and makes it hard to identify the source or additional errors.
Accountability gaps: determining when the AI is responsible for contributing to an adverse event versus when it's the developer, the institution, or a human. Because if, for example, the AI gave you an incorrect answer that's actually technically correct, but you just uploaded an outdated policy document, who's responsible? Is it the person who uploaded the incorrect policy and forgot to replace it on your servers, or is it the AI's fault because the AI is supposed to know all the correct answers? Those are real risks and big edge cases in your tails. Of course privacy, we talked about some of this. We are increasing the collection and processing of information, sensitive information, complex interaction information, and we have extremely high risk of new privacy concerns, especially when you start deploying agentic AI systems, where there are two types of risk. One, for the first time you have full logs of your individual agents' activities that may actually map to an individual person's workflows, so you know exactly what people may be doing.
But then the whole idea of agentic AI is to act on tasks autonomously. And when you start to act on information autonomously, you're creating an additional layer of risk. From the clinical perspective, we are starting to realize the positive benefits, and I'm a big proponent of AI, but since we're talking about risk, over-reliance is a big one, along with the overall impact on clinical judgment. Over-reliance can potentially lead to de-skilling or diagnostic overshadowing. If we're getting the correct answer from the AI 99% of the time and we start to depend on it too much, we do not turn on our critical thinking for every output the model generates. And in that one-off case where it may not be correct or may introduce an error, the clinician may miss things, because they've been seeing correct output all of the time.
And there's ensuring that patients are very clear about when and how AI is used and where and how their data is used in training AI. Because we are collecting even more data on our interactions to train even smarter foundational models and to fine-tune models available on the market, and patients have the right to know how this data is used to improve their care, or maybe to determine their access to care or triage them for access to care. And last but not least, I think it's important to be transparent with your staff that some tasks may be done more effectively by computers. We're not talking about full job displacement, but a lot of your staff will have some of these concerns, especially in the areas where AI is showing tremendous promise: outreach, call center management, some of the information extraction from documents. Be proactive around this and offer people help in learning AI as a tool, because ultimately the combination of the human and AI will win, especially in the [inaudible 00:24:34] details.
Ed Butch: I hope you’re enjoying this episode of On Tech Ethics. If you’re interested in important and diverse topics, the latest trends in the ever-changing landscape of universities, join me, Ed Butch for CITI Program’s original podcast On Campus. New episodes released monthly. Now back to your episode.
Andra Popa: In terms of the bias, I think it's a different way of thinking about bias. We might think of bias in a certain way, but I've seen bias even against, for example, rural health clinics; it doesn't detect the nuances of an output or the informatics, as you stated. And your last point was ensuring that patients are clear about how AI is used. What potential do you see for that? And also ethical concerns, if any?
Dr. Yauheni Solad: Tremendous potential. It's not as if our healthcare system is a well-oiled machine that's fair and accessible, makes both clinicians and patients happy, and does it on a very affordable budget. Our system needs a lot of help. So, the potential for agentic AI to automate burdensome administrative tasks like prior authorization, scheduling, and routine patient follow-up is immense and highly attractive for efficiency and reducing burnout. And I think this is where we are going to see the quickest adoption, because ambient technology is already everywhere and helping doctors, frankly, do more than just document faster. And what I like about this is that it's not just helping doctors practice better; it literally helps save marriages, because doctors can go home and spend time with family instead of doing an additional two or three hours of documentation.
But for higher-stakes clinical decision-making, the potential exists but requires cautious and very rigorous oversight. So, I see agentic AI's role primarily as a decision support tool or a clinical co-pilot, not an autonomous decision maker. It can synthesize a vast amount of data, identify small patterns, flag risks, or suggest evidence-based options, but for clinician, administrator, or legal review. So, from that perspective, to get the maximum value out of all of this, we need to ensure that for everything high risk, we have a human in the loop. I work a lot with clinical processes, so clinicians must always be the final arbiter for significant clinical decision support. We have no other model right now. The AI can assist; humans retain the ultimate responsibility. And it's a separate question how scalable a lot of this is, and I wrote about it on LinkedIn and it's a public discussion. But setting aside the additional burden that all this reviewing can put on clinicians, for now, humans retain the ultimate responsibility.
And then we need to ensure that we have exceptional validation standards. Validation must prove not just accuracy but actually safety and reliability, specifically for these use cases in a high-stakes context. So, the fact that you proved something in one healthcare system somewhere doesn't really mean it's a great tool. It means it may be a great tool, or it may not. It certainly has more potential than a tool that's never been validated anywhere, but it's not enough. You need to ensure it's been locally validated on your local cohorts and your local patients. And then you need enough information to explain and trace, because the decision by itself may be correct while the logic is wrong. In that case, you may see multiple correct answers and then an extremely incorrect one, and you may not be willing to take that risk.
So, clinicians need insight into why the AI is making recommendations, how the AI arrived at them, and how they can evaluate them. And, as you mentioned before, a lot of this requires clear protocols and guidance. Define precisely when and how these AI tools should be used, where they're allowed, where they're not allowed, and what triggers additional procedures for auditing or maybe overriding AI suggestions. And how do you continuously monitor and audit your system for this kind of utilization? Do you even know what your users are doing with your system? How closely do you track performance? What's your process to identify errors or drift of the model? What is your mechanism for rapid escalation if something is going on?
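As one hedged example of the drift monitoring mentioned above, the sketch below computes a population stability index (PSI) comparing a model's recent output scores with the scores seen during local validation. PSI is a common drift signal, but the synthetic data, bin count, and alert threshold here are illustrative assumptions, not a method prescribed in the episode.

```python
# Hedged sketch: population stability index (PSI) as one simple drift signal,
# comparing recent production scores against a local-validation baseline.
# The synthetic data, bin count, and alert threshold are illustrative assumptions.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((cur% - base%) * ln(cur% / base%)) over quantile bins of the baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # cover out-of-range current scores
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)         # avoid log(0) and division by zero
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - base_pct) * np.log(cur_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)          # stand-in for local validation scores
current_scores = rng.beta(2, 3, size=5000)           # stand-in for this month's production scores

psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.3f}")
if psi > 0.2:                                        # a commonly cited rule-of-thumb threshold
    print("Large shift detected: escalate for expert review before relying on the tool.")
```

A drift alert like this answers only the technical question; the escalation path to domain experts is what turns it into the kind of oversight described here.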
And the last thing, continuing our discussion from the previous question, is explicit disclosure. I think we need to be careful not to slide into cookie-banner-level disclosures, but when we are using advanced technology to make a decision, or a decision is mostly AI-made with human oversight, patients should generally be informed where AI is used or plays a significant role in their diagnostic, treatment, operational, and other environments. So, the threshold for safety, reliability, and ethical deployment is much higher for clinical decision making than for administrative automation alone, and we must proceed with caution, prioritizing patient safety above all.
Andra Popa: Where would it be stated that AI was used to make a diagnosis or a treatment plan, for example? Would you say at the admin level, where they're taking the call to make the appointment, or afterward in the notes?
Dr. Yauheni Solad: We need to ensure it's clear and transparent without additional burden, which means a lot of those approvals and disclosures, especially when you are in pain and want access to care fast and now have to listen to three or four minutes of a disclosure, may not be a great idea. So, we need an appropriate way to inform people that something is happening here, in the context of their call, with a clear ability to go deeper. I'm not a big fan of very wordy, extremely legalistic disclosures that may actually confuse people and harm overall transparency and clarity, because people will just not read them. Instead, use plain language, and thankfully LLMs are very good at interpreting complex concepts in simple language, and ensure that people understand at what stage AI plays what role and what kind of additional oversight is being provided by humans.
Andra Popa: Just for our listeners who don't know, could you explain what an LLM means in this context?
Dr. Yauheni Solad: Yes. A large language model. So, earlier we talked about the transformer model that allows you to learn from a vast amount of information and then generate new information and data based on previous patterns. A large language model is one type of transformer model. In this case, you parse the internet and learn at the language level. So, large language models allow you to generate text, and ChatGPT, especially the earlier versions of ChatGPT, is a great example of that; it was trained on text and worked with text. Now ChatGPT, especially the newer versions, is multimodal, which means it can look at your picture and generate text, or look at your text and generate pictures. And that's another new type of superpower we're getting from AI.
Andra Popa: In terms of prior authorizations, it seems impossible for a physician or even a team of physicians to handle so many prior authorization reviews. AI or another type of advanced technology could be used to perhaps create work queues, for example. Is that the model you were suggesting would work, rather than AI just automating every authorization?
Dr. Yauheni Solad: You need a very clear workflow for validation. I think everything is possible, and as an internal medicine clinician, I love the idea and the promise of AI helping with prior authorization because, frankly, prior authorization is an administrative concept. It is not a concept that's truly embedded in the way I make my decisions or practice. When I prescribe something, when I decide that a patient is eligible for it, I follow clinical guidelines and protocols, and most of the time that should be sufficient. So, an additional administrative construct on top of that is just a way to communicate my clinical decision making to the insurer to ensure we are aligned. It was created for reasonable reasons from the insurer's perspective, but it has become a significant administrative burden.
But where I'm going with all of this is that, ultimately, the answer to almost every question an insurer would ask is already inside the medical chart; it's already inside my clinical decision making. So, asking me additional things is only appropriate when something is not properly documented or expanded on, when we have a smaller data set or I see a patient for a very short visit. What this also means is that there is a tremendous amount of data search and extraction from the current EHR that allows agentic systems to surface the right information in the right format. Because, let's face it, a lot of prior authorization is driven by the insurer's desire to get the same information in their own format: fill this form in this way. AI is great at that. You don't really need medical training, and I'm not talking about clinician training, even administrative training, to do any of that. It's copy/paste and extraction of information. AI is perfect for it.
But you need to ensure that validation in healthcare follows a rigorous, continuous process, mirroring the standards we apply to new drugs and medical devices, so we do not introduce errors or hallucinations that may end up in the patient chart or drive decisions. So, we need to define a clear purpose and metrics: be specific about the clinical problem the AI is addressing and the exact outcomes we're trying to improve. In your case of clinical documentation, how much can it truly extract? What's the number of concepts it's actually getting from [inaudible 00:35:57] texts or from something that's more structured data and easy referencing? What are your success metrics around accuracy? What's the overall clinical utility? How much time is your physician spending troubleshooting the system when it fails? Is it faster for your clinicians right now to just follow the flow instead of spending hours on safety events, reviews, reductions, and the workflow impact? What's your overall testing environment on diverse, representative data sets?
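As a hedged sketch of the success-metrics question above, the code below compares AI-extracted prior-authorization fields against clinician-labeled gold answers on a small validation set and reports per-field accuracy. The field names and records are invented for illustration; they are not from the episode.

```python
# Hedged sketch: per-field accuracy of AI-extracted prior-authorization data
# against clinician-labeled gold answers. All field names and records are invented.
from collections import defaultdict

gold = [
    {"diagnosis_code": "E11.9", "medication": "insulin glargine", "tried_alternative": "yes"},
    {"diagnosis_code": "M54.5", "medication": "duloxetine", "tried_alternative": "no"},
]
extracted = [
    {"diagnosis_code": "E11.9", "medication": "insulin glargine", "tried_alternative": "yes"},
    {"diagnosis_code": "M54.5", "medication": "duloxetine", "tried_alternative": "yes"},
]

correct, total = defaultdict(int), defaultdict(int)
for gold_row, ai_row in zip(gold, extracted):
    for field, gold_value in gold_row.items():
        total[field] += 1
        correct[field] += int(ai_row.get(field) == gold_value)

for field in total:
    print(f"{field}: {correct[field]}/{total[field]} correct "
          f"({100 * correct[field] / total[field]:.0f}%)")
# Fields falling below a pre-agreed accuracy threshold would trigger full human review
# of every extraction rather than spot checks.
```

The same pattern extends to time metrics, such as how long clinicians spend correcting the tool, once those are logged alongside the extractions.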
What's going to happen if all of a sudden your AI that's surfacing information for clinical decisions or pre-approvals has to deal with a completely new data set, a rural data set it has never seen before? What are your standards for accuracy, reliability, and testing across that? And how do you deploy it in real-world settings? So how do you, not just say it's bad, we cannot validate it, we're not sure how to do it, but if you get enough evidence that it's a safe tool that may be meaningfully deployed on your patient cohorts, how do you deploy and validate it in real-world settings with proper oversight? Almost like a mini internal clinical trial, to evaluate it in the actual clinical workflow. Does it truly improve outcomes, clinical outcomes, administrative outcomes? Does it truly enhance clinical decision making and help providers be more effective, or is it just generating a loading screen, something that's perceived as automation but in reality doesn't really provide any additional benefit?
And then, last but not least, ensure that you always know the level of usability, how you properly deploy it in the clinical workflow, and what you can do to proactively assess, escalate, and mitigate performance issues. So, you deploy it in the right context, so the physician does not need to retrain or take on the additional new risk of not clicking or of handling this new tool directly, which is a risk in itself. And then, if things don't go the way you expect, you not only have a way to find that out, you can actually proactively mitigate and monitor. So you are always focusing on safety, reliability, and overall value.
Andra Popa: As you stated, it is like a mini clinical trial. One of the elements of a clinical trial would be to present your methods. So, going back to what you said earlier, you have to understand exactly what you're using in order to identify the risks. Correct?
Dr. Yauheni Solad: Correct. If you cannot define what you're trying to achieve, if you do not know your data, if you do not understand the tool, you probably shouldn't do this project.
Andra Popa: It seems like one of the elements of the policy and procedures should be a reporting method.
Dr. Yauheni Solad: Anything that allows you to catch it in time and mitigate it, or capture it if it's a lower-risk event and do a root cause analysis afterwards. But even before that, ensure that you're actively testing for the tails, actively testing for disparity across different demographics, race, sex, gender, socioeconomic status, and that you create as many tail scenarios as you can come up with; we're getting more and more reference and synthetic data sets that allow us to do that for many tasks. But unfortunately, it's not uniform. Like a lot of the things we're doing, especially on the clinical side, on top of the predictable risks you can test for, you always have task-specific risks that you need to be proactively identifying, testing for, and monitoring afterwards.
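As one hedged illustration of "testing for disparity across different demographics," the sketch below compares a tool's error rate across groups in a small, made-up audit sample. The column names, data, and disparity threshold are assumptions for illustration only.

```python
# Hedged sketch: comparing a tool's error rate across demographic groups
# in an audit sample. The data, columns, and disparity threshold are invented.
import pandas as pd

audit = pd.DataFrame({
    "group":   ["urban", "urban", "urban", "urban", "rural", "rural", "rural", "rural"],
    "correct": [1,        1,       1,       0,       1,       0,       0,       1      ],
})

by_group = audit.groupby("group")["correct"].agg(["mean", "count"])
by_group["error_rate"] = 1 - by_group["mean"]
print(by_group[["count", "error_rate"]])

gap = by_group["error_rate"].max() - by_group["error_rate"].min()
if gap > 0.10:   # illustrative threshold; a real one would come from governance policy
    print("Disparity above threshold: investigate before wider deployment.")
```

In practice the audit sample would need to be large enough per group for the comparison to mean anything; the point is simply that subgroup performance is computed and reviewed rather than assumed.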
Andra Popa: Things used to be "take a random sample," but now it seems very targeted. Also in my work in compliance, it's very much targeted audits and targeted monitoring, because you can identify the risk areas so much more quickly with advanced technology.
Dr. Yauheni Solad: Yes. And to a certain degree, this is also scary, because don't forget, we used to review 5%, 10%, maybe 15 to 20% of our assets. That's not the case anymore. Technology, if you set it up in the right way, allows you to monitor for specific things across all of your generated assets, and I don't just mean AI-generated, I mean generated by humans. So, your auditing process becomes much broader and more involved, which means you'll discover things that you didn't necessarily want to see. And that's okay. I think a certain degree of bravery needs to come from the organization, because on the first pass of using some of those discovery strategies, they may find things that are not great.
Andra Popa: I always try to ask this of people whenever they bring up ethics, what are the sources of ethics that you look to?
Dr. Yauheni Solad: That's where medical training is very helpful, because from an early age you are always learning to act and think for the benefit of the patient and what may be best for the patient, trying not just to take the role of the patient but to have a true moral compass of helping patients, doing no harm, and more. And AI is no different. It may be on a different scale right now, but it's certainly something that allows you to exert your expertise across multiple geographies, multiple domains, and multiple patients at once. So, we need to ensure we are acting on behalf of patients, especially for administrative use cases, because it's very easy to automate already broken processes that may decrease access to care, actually increase costs, and make it even more cumbersome for patients to truly understand what's going on and to get the right access to the right care at a good price. So, we need to ensure that we're using those tools for good.
Andra Popa: We discussed a wide range of topics. What books, articles, or online resources would you recommend? You did mention a LinkedIn article that I will read following this conversation.
Dr. Yauheni Solad: There are so many. A selfish plug: I run the Realign Healthcare podcast, so please stop by www.realignhealthcare.com. We talk with industry leaders, healthcare administrators, clinicians, and patient advocates about the ways we can leverage the AI revolution to properly align incentives with the needs of patients, clinicians, and society in general. But it also depends on your academic preferences, because if you're following the more academic route, I have to say pretty much every professional journal now has a great way of talking about AI and is starting to explore a lot of this. I particularly follow JAMIA and the American Medical Informatics Association; I see a lot of the early signal in those publications that shapes the way we think about it. But certainly publications from The Lancet Digital Health and Nature's digital health journals, as well as the medical associations, are starting to talk more about it.
It's also very helpful to go to the books from industry leaders who have been doing this for a long time. For example, Eric Topol's Deep Medicine, which sits more on the side of prediction than a strictly academic work, so it's a pretty easy read, but at least you see what kinds of future AI benefits may be coming and what issues in AI ethics and medicine need to be addressed and raised. A lot of the major organizations produce absolutely amazing reports. If you go and read the recent plans from the Office of the National Coordinator, the reports from the WHO, the National Academy of Medicine, and frankly others that are somewhat less technical, it's fascinating reading. You don't have to read everything; you can read the overviews and summaries, and that helps you understand the overall risk profile and where and how things stand right now.
And then, frankly, we are at the point where it's no longer a question of if, it's how fast AI will be coming. You need to ensure you have a good, solid, basic common understanding of how these tools work, because that's your vaccination, your immunization, against misinformation and, frankly, lies that may be coming from vendors or social media. When you understand how the platform works under the hood, you are less likely to expect it to be, I don't know, sentient, for example, or to produce something out of a magic box. You realize it's very advanced, but still statistics. It's predictable, and it's based on the data you feed in and the human feedback that shaped the way the system produces and presents information to you. And that means every lens applied along the way can introduce errors, biases, and you name it.
We often joke that LLMs may be the best anthropological tools, because they're not only a snapshot of the data they were trained on; they're actually a snapshot of the cultural norms that the people doing the alignment instilled in the model, shaping the way it can talk about issues and surface information. So, it's important to follow and understand the basics, and kudos to almost all the leading publications, from The Wall Street Journal and The New Yorker to more casual outlets like Wired; almost all of them have already run basic articles on how LLMs work, how you can think about them, what the risks are, and so on. So, at a basic level, you should be able to explain to your five-year-old what an LLM is and how it works.
I think it's important, no matter what the source is, to focus on resources that offer a balanced view, acknowledging both promises and perils, and to prioritize peer-reviewed and expert-led sources. There's a lot of fear of missing out, FOMO, and a lot of venture-backed publications that introduce unrealistic expectations. Maybe that's fine for less risky areas where you're just generating advertising or doing something in the gaming industry. That's not our industry. We cannot afford to fail fast and use it as an excuse for not providing the right care. So, we need to be diligent and vigilant.
Andra Popa: Is there anything I haven’t asked you today that you think is important to mention when thinking about the future of advanced technology in healthcare?
Dr. Yauheni Solad: One thing you didn't ask today? We talked about a lot of things, and a lot of them were clinical and technical. I think one critical aspect we didn't touch on is the human element. As we integrate sophisticated AI, there is always a risk of technology becoming a barrier rather than a bridge between the clinician and the patient. We need to ensure that AI is implemented in a way that enhances, not detracts from, the human connection, empathy, and trust that are at the core of healing as a clinical profession. AI should free clinicians from administrative burden to spend more quality time with patients. And that part is important, because very often we want to use this time for the physician to see more patients. We need to ensure that AI provides insight that facilitates deeper conversation and more personalized care.
The goal isn't just efficiency or accuracy; it's ultimately about improving the human experience of care on both sides: for physicians, who are now tremendously burned out and need help staying in practice and staying motivated, and for patients. We need to actively design workflows and choose technology that supports communication, empathy, and partnership between the care team and the patient. And if we lose sight of that human-centered goal, even the most advanced AI will fall short of its true potential in healthcare.
Andra Popa: And that is a wonderful place to leave our conversation for today. Thank you again, Dr. Solad.
Dr. Yauheni Solad: Thank you for having me. It was a pleasure.
Daniel Smith: If you enjoyed today’s conversation, I encourage you to check out CITI Program’s other podcasts, courses and webinars. As technology evolves, so does the need for professionals who understand the ethical responsibilities of its development and use. CITI Program offers ethics focused, self-paced courses on AI and other emerging technologies, cybersecurity, data management, and more. These courses will help you enhance your skills, deepen your expertise, and lead with integrity. If you’re not currently affiliated with a subscribing organization, you can sign up as an independent learner. Check out the link in this episode’s description to learn more. And with that, I look forward to bringing you all more conversations on all things tech ethics.
How to Listen and Subscribe to the Podcast
You can find On Tech Ethics with CITI Program available from several of the most popular podcast services. Subscribe on your favorite platform to receive updates when episodes are newly released. You can also subscribe to this podcast by pasting "https://feeds.buzzsprout.com/2120643.rss" into your podcast app.
Recent Episodes
- Season 1 – Episode 32: Modernizing Clinical Trials with ICH E6(R3)
- Season 1 – Episode 31: Fostering AI Literacy
- Season 1 – Episode 30: Importance of Data Privacy Compliance and ESG Reporting
- Season 1 – Episode 29: Citizen or Participatory Science Ethics
Meet the Guest
Yauheni Owen Solad, MD, MHS, MBA, CDH-E, CHCIO – Yale University; Dalos Partners
Dr. Yauheni Solad is Managing Partner at Dalos Partners, leading healthcare AI strategy and validation. A research affiliate at Yale University and physician board-certified in Clinical Informatics, he formerly led digital health innovation at UC Davis Health and Yale, advancing FHIR interoperability, telemedicine and responsible AI standards.
Meet the Host
Daniel Smith, Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program
As Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.
Meet the Guest Co-Host
Andra Popa, JD, LLM, Assistant Director, Healthcare Compliance – CITI Program
Andra M. Popa is the Assistant Director, Healthcare Compliance at CITI Program. She focuses on collaborating with learning professionals to develop healthcare compliance content. Previously, Andra was the owner of a consulting firm that worked with over 40 healthcare entities to create, assess, audit, and monitor compliance programs, as well as to create educational programs. A graduate of Boston College with degrees in English and economics, she also has JD and LLM (healthcare law) degrees from Loyola University Chicago School of Law. She has published over 100 articles, written book chapters, and conducted workshops in design and compliance.