
A Closer Look at AI and the Compliance Profession

The interview portion of this article was originally published in the Journal of Health Care Compliance (JHCC).

Written by: Andra M. Popa – Assistant Director, Healthcare Content at CITI Program.

Introduction

We are pleased to share a featured interview from the Journal of Health Care Compliance (JHCC), in which Editor-in-Chief Roy Snell sat down with our own Andra Popa, CITI Program’s Assistant Director of Healthcare Content, to discuss the intersection of artificial intelligence, advanced technology, and healthcare compliance.

In this conversation, Andra explores how emerging technologies are reshaping the compliance landscape, the potential benefits for organizations, and the critical considerations for maintaining ethical and regulatory standards.

The interview highlights not only the challenges posed by innovation but also the opportunities for compliance professionals to lead the way in ensuring responsible adoption of AI in healthcare.

A Closer Look at AI and the Compliance Profession

Snell: What was your first exposure to AI or other advanced technology? Did it meet the hype? What tool did you use and how?

Popa: Before I started my current role at CITI Program developing healthcare compliance education, my first exposure to advanced technology came from using my auditing background and my specialty in Medicare to help build an advanced technology tool. I then used that tool within an audit to find specific diagnosis codes in a dataset. The tool was not integrated into the electronic medical record.

Snell: Will all legal and compliance professionals have to adapt to this new technology, or will it only be used by some in the compliance department?

Popa: Everyone will need to adapt to AI and other advanced technologies. In a short time, tools have been developed that integrate AI and other advanced technologies into everyday workflows, from risk assessments to regulatory monitoring. Legal professionals must understand advanced technology’s implications for contracts, liability, and data governance, while compliance teams should use the programs for real-time auditing and fraud detection.

Snell: Do you think AI will create a new category of audit professionals?

Popa: All audit and monitoring professionals will need to incorporate advanced technology into their work, as a representative sample is no longer sufficient. On June 30, the Justice Department announced a takedown, which utilized data analysis in some cases, in which criminal charges were brought against 324 defendants involving $14.6 billion in “intended losses,” the largest healthcare fraud allegations in history.[1] Within auditing and monitoring, there may be professionals with different specialties. Some could focus on validating AI outputs, ensuring model transparency, and conducting manual audits of the audits where advanced technology tools are involved. Others, with a technical or computer science background as well as a sensitivity to ethics issues, could blend traditional auditing expertise with skills in data science, machine learning, ethics, and algorithmic interpretation.

Snell: Are there any podcasts or social media sites that are talking about AI from a compliance or ethics perspective?

Popa: I am a frequent guest co-host on the podcast “On Tech Ethics” with CITI Program. For example, I interviewed a doctor on the podcast who was recently appointed as the Vice President of Clinical Artificial Intelligence at a large hospital system. We discussed advanced technology integration in healthcare, identifying and managing the bias that can arise when the technology is not trained on a representative data set, data security related to public advanced technology searches, and ethics. He discussed how the Hippocratic Oath informs his ethics when considering advanced technologies: the idea of “do no harm” applied to a broader audience, rather than just patients.

Snell: What are some risks that AI and other advanced technologies have created? Who should help check the risk and then try to minimize the risk?

Popa: Examples include biased algorithms that, if not trained on a representative data set, may misidentify fraud. There may also be privacy violations from data leaks in AI and other advanced technology systems, and erroneous recommendations due to flawed data inputs. A multidisciplinary team is essential to check and minimize these risks:

  • compliance officers for operationalizing regulations,
  • data scientists for model validation, and
  • legal experts for liability reviews and regulatory analysis.

Each of these professions should have some sensitivity to, and training in, ethical concerns and spotting ethics issues. One person should be specifically in charge of AI and advanced technology.

Snell: What jobs will be created and lost in broader healthcare, like retail pharmacies, pharmaceutical manufacturing, etc., because of AI and advanced technologies?

Popa: In retail and general pharmacies, AI and advanced technologies could greatly reduce workloads by automating dispensing, inventory management, and adherence monitoring, potentially increasing compliance and reducing the time it takes to fill a prescription. Pharmaceutical manufacturing has been automated and supervised by personnel for some time; repetitive tasks in quality control and packaging might be diminished. However, new jobs may emerge, such as AI specialists for drug discovery acceleration, data analysts, and developers of compliance tools. Overall, opportunities in AI and advanced technology oversight, robotics maintenance, and advanced analytics will likely grow.

Snell: What are some of the opportunities that will come from the implementation of AI in auditing? How will the audit department become more effective in both looking for more problems at one time and looking at one problem more deeply? Please be as specific as possible.

Popa: AI opens doors to real-time fraud detection and automated compliance checks. Departments can scan vast datasets simultaneously to spot multiple issues, such as billing anomalies across thousands of claims, using machine learning to flag patterns humans might miss, like subtle upcoding trends. AI and advanced technology also enable predictive modeling to forecast fraud risks in specific procedures or areas. The volume of data that AI and advanced technology can analyze is vast, while also adding precision to audits.
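As a rough illustration of this kind of population-wide screening, the sketch below uses an off-the-shelf isolation forest to score every claim in an extract rather than a sample. The file name, column names, and contamination rate are illustrative assumptions, not a real payer schema or a production setting.

```python
# Minimal sketch: flagging billing anomalies across a full claims population.
import pandas as pd
from sklearn.ensemble import IsolationForest

claims = pd.read_csv("claims_extract.csv")  # hypothetical audit extract

# Numeric features an auditor might screen: billed amount, units, length of stay.
features = claims[["billed_amount", "units", "length_of_stay"]]

# The isolation forest scores every claim, not just a sample.
model = IsolationForest(contamination=0.01, random_state=0)
claims["flag"] = model.fit_predict(features)  # -1 marks an outlier

outliers = claims[claims["flag"] == -1]
print(f"{len(outliers)} of {len(claims)} claims flagged for human review")
```

Anything the model flags would then feed a human review queue, consistent with the hybrid oversight discussed later in this interview.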

Snell: What education should the Compliance and Ethics Officer provide to the board with regard to the future of the use of AI in compliance? Please discuss what they should know about the government’s use of AI to look for fraud.

Popa: Compliance and Ethics Officers should educate the board on AI’s role in enhancing compliance through automation and risk prediction, while stressing ethical and secure use. This includes educating the board that the government is utilizing data analytics for audits, such as detecting improper payments related to Medicare; healthcare entities should adopt similar tools preemptively to align with regulatory scrutiny. Further, data security and third-party contractor risk should be reviewed with the board, as should the risk of workers entering even de-identified patient records into public AI and advanced technology tools.

Snell: Please talk a little about human oversight to make sure innocent people aren’t harmed, and about how to make sure the AI audits are done respectfully.

Popa: Human oversight is a crucial safeguard. It involves auditors manually reviewing a sample of the items that AI or advanced technology flagged in an audit, and reviewing AI and advanced technology outputs for context and fairness, to prevent harm to innocent parties. A human ensures decisions incorporate empathy and nuance in applying the regulations. For respectful audits, implement transparent processes: notify affected parties of AI or advanced technology involvement, allow discussions with the person being accused so that they may explain their perspective, and use explainable advanced technology models to demystify findings.

Snell: Can we use AI in the future to automate the immediate correction of a problem? Can AI help calculate a refund and automate a disclosure? What would lawyers think about this concept?

Popa: AI and advanced technology could automate the correction of errors that could otherwise become fraud by instantly flagging them in a dataset. Yet for a disclosure, it is important to still work with inside or outside counsel directing the audit, and a trained person is needed to calculate the disclosure. Personal relationships are still important. Due to nuances that lawyers may be aware of in the opinions of state and federal regulators, it is essential to let lawyers who have built relationships with regulators and demonstrated their ethics and trustworthiness craft the disclosure process, along with a team of auditors. Lawyers can also direct an audit under privilege and confidentiality.

Snell: I used to call the charge master the automated fraud machine. If you put in a wrong billing code, the charge master will bill it incorrectly until you discover the problem. Won’t we have similar risks with AI?

Popa: AI or advanced technology could amplify risks if trained on flawed data, perpetuating errors like incorrect codes across audits until detected. Mitigation involves rigorous data validation, regular model audits, and hybrid systems where AI suggests but humans confirm.

Snell: What AI and advanced technology information must be taught to all healthcare employees from a compliance and ethics perspective?

Popa: Employees need basics on AI and advanced technology ethics: recognizing bias, protecting data privacy under HIPAA, and understanding transparency in AI decisions. Compliance training should cover monitoring and reporting AI anomalies, the importance of human oversight, and avoiding over-reliance. Ethically, emphasize fairness, consent in AI use, and accountability to prevent harm, fostering a culture where AI enhances, not replaces, ethical judgment.

Snell: We have to spend $50k on the AI and advanced technology software before the government does.

Popa: The healthcare entity can certainly develop its own AI and advanced technology. It is important to proactively try to have some AI and advanced technology solutions to position the entity ahead of government advancements in fraud detection, allowing internal audits to mirror regulatory capabilities and reduce exposure. It is also important to have redundant systems of software. For example, a large health system may want three or four vendors whose programs perform the same tasks, but in different parts of the entity.

Snell: Does new AI software we purchase have potential risks? How do you check it? You mentioned redundancy as a good check and balance. What do you mean by that?

Popa: Redundancy is important: if one vendor’s product is suddenly down or suffers a cyber-attack, the healthcare entity has mitigated the risk by having systems that perform a similar function in a different area. These systems can also be redeployed to cover the areas that are down, as they are already connected.
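As a loose illustration of that failover idea, here is a minimal sketch in which the same audit check is routed to the first healthy vendor system. The `VendorUnavailable` exception and the vendor client objects are hypothetical placeholders, not any particular product’s API.

```python
# Minimal failover sketch: try redundant vendor systems in order.
class VendorUnavailable(Exception):
    """Raised by a vendor client when its service is down or compromised."""

def run_audit_check(claim, vendors):
    """Route one audit check to the first healthy vendor system."""
    for vendor in vendors:
        try:
            return vendor.check(claim)   # each vendor performs the same task
        except VendorUnavailable:
            continue                     # outage or cyber-attack; fail over
    raise RuntimeError("all redundant vendor systems are unavailable")
```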

Snell: What privacy risks do we have with AI and advanced technology?

Popa: Privacy risks include data breaches from unauthorized access, triangulation of anonymized data to re-identify patients, and misuse of data in AI and advanced technology training sets. In healthcare, there is a risk of HIPAA and HITECH violations or breaches. Mitigations include robust encryption, auditing and monitoring, and regular privacy and data security assessments. Misuse of datasets in training AI refers to the improper handling, selection, or application of data used to teach artificial intelligence models. This can include:

Using unconsented data: Incorporating personal or sensitive information, such as patient records, without explicit consent from individuals. Please note that a recent HHS Office for Civil Rights FAQ update should be considered; it notes that PHI may be disclosed for certain value-based care arrangements without patient authorization.[2]

Using proprietary information: Incorporating information from research protocols for clinical trials when that use has not been approved by the pharmaceutical company or principal investigator.

Biased or unrepresentative data: Relying on datasets that are skewed, incomplete, or unrepresentative of the population, leading to unfair or inaccurate AI and advanced technology outputs.

Inadequate data security: Failing to encrypt or protect data during storage or processing, making it vulnerable to breaches.

Improper data sharing: Sharing training data with third parties (vendors or developers) without proper agreements or anonymization, risking exposure.

Overuse or misapplication: Using data beyond its intended purpose, such as training an AI for unrelated tasks, which can amplify risks or errors.

In AI or advanced technology development, especially for healthcare, datasets often include sensitive information like medical histories or billing records. Misuse can occur unintentionally, such as through poor data governance, or deliberately, such as by exploiting data for profit, leading to ethical and legal issues. National security concerns may also be raised if patient data is exploited. The Health Insurance Portability and Accountability Act (HIPAA) sets strict standards for protecting Protected Health Information (PHI) in the U.S. Misuse of datasets in AI and advanced technology training can violate HIPAA in several ways:

Unauthorized Disclosure:

HIPAA’s Privacy Rule requires PHI to be used or disclosed only with patient authorization or for specific permitted purposes, such as treatment, payment, and healthcare operations.

    • Training AI with PHI without consent, such as using patient records from a hospital database without approval, breaches this rule.
    • For example, if an AI developer uses de-identified data that is later re-identified due to poor anonymization techniques, it could expose PHI, violating HIPAA (a simple re-identification risk check is sketched below).
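To make the re-identification point concrete, here is a minimal k-anonymity-style check, assuming illustrative file and column names: any combination of quasi-identifiers that maps to a single record is a candidate for triangulation against outside data such as public records.

```python
# Minimal sketch: checking a "de-identified" extract for re-identification risk.
import pandas as pd

data = pd.read_csv("deidentified_extract.csv")  # hypothetical training extract

# Quasi-identifiers: fields that are not direct identifiers but can be
# combined to single out a person.
quasi_identifiers = ["zip_code", "birth_year", "sex"]
group_sizes = data.groupby(quasi_identifiers).size()

# A group of size 1 is a unique combination that outside data may re-identify.
unique_rows = group_sizes[group_sizes == 1]
print(f"{len(unique_rows)} quasi-identifier combinations map to a single record")
```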

Lack of Safeguards:

The Security Rule mandates technical, physical, and administrative safeguards to protect PHI. Misusing datasets by failing to encrypt them during AI training or storing them insecurely, such as on an unprotected server, can lead to unauthorized access, constituting a HIPAA violation. For example, a data breach during model training due to inadequate security measures could trigger penalties.
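As one hedged illustration of an encryption safeguard, the sketch below encrypts a training extract at rest using the widely used `cryptography` package. Key management is deliberately oversimplified here; a real deployment would use a managed key service, not a key generated in memory, and the file names are assumptions.

```python
# Minimal sketch: encrypting a training extract at rest before AI model work.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store in a key management service
fernet = Fernet(key)

with open("training_extract.csv", "rb") as f:   # hypothetical file
    ciphertext = fernet.encrypt(f.read())

with open("training_extract.enc", "wb") as f:   # store only the encrypted form
    f.write(ciphertext)

# Decrypt only inside the controlled training environment.
plaintext = fernet.decrypt(ciphertext)
```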

Business Associate Agreement (BAA) Violations:

If a third-party AI or advanced technology vendor processes PHI, they must sign a BAA with the covered entity, such as a healthcare provider. Misusing data by exceeding the agreed-upon scope, such as using PHI for purposes beyond auditing, or failing to comply with HIPAA terms, violates this agreement. For example, a vendor training an AI or advanced technology on PHI for marketing purposes instead of compliance would breach the BAA.

Failure to Minimize Data Use: HIPAA requires the “minimum necessary” standard, meaning only the least amount of PHI needed for a task should be used.

Overloading AI training with excessive patient data when a subset would suffice violates this principle. For example, using full medical records to train a model that only needs billing codes could lead to non-compliance.
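A minimal sketch of that principle in a training pipeline, with hypothetical file and column names, would restrict the extract to the billing fields the model actually needs, so nothing else leaves the source system:

```python
# Minimal sketch of the "minimum necessary" standard in a training pipeline.
import pandas as pd

full_records = pd.read_csv("patient_records.csv")  # hypothetical source

# This example's model needs billing codes only, so only those fields
# are extracted and retained.
training_extract = full_records[["claim_id", "cpt_code", "icd10_code"]]
training_extract.to_csv("training_minimum_necessary.csv", index=False)
```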

Some steps to avoid HIPAA violations include the following:

    • Obtain explicit patient consent or use fully de-identified datasets (per HIPAA’s Safe Harbor method).
    • Implement robust encryption and access controls during AI training.
    • Ensure business associate agreements are in place with vendors and regularly audited.
    • Limit data use to the minimum necessary and document compliance efforts.

In healthcare AI and advanced technology, as with auditing tools, ensuring datasets are handled ethically and legally is critical to leveraging technology without risking patient privacy or regulatory penalties.
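As a rough sketch of the Safe Harbor step in the list above: Safe Harbor requires removing 18 categories of identifiers (plus having no actual knowledge that the remainder could be re-identified), and the fragment below shows only a few of them, with assumed column names.

```python
# Minimal sketch of Safe Harbor-style de-identification. Only a handful of
# HIPAA's 18 identifier categories are handled here, for illustration.
import pandas as pd

records = pd.read_csv("phi_extract.csv")  # hypothetical source extract

direct_identifiers = ["name", "ssn", "mrn", "phone", "email", "street_address"]
deidentified = records.drop(columns=direct_identifiers)

# Generalize rather than keep precise values.
deidentified["zip3"] = records["zip_code"].astype(str).str[:3]   # 3-digit ZIP
deidentified["birth_year"] = pd.to_datetime(records["birth_date"]).dt.year
deidentified = deidentified.drop(columns=["zip_code", "birth_date"])
```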

Snell: How do we ensure the vendors’ AI products legally source information from the internet?

Popa: Ensure vendors comply with intellectual property laws and data sourcing ethics by requiring transparency about the origins of the training data. Vendors can often partner with entities to obtain Application Programming Interface (API) access, which allows the vendor’s product to interact with the partner’s information under agreed terms. Where applicable, contracts should mandate certifications, such as GDPR alignment for certain foreign entities, and include indemnification clauses.
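To illustrate the API arrangement described here, the sketch below pulls data through a partner’s documented API using a credential granted under contract, rather than scraping. The endpoint URL, route, and environment variable are hypothetical.

```python
# Minimal sketch: pulling licensed data through a partner API, not scraping.
import os
import requests

API_BASE = "https://api.example-partner.com/v1"  # hypothetical partner API
token = os.environ["PARTNER_API_TOKEN"]          # credential granted by contract

response = requests.get(
    f"{API_BASE}/reference-data",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()
records = response.json()  # data sourced under the partnership's terms
```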

Snell: What is the most important part of a robust AI policy?

Popa: It is vital to provide employees, vendors, and contractors with a clear idea of when AI and advanced technology may be used. Consultants, including attorneys, can be an entry point for cyber threat actors, and they often work off-site; these professionals need to be trained not to use public AI search engines to do their work. They should never enter even de-identified patient information into a public AI engine, for example, to create a report or identify patterns. Another aspect is ensuring that vendors have trained their own AI or advanced technology and that it is secure.

Snell: Talk about your idea to teach employees how to build AI tools efficiently, effectively, compliantly, and ethically.

Popa: The idea is to encourage employees to identify points in their work that could be improved and to build AI and advanced technology tools that perform those tasks. Employees could also be provided with training on how to build AI and advanced technology tools. While AI platforms can now build tools for people without coding experience, these types of tools may run into issues if they need to be supported over time and the person who made the tool does not code.

Regulatory training, such as on data security, could also be part of the education. This works best when led by the person in charge of AI and advanced technology at the healthcare entity, ideally someone with a clinical degree, such as a medical doctor, as well as an entrepreneurial background. The clinical degree helps the individual understand the application of the tools, as this person would understand the healthcare entity’s operations.

Snell: What person or department in an organization do you think will be responsible for oversight of the implementation of AI audit tools? In particular, who will ensure that the new AI and advanced technology audit tools are properly developed? Who will be able to audit the AI audit tools knowledgeably and effectively?

Popa: A person in a compliance profession who focuses on auditing and monitoring should audit the tools to ensure they are helpful and compliant.

Snell: What exactly is AI going to do to improve auditing? What is it going to do that isn’t happening now? How exactly will the audit be different from a human and manual audit process? What can AI do that a human can’t?

Popa: AI will enable predictive auditing, real-time anomaly detection, monitoring of security and privacy breaches, monitoring of third-party vendors, and scalable analysis of massive datasets, uncovering hidden patterns, such as fraud, that manual processes overlook or lack the time to check in every single instance. Unlike human audits, which are sample-based and extremely time-intensive, AI and advanced technology can process full populations instantly, flagging risks proactively.

Humans cannot match AI and advanced technology in speed when correlating billions of data points, or in tireless consistency, but AI may lack nuance, the ability to weigh many different source documents, and contextual judgment. This is why hybrid workflows appear likely, in which humans make some compliance or audit decisions when the AI or advanced technology falls below a certain level of confidence.

Snell: What do you think is the most exciting impact AI will have on the future of the compliance and ethics field?

Popa: Perhaps it is due to my auditing background, but the utilization of AI and advanced technology in auditing and monitoring has great potential. An incredible amount of data may be analyzed. Further, items for which the AI or advanced technology has a lower confidence score, such as below 98%, can go into a queue where humans review them and make the more complex decisions. The technology allows people to think about more nuanced issues and leave the simpler determinations to the AI or advanced technology.
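A minimal sketch of that review queue, using the 98% threshold mentioned above and a hypothetical model interface, might look like this:

```python
# Minimal sketch of the hybrid review queue: determinations below the
# confidence threshold are routed to a human auditor.
CONFIDENCE_THRESHOLD = 0.98  # threshold from the discussion above

def route_finding(item, model):
    """Let the AI decide high-confidence items; queue the rest for humans."""
    label, confidence = model.classify(item)  # hypothetical model call
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"item": item, "decision": label, "decided_by": "ai"}
    # Lower-confidence items go to humans for nuanced judgment.
    return {"item": item, "decision": None, "decided_by": "human_queue"}
```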

Snell: Thank you so much for participating in this interview. You have really provided us with a lot of important information on AI and how our industry can and should be integrating this technology into everyday tasks.

Endnotes

1. https://www.justice.gov/opa/pr/national-health-care-fraud-takedown-results-324-defendants-charged-connection-over-146.

2. https://www.hhs.gov/hipaa/for-professionals/faq/may-health-care-providers-disclose-phi-in-value-based-care-arrangements/index.html.