Examining the Ethical and Regulatory Approaches to Artificial Intelligence (AI) in Human Subjects Research
12 October 2020 by Tamiko Eto, MS, CIP - Division of Research, Kaiser Permanente
We are excited to add two new modules to our Technology, Ethics, and Regulations course: “Artificial Intelligence (AI) and Ethics in Human Subjects Research” and “Regulatory Approaches to Artificial Intelligence (AI) in Human Subjects Research”! These in-depth modules review the ethical and regulatory challenges around AI human subjects research (AI HSR).
The modules are the result of an exciting collaborative effort among AI experts, biomedical and AI ethics professionals, Institutional Review Board (IRB) professionals, and privacy experts. Through these modules, learners will gain insight into the latest ethical and regulatory considerations and become familiar with the various risks that come with AI HSR.
Ethical Considerations in AI HSR
AI is evolving so rapidly that we often overlook its presence in our everyday lives. From basic internet search engines to sophisticated machine learning (ML) software applications, it has made several positive and meaningful contributions to our overall health and well-being. However, these rapid advancements also carry many known and unknown risks.
When evaluating research involving AI, we are obligated to consider not only the risks arising from its technical dimensions and potential uses, but also the social and ethical implications of the technology in specific contexts. One common ethical concern is the risk of bias in datasets. AI, and ML techniques in particular, relies heavily on data collected from humans, and it is within these datasets that most of the risks in AI HSR reside. For example, training an algorithm on data collected from limited populations that do not accurately reflect the target population can perpetuate, and even amplify, prejudice and discrimination.
Within these modules, we discuss other common ethical issues in AI HSR and the ethical principles relevant to them, and offer additional insight into the extent to which current ethical approaches to human subjects research resolve the issues that AI HSR poses and where they may need to be reexamined.
In these modules, we also present the current regulations related to AI HSR and walk learners through the typical IRB review process using examples of common types of AI HSR, including commercial and federally funded, military-focused, and U.S. Food and Drug Administration (FDA)-regulated AI HSR. Utilizing the Office for Human Research Protections’ (OHRP) newly revised decision tree, we guide learners through the process of determining when a study is human subjects research; whether it meets exempt, expedited, or full board review criteria; and what other policies and regulations may apply. As we explain how the current regulations can apply in the context of AI HSR, we also discuss how the unique attributes of AI HSR raise ethical concerns given the limitations of, and tensions within, the current regulatory framework.
Current Regulatory Limitations
While IRBs can review AI research applications under the current regulations, the current definitions and regulatory review categories have not adequately adapted to this rapidly evolving technology, especially in consideration of the unique characteristics of AI/ML-related research.
By relying on existing guidance alone, important risks and ethical considerations may be overlooked, placing research participants, and possibly society, at greater risk of harm. Without updated guidance at both the institutional and federal levels, IRBs are left reviewing and making determinations under older, traditional interpretations of research with human subjects.
Moreover, outdated definitions compromise the ability to accurately identify how and when the data (and research participants) used in these AI research applications fit within current oversight requirements, resulting in significant inconsistencies and insufficiencies in the application of human subject protections.
Some common challenges have been:
Making human subjects research determinations (for example, what requires oversight and what does not)
Assigning the protocol to an appropriate level of review (for example, exempt, expedited, or full board)
Sufficiently ensuring the appropriate data privacy and confidentiality protections
Providing for a fully informed consent process
The current regulatory guidance has served as a strong and reliable source for human research protections for decades. As research continues to evolve, now is the time to update the guidance to address this unique and rapidly evolving field of research.
To ensure that AI HSR is conducted ethically, the research ethics community needs to rethink the currently available tools and revise them to provide better guidance. To be effective, the review of such research should include subject matter experts (for example, AI and privacy experts) in the IRB’s discussion. Only then can we create adequate guidelines for AI HSR, develop trainings for IRB members and AI researchers, and help shape policy going forward.
The authors of these new modules present their interpretation of the current regulations and how core concepts in research ethics and regulatory definitions apply in the unique context of advancing AI. Through these modules, learners should walk away with a deeper understanding of the current ethical and regulatory issues related to the use of advancing technology in human subjects research. It is our hope that all involved in the regulatory and ethical landscape of AI research will see these modules as groundwork: a shared, consistent understanding of the core concepts, regulatory definitions, and ethical concerns associated with AI HSR that individuals and institutions can build on to develop their own standard operating procedures and reviewer checklists.