
The increasing interest in research involving artificial intelligence (AI) adds a new dimension to the work of individuals who conduct research with human subjects and of those responsible for human subject protections.

This webinar explores the current protections, regulatory elements, and ethics tools associated with protecting human subjects in light of AI research, including a discussion of their current limitations. The webinar concludes with suggestions for shaping policy on, and providing education about, AI research with human subjects.

Learning Objectives

  • Review the current regulatory framework for human subject protections and its applicability to research involving AI.
  • Identify existing protections, ethics tools, and their limitations for human subjects in AI research.
  • Describe approaches to shape policy and provide training on AI research that involves human subjects.


Intended Audience

Institutional Review Board (IRB) Members, Researchers, Research Team Members, Students, IRB Administrators and Staff, Human Subject Protection Staff

Meet the Presenters

Cansu Canca, PhD – AI Ethics Lab
Cansu is a philosopher and the founder and director of the AI Ethics Lab. She holds a PhD in philosophy, specializing in applied ethics, and works on the ethics of technology and public health ethics. Previously, she was a lecturer at the University of Hong Kong and a researcher at the Harvard Law School, Harvard School of Public Health, Harvard Medical School, National University of Singapore, Osaka University, and the World Health Organization.

Tamiko Eto, MS, CIP – Stanford Research Institute (SRI) International, Office of Research Integrity
Tamiko is responsible for the administrative leadership and direction of all operational aspects of SRI’s Human Research Protection Program (HRPP). She works closely with SRI AI investigators and administrative leadership to address the unique ethical challenges around AI and how they fit into the regulatory landscape.