
NIH Seeks Public Input on Safeguarding Genomic Data in the Age of Generative AI

As generative AI tools continue to revolutionize biomedical research, the National Institutes of Health (NIH) urges the scientific and tech communities to weigh in on how best to balance innovation with privacy. In its latest Request for Information (RFI) [NOT-OD-25-118], released May 30, 2025, NIH calls for public comment on strategies to responsibly develop and share AI tools trained using human genomic data from controlled-access repositories.

Why This Matters

AI models, particularly generative ones, can inadvertently “memorize” and leak sensitive data, raising significant privacy risks for research participants. The stakes are high, with biomedical datasets often containing personal and genetic information.
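To give a concrete sense of what "memorization" can mean in practice, the sketch below (a hypothetical, self-contained Python example with synthetic numbers, not drawn from the NIH notice) shows the simplest form of a membership inference attack: an attacker compares a model's per-record loss against a threshold, because overfit models tend to assign suspiciously low loss to the records they were trained on.

```python
# Hypothetical sketch: a loss-threshold membership inference attack.
# All data and "model outputs" here are synthetic toy values.
import numpy as np

rng = np.random.default_rng(0)

def per_example_loss(model_probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Cross-entropy loss for each record, given the model's predicted
    probability of that record's true label."""
    eps = 1e-12
    return -np.log(np.clip(model_probs[np.arange(len(labels)), labels], eps, None))

# Toy predicted probabilities over 2 classes for 10 candidate records.
# An overfit model is typically far more confident on its training members.
probs_members = rng.dirichlet([8, 1], size=5)      # confident -> low loss
probs_nonmembers = rng.dirichlet([2, 2], size=5)   # uncertain -> higher loss
probs = np.vstack([probs_members, probs_nonmembers])
labels = np.zeros(10, dtype=int)                   # true class is 0 for all records

losses = per_example_loss(probs, labels)
threshold = np.median(losses)                      # attacker-chosen cutoff
predicted_member = losses < threshold              # low loss => "was in the training set"

for i, (loss, flag) in enumerate(zip(losses, predicted_member)):
    print(f"record {i}: loss={loss:.3f} -> {'member?' if flag else 'non-member?'}")
```

Real attacks on genomic models are more elaborate, but the underlying signal, unusually confident behavior on training records, is the same.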

Recognizing this, NIH has temporarily paused the sharing and retention of generative AI models developed using controlled-access genomic data (see NOT-OD-25-081). Now, NIH is looking to the community to help shape policies that allow AI progress while protecting participant data.

What NIH Wants to Know

NIH is seeking input on:

  • Risks of data leakage from generative AI models trained on human genomic datasets.
  • Privacy-preserving technologies, such as techniques to mitigate membership inference attacks (MIAs).
  • Additional mitigation strategies that could prevent unintended exposure of controlled-access data across the AI lifecycle (a minimal sketch of one such mitigation follows this list).
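As one illustration of the kind of privacy-preserving technique the RFI asks about, the sketch below implements the core of differentially private SGD (per-example gradient clipping plus Gaussian noise) on an entirely synthetic toy dataset. The parameter values and variable names are illustrative assumptions; the NIH notice does not prescribe this or any specific method.

```python
# Hypothetical sketch of one mitigation family: differentially private SGD
# (per-example gradient clipping + Gaussian noise). Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)

# Toy binary-classification data standing in for sensitive records.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(16)
clip_norm = 1.0    # per-example gradient clipping bound C
noise_mult = 1.1   # noise multiplier (relative to C)
lr = 0.1
batch_size = 32

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    xb, yb = X[idx], y[idx]

    # Per-example gradients of the logistic loss.
    preds = sigmoid(xb @ w)
    grads = (preds - yb)[:, None] * xb                # shape (batch, dim)

    # Clip each example's gradient to norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add Gaussian noise calibrated to the clipping bound, then average.
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    w -= lr * noisy_sum / batch_size

print("trained weights (noisy):", np.round(w, 3))
```

The clipping bound limits how much any single participant's record can influence an update, and the added noise masks whatever influence remains, which directly weakens membership inference signals.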

Who Should Respond

The RFI is open to a wide audience, including AI developers, biomedical researchers, data custodians, institutions, and the general public. All perspectives are welcome.

How to Participate

Please submit your comments via the NIH comment form by July 16, 2025. Submissions may be anonymous, though respondents can choose to include contact information.

Background and Resources

This RFI builds upon previous NIH data-sharing policies, including the 2014 Genomic Data Sharing Policy (NOT-OD-14-124) and the March 2025 guidance on Protecting Human Genomic Data (NOT-OD-25-081).

View the complete notice for more information, including examples of research on data leakage risk from generative AI models and associated mitigation methodologies.


Request for Information on Responsibly Developing and Sharing Generative Artificial Intelligence Tools Using NIH Controlled Access Data (View NIH Notice)