The National Institutes of Health (NIH) has issued an important notice aimed at maintaining the integrity and confidentiality of its peer review process. This new directive builds upon the existing guidance in NOT-OD-22-044, which outlines the rules, responsibilities, and possible consequences associated with NIH peer reviews. Specifically, this notice introduces a clear prohibition on the use of natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies in the peer review process.
Why This Matters: The Role of Confidentiality in Peer Review
Peer review is a cornerstone of the scientific research process, ensuring that grant applications and R&D contract proposals are evaluated fairly and rigorously. Confidentiality is critical to this process, allowing reviewers to share candid opinions and evaluations without fear of unauthorized disclosure or misuse of the information. When reviewers analyze a proposal, they are entrusted with privileged information that must remain secure.
The use of AI tools in this context raises significant concerns. These technologies typically require detailed input data to generate critiques or analyses, yet where that data goes, how it is stored, and how it may be reused by the AI systems is largely opaque. This lack of transparency poses a substantial risk to the confidentiality and integrity of the NIH peer review process.
Key Provisions of the Notice: What Reviewers Need to Know
The NIH’s notice makes several key clarifications:
- Prohibition on AI Tools: NIH explicitly prohibits peer reviewers from using AI tools to analyze and formulate critiques of grant applications and R&D contract proposals. This prohibition is in place to prevent any potential breaches of confidentiality and to protect the integrity of the peer review process.
- Updated Confidentiality Agreements: To reflect this new rule, NIH is revising its Security, Confidentiality, and Nondisclosure Agreements for Peer Reviewers. These updated agreements will clearly state that the use of AI tools in the peer review process is not allowed.
- Consequences for Violations: Reviewers are reminded that uploading or sharing content from an NIH grant application or contract proposal to online AI tools is a violation of NIH’s confidentiality requirements. Such actions could have serious consequences, including disqualification from the peer review process.
Implementation and Broader Impact
Moving forward, all NIH Peer Reviewers will be required to sign a modified Security, Confidentiality, and Nondisclosure Agreement before participating in the review process. This agreement will affirm their understanding of the prohibition on using AI tools and their commitment to upholding the confidentiality of the review process.
Moreover, NIH is extending this policy beyond peer reviewers to include members of NIH National Advisory Councils and Boards. These individuals will also be required to certify similar agreements, reinforcing the importance of maintaining confidentiality across all levels of the NIH’s operations.
Conclusion
This latest notice from the NIH serves as a crucial reminder of the responsibilities that come with participating in the peer review process. By explicitly prohibiting the use of AI tools, the NIH is taking a strong stance on protecting the confidentiality and integrity of its reviews. As the landscape of scientific research evolves, the methods used to evaluate and support that research must evolve as well, while safeguarding the fundamental principles of security and trust that underpin the entire system.