Overview
The Council of Graduate Schools (CGS) and the Institut national de la recherche scientifique (INRS) have jointly released an ambitious new Principles and Action Agenda to guide how graduate institutions worldwide can harness artificial intelligence (AI) responsibly, ethically, and effectively. During the 2025 Strategic Leaders Global Summit on Graduate Education, held from September 28 to 30, 2025, in Quebec City, Canada, leaders from 15 countries convened to address the opportunities and risks that AI presents for graduate education and to develop the agenda.
This new agenda arrives at a pivotal moment: AI is rapidly reshaping research, teaching, assessment, and the labor market. The summit’s core mission is clear: ensure that AI elevates student success while preserving the human-centered values at the heart of graduate education.
Why AI Demands Leadership and Coordination
The document emphasizes that generative AI is a transformative force capable of accelerating innovation, advancing knowledge, and improving lives. However, without thoughtful governance, AI can also undermine academic integrity, amplify inequities, and threaten critical thinking skills.
Graduate students, in particular, stand at the nexus of these changes. Not only are they users of AI, but they are also future producers and shapers of these technologies. That reality places new responsibilities on institutions to provide ethical guidance, skill-building opportunities, and supportive infrastructures.
Seven Core Principles for Ethical and Effective AI Integration
The 2025 Principles outline a shared global framework for developing AI policies that align with the values of scholarship, equity, and human development. These principles include:
- Humanity – AI should enhance, not replace, human intelligence, dignity, and agency.
- Autonomy – Individuals should retain the choice of whether and how to use AI tools.
- Integrity – Institutions must establish and uphold clear ethical frameworks that guide the responsible use of AI.
- Equity & Fairness – AI initiatives must reduce, not reinforce inequities across disciplines, backgrounds, and institutions.
- Transparency – Students, faculty, and staff must understand when, where, and how to use AI models.
- Literacy – Institutions must provide the university community with AI education and upskilling opportunities.
- Responsibility & Accountability – Users and institutions share responsibility for ensuring the benefits of AI while managing its risks.
These principles serve as a “living document,” updated as technologies, needs, and evidence evolve.
A Concrete Action Agenda for Graduate Institutions
CGS and INRS lay out a detailed Action Agenda designed to help universities implement these principles effectively.
For Institutions
- Align AI use with core mission and values related to research, teaching, and student success.
- Promote equitable access to AI tools both within and beyond campus.
- Create cross-campus AI committees to guide policy development and share findings.
- Develop clear guidelines for faculty, staff, and students on acceptable AI use.
- Deliver required AI literacy training covering ethics, technical function, and model-specific knowledge.
- Use surveys and data to inform evidence-based policy decisions.
- Establish robust legal frameworks for intellectual property, data privacy, and research integrity to ensure transparency and accountability.
These steps work together to build consistency, transparency, and trust at a time when AI norms vary widely across programs and institutions.
The INRS Perspective: A Case Study in Responsible AI Integration
The agenda also highlights work underway at INRS and across Quebec’s higher education landscape, where four core guiding principles underpin all graduate-level AI use: transparency, prior authorization, responsibility, and data protection.
INRS has implemented:
- Revised course outlines specifying permitted levels of AI use,
- Optional AI declaration forms, and
- Specialized training partnerships to strengthen AI literacy.
These efforts demonstrate how universities can practically support student success while upholding rigorous academic standards.
Why This Agenda Matters
CGS and INRS’s collaborative statement arrives at a time when global institutions are grappling with inconsistencies in policy, uneven access to AI, and concerns about integrity, privacy, and student well-being. The agenda reframes AI not simply as a risk but as a transformative opportunity that thoughtful, collective governance can unlock.