GenAI Development Programme
GenAI Collaborative Critical Praxis
Capital Normal University (CNU), Beijing, November 2025
Programme
The CNU Generative Artificial Intelligence (GenAI)/Large Language Model (LLM) Development Programme was a four-week, research-based initiative designed to develop GenAI Collaborative Critical Praxis - strategically integrating the use of LLMs into academic research and teaching practices at the College of Education, Capital Normal University, Beijing. Developed in response to the recognised need for institutional guidelines and structured support at CNU for both staff and students, the Programme aimed to foster creative dialogue and to create new research and teaching cultures.
The Programme was led by Professor Ken Spours (Distinguished Visiting Professor, CNU; Honorary Professor, Tianjin University of Technology and Education; and Professor Emeritus, University College London) and Professor Liying Rong (College of Education, CNU).
Key concepts
The Programme was grounded in a set of political-economy-ecology-technology concepts, providing an analytical lens for understanding GenAI. These included:
- Human Intellect-Machine Intelligence Collaboration - exploring the role of LLMs as Machine Intelligence and as cognitive partners to extend the Human Intellect.
- Progressive mediation of GenAI - moving beyond a simple binary of acceptance or rejection of new technologies and towards a third path of collaborative critical praxis (CCP), built through educational programmes, ethical frameworks, and layers of progressive mediation.
- Developing Technological Organic Intellectuals - applying modern Marxist concepts related to advanced technology and the creation of 'progressive technological organic intellectuals' as key mediators.
- A political-economy-ecology analysis - the role of geo-political competition between the US and China, the trajectory of China's technological development, and the significance of the GenAI open-source movement.
Programme Sessions (click on the icons to access the PowerPoint presentations)
​
Session 1.
Navigating New LLM Frontiers - opportunities, risks and ethical working in the university and the wider Chinese context
Session 1 of the CNU LLM Development Programme introduces participants to the rapidly evolving landscape of LLMs, exploring both their transformative opportunities and the risks they pose for higher education, particularly within the Chinese context. The session outlines the aims of the programme, encouraging participants to share experiences and expectations, and to examine the 'LLM Paradox' of innovation versus ethical challenge. It also presents a framework of 'mediation layers'—from individual skills to institutional governance—to help universities navigate and shape GenAI development responsibly.
Session 2.
Demystifying LLMs - from 'black boxes' to 'white boxes' - understanding the architecture of large language models
​
Session 2 of the Development Programme guides participants through the internal architecture and evolution of LLMs, explaining how they operate as predictive, token-based systems built on transformer structures, parallel processing, and self-attention mechanisms. The session contrasts Chinese innovations such as DeepSeek with US models, highlighting strategic differences in scaling, alignment, multi-modality, and mixture-of-experts design. It introduces 'Collaborative Critical Praxis' as a methodology for users to understand and creatively control LLMs by deepening their practical skills, technical insight, and wider political-economy-ecology awareness. Through exploring system prompts, alignment methods, reward models, and the mechanisms that shape LLM behaviour, the session moves participants from viewing LLMs as opaque 'black boxes' to understanding them as transparent 'white boxes', laying essential foundations for effective human–machine collaboration.
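To make the 'white box' framing concrete, the sketch below illustrates, in deliberately toy form, the two mechanisms referred to above: scaled dot-product self-attention with a causal mask, and greedy next-token prediction over a small vocabulary. It is an illustrative assumption rather than programme material - the vocabulary, dimensions and weights are invented and untrained, whereas production LLMs stack many such attention layers with learned parameters.

```python
# A minimal, untrained sketch of a 'predictive token-based system':
# single-head scaled dot-product self-attention plus greedy next-token choice.
# All names, sizes and weights here are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "machine", "intelligence", "extends", "human", "intellect"]
d_model = 16  # toy embedding width

# Randomly initialised embedding table, attention weights and output projection.
E = rng.normal(size=(len(vocab), d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, len(vocab)))  # hidden state -> vocabulary logits

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product attention with a causal mask."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)                # token-to-token relevance
    causal = np.triu(np.ones_like(scores), k=1) == 1   # hide future positions
    scores = np.where(causal, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over earlier tokens
    return weights @ v                                 # context-weighted mixture of values

def predict_next(prompt: list[str]) -> str:
    """Greedy next-token prediction: attend over the prompt, then score the vocabulary."""
    x = E[[vocab.index(t) for t in prompt]]
    hidden = self_attention(x)
    logits = hidden[-1] @ W_out                        # final position predicts what follows
    return vocab[int(np.argmax(logits))]

# With random weights the output is arbitrary; training is what makes it meaningful.
print(predict_next(["the", "machine", "intelligence"]))
```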
Session 3.
Collaborative Critical Praxis - the dialectic of the Human Intellect & Machine Intelligence
​
Session 3 explores the emerging dialectical relationship between Human Intellect and Machine Intelligence, introducing participants to the concept of Collaborative Critical Praxis (CCP) as a disciplined framework for developing a Fusion Intellect—where human ethical intent and analytical judgement are augmented by machine computation. The session outlines the six‑stage praxis cycle, from ideological grounding and prompt shaping to critique, refinement and trust‑based partnership, all while recognising the structural constraints imposed by system prompts and reward models. It examines different forms of human–LLM engineering, including prompt, context and dispositional approaches, and introduces Rapid Recursive Lamination as a method for iterative calibration and intellectual control. Through these elements, the session positions human–machine collaboration as a dynamic, politically aware practice aimed at producing stronger, more critical forms of combined knowledge.
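As a purely generic illustration of the prompt and context forms of human–LLM engineering mentioned above (not the programme's six-stage CCP cycle or Rapid Recursive Lamination), the sketch below contrasts refining the task instruction itself with curating the framing and evidence the model is given to reason over. The send() function is a hypothetical placeholder, not a real API.

```python
# Generic contrast between prompt engineering (shaping the instruction) and
# context engineering (curating what the model reasons over). Illustrative only:
# send() is a hypothetical placeholder for whatever LLM client is actually used.

def send(system: str, user: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real client in practice."""
    return f"[model response to a {len(system) + len(user)}-character request]"

# 1. Prompt engineering: iterate on the instruction itself.
task = (
    "Critique the draft claim below in three bullet points, "
    "flagging any unsupported assertions.\n\n"
    "DRAFT CLAIM: GenAI will inevitably deskill university teachers."
)

# 2. Context engineering: also control the framing and the evidence supplied.
system_frame = (
    "You are assisting an education researcher. Ground every point in the "
    "supplied sources and say 'not in the sources' rather than inventing evidence."
)
sources = "SOURCE 1: ...researcher-selected excerpts pasted here...\nSOURCE 2: ..."

print(send("You are a careful academic reviewer.", task))
print(send(system_frame, f"{sources}\n\nTASK: {task}"))
```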
Session 4.
Applying LLMs in academic research - co-construction, distributed cognition & relational attribution
​
Session 4 focuses on how large language models can be rigorously and ethically integrated into academic research through co-construction, distributed cognition, and relational attribution. Participants explore how Collaborative Critical Praxis (CCP) provides a disciplined 'hard work' cycle for shaping machine outputs, reducing risks such as hallucination, ideological drift, and authorship erosion. The session demonstrates how LLMs can support each stage of the research process - from topic formulation to dissemination - while emphasising the need for human judgement, transparent methodological annexes, and clear recognition of machine labour. It also outlines how distributed cognition emerges from the interplay of human intentionality and machine computation, highlighting the individual responsibility of 'relational attribution', which acknowledges the Machine's contribution, alongside institutional responsibilities for supporting safe, ethical and analytically robust human–machine research partnerships.
Session 5.
Using LLMs in higher education teaching & learning - opportunities and risks for staff, students & the university
​
Session 5 examines how large language models can be used responsibly and creatively in higher education teaching and learning, exploring opportunities for enhanced efficiency, differentiated instruction, curriculum modernisation and the development of higher-order critical skills. The session analyses risks - including hallucination, ideological drift, student and staff deskilling, and threats to academic integrity - and provides structured guidance for integrating LLMs across curriculum design, material creation, assessment and student learning support. It introduces methodological annexes to ensure transparency and academic honesty, and frames both teacher and student use of LLMs within Collaborative Critical Praxis so that the human intellect always leads the educational process. Overall, the session positions LLMs as tools for co-construction and cognitive partnership, requiring continuous audit, ethical oversight and dialogue between teachers, students and the machine.
Session 6.
Political-economy-ecology analysis of GenAI systems
Session 6 introduces a political‑economy‑ecology framework for understanding generative AI as part of shifting global technological blocs, contrasting Computational Capitalism and Platform Capitalism 2.0 with emerging alternative, socialised GenAI systems. The session examines how LLM‑driven data‑generative models have reshaped power relations, economic structures, and cultural‑ideological narratives, and explores the possibility of building counter‑hegemonic GenAI ecosystems grounded in public value, democratic governance, open infrastructure, ecological sustainability and alliances of technological organic intellectuals. Participants analyse China’s distinctive approach—positioned as a progressive hybrid bloc seeking to balance innovation, strong regulatory regimes, sovereign AI infrastructure and ecological design principles—and consider the geopolitical pressures and compressed technological timelines shaping the future of GenAI.
Session 7.
Working Ethically with Large Language Models - towards guidelines for CNU
​
Session 7 summarises how LLMs can support research, writing, and learning while maintaining high standards of academic integrity. It sets out clear distinctions between acceptable use, unacceptable use, and emerging grey areas, with a focus on fairness, transparency, responsibility, and rigorous verification. The session also begins shaping a draft CNU framework for ethical LLM use, highlighting the importance of digital literacy, responsible prompting, transparent declaration, and critical engagement understood as collaborative critical praxis. Participants are invited to co-develop guidelines that help the university harness AI in ways that support ethical scholarship and innovative knowledge production.
Session 8.
Building university GenAI innovation ecosystems
​
Session 8 introduces a strategic, multi-layered approach to developing institutional capacity in generative AI at Capital Normal University. Drawing on Chinese national, regional, institutional, and micro-level innovation models, the session explores how universities can build mission-led GenAI ecosystems that connect governance, curriculum reform, research, industry collaboration, infrastructure, and academic culture. Comparative insights from Fudan and Tsinghua highlight emerging best practices in AI literacy, ethics, AI-for-Science research, and global governance partnerships. The session concludes by outlining a provisional GenAI roadmap for CNU, including governance structures, digital resources, staff-student guidelines, external partnerships, and new development activities designed to position CNU as a leading hub for ethical and socially responsible GenAI innovation.
Outcomes
The Programme aimed to create a framework of university guidelines for ethical LLM partnership working at CNU. It is also hoped that participants will form an ongoing LLM 'community of practice' to serve as a forum for continued refinement of CNU's GenAI guidelines and the eventual development of a university GenAI innovation ecosystem.