
Socialised generative artificial intelligence 

The paradoxes of large language models in Chinese higher education

 

Overview

This article examines the use of large language models (LLMs) in Chinese higher education, highlighting the "danger/opportunity paradox" of this rapidly evolving technology. The research, conducted through a case study at a Chinese university, reveals that while LLMs offer significant benefits, there is a need for more formal guidance on their use.

 

Widespread adoption

LLM usage is extensive among the academic staff in the study. In the first half of 2025, the Chinese LLM DeepSeek V3 emerged as a significant player, quickly becoming the most frequently mentioned model among users at the case-study institution.


Benefits in teaching, research and administration 

The use of LLMs is seen as a way to increase efficiency and access to information. Academics are using them for tasks across teaching, research and administration.


Teaching - creating lesson plans, organizing course content, and developing teaching materials. LLMs also support personalized learning and critical thinking by allowing students to use the models to answer questions and then discuss the results.

 

Research - conducting preliminary literature reviews, generating new ideas and polishing academic papers.


Administration - assisting with planning and time management.


Challenges and Concerns

Despite the benefits, interviewees identified several challenges, including intellectual over-reliance, LLM reliability and the potential for amplifying societal prejudices.


Academic integrity - there are fears of increased plagiarism and over-reliance on the technology, which could undermine academic fairness, critical thinking and problem-solving skills.


Bias and prejudices - interviewees were keenly aware of the risks of bias and the amplification of social prejudices.


Hallucinations - academics were concerned about the unreliability of LLM outputs and the need to verify information for formal academic work, although there was also a recognition that recent LLM versions are becoming more reliable.


Lack of progressive mediation - the study found that the relationship between users and the technology is not yet positively 'mediated', allowing existing dominant technological cultures to shape relationships with the new technology. Interviewees expressed a need for more guidelines and institutional support to mitigate the dangers and amplify the positive potential of LLMs.


Increasing layers of progressive mediation

The article concludes that while the technology presents a complex set of opportunities and dangers, increasing layers of progressive mediation - guidelines, support and developing practitioner communities of practice - could be key first steps to addressing the GenAI paradox.


Exploring the ‘problem space’ of unmediated relationships with a fast-evolving technology: the uses of Large Language Models (LLMs) in teaching and research in a Chinese university 

 

Liying Rong, Zhen Zhong and Ken Spours - 2025.

Socialised systems of Generative Artificial Intelligence 


The second article, a conceptual review, builds on the first by arguing that the GenAI paradox is rooted in the current dominant, vertically organized system of Platform Capitalism 2.0.

 

Drawing on neo-Gramscian theory, the publication proposes that GenAI is embedded within competing technological blocs. While Platform Capitalism 2.0 represents the dominant bloc, the article envisions an alternative: a horizontally organized, multi-level socialised GenAI ecosystem.

 

This alternative system is envisioned to be mediated by progressive technological organic intellectuals and a technological general intellect.

 

The article employs Celia Lury's concept of "problem spaces" and "recomposed research problems" to explore pathways for transitioning to this new system. This approach is based on building multiple mediating layers of ethical purpose, democratic governance, and educative infrastructures between new technologies and their users. The analysis addresses the temporal challenges of transitioning in an era of "compressed technological time".

 

The paper also highlights the growing geopolitical competition in GenAI development, notably the rivalry between the US and China. It notes China's strategic commitment to LLM development and its fracturing effect on the global Platform Capitalism 2.0 formation.

 

The article is a valuable contribution to understanding how GenAI can be transitioned from a tool of capitalist hegemony to a socialised system aligned with democratic and progressive principles.

GenAI architectural dimension

 

The publication addresses the GenAI architectural dimension within the context of Platform Capitalism 2.0, identifying a fundamental contradiction: a "data grab" driven by the needs of powerful Large Language Models (LLMs) is creating a new "data scarcity" of high-quality, clean information. 

 

To counter this, a socialised GenAI system would propose new models for shared, ethically sourced, high-quality, and community-contributed datasets. This would move beyond the extractive "data grab" mentality to foster a collaborative and democratically governed data ecosystem. A core part of this vision is the development of "data commons", which are publicly accessible repositories of data, underpinned by technologies like federated learning, privacy-preserving AI techniques, and open data initiatives to ensure transparent use and collective ownership.
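The core idea of federated learning mentioned above - that institutions can contribute to a shared model without surrendering their raw data - can be illustrated with a minimal sketch of federated averaging. This is an illustrative toy example only, not drawn from the article; the data values and function names are hypothetical, and the "model" is reduced to a single parameter (a mean) for clarity.

```python
# Toy illustration of federated averaging (FedAvg):
# each client fits a one-parameter model locally, and only the
# parameters (never the raw data) are sent to a coordinator,
# which combines them into a shared global model.

def local_fit(data):
    """Each client estimates the mean of its own private data locally."""
    return sum(data) / len(data)

def federated_average(client_params, client_sizes):
    """Coordinator computes a size-weighted average of client parameters."""
    total = sum(client_sizes)
    return sum(p * n for p, n in zip(client_params, client_sizes)) / total

# Three hypothetical institutions, each holding data it never shares.
clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
params = [local_fit(d) for d in clients]          # local models: [2.0, 4.5, 6.0]
sizes = [len(d) for d in clients]                 # contribution weights: [3, 2, 1]
global_param = federated_average(params, sizes)
print(global_param)  # 3.5 - identical to the mean of the pooled data
```

The weighted average reproduces exactly what pooling all the data would give, which is why this pattern is attractive for a "data commons": collective benefit without centralised data extraction.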



Ken Spours. 2025. Transitioning to socialised systems of generative artificial intelligence in an era of Platform Capitalism 2.0: the role of technological organic intellectuals and the technological general intellect. Article submitted to the MDPI journal Systems.

© Prof. Ken Spours 2025 created with Wix.com
