Ethical Frameworks for AI in Higher Education: Between European Regulation and Chinese Innovation
Martin Woesler
Hunan Normal University
Abstract
The rapid integration of artificial intelligence into higher education has outpaced the development of ethical and regulatory frameworks to govern it. This article provides a systematic comparison of the European Union's AI Act — the world's first comprehensive AI legislation — and China's evolving sector-specific approach to AI governance in education. The EU AI Act classifies educational AI applications such as automated grading, proctoring, and admissions decisions as "high-risk," bans emotion recognition in educational settings, and mandates AI literacy for all deployers. China's approach, by contrast, combines the 2023 Interim Measures for Generative AI Services with Ministry of Education directives, mandatory AI education from September 2025, and institution-specific guidelines such as Fudan University's January 2026 policy. Drawing on empirical data from the EDUCAUSE 2024 AI Landscape Study, the HEPI 2025 UK student survey, and comparative legal analyses, we document a significant "readiness gap" in both systems: 80 percent of faculty use AI tools, yet fewer than one in four are aware of their institution's AI policy. We argue that neither the EU's horizontal, rights-based approach nor China's centralized, sector-specific model is sufficient alone. A synthesis combining European ethical rigour with Chinese implementation speed offers the most promising path toward responsible AI integration in higher education.
Keywords: AI ethics, EU AI Act, higher education, China AI governance, AI literacy, academic integrity, proctoring, Brussels Effect, generative AI policy
1. Introduction
On 1 August 2024, the European Union's Artificial Intelligence Act entered into force — the world's first comprehensive legislation governing artificial intelligence. Among its most consequential provisions for higher education are the classification of educational AI systems as "high-risk," the prohibition of emotion recognition in schools and universities, and the mandate that all organizations deploying AI systems ensure adequate AI literacy among their staff. These provisions, which took effect in stages beginning February 2025, establish a regulatory framework without precedent in any other jurisdiction.
Meanwhile, China has pursued a different path. Rather than a single horizontal law governing all AI applications, China has developed a series of sector-specific regulations — the 2023 Interim Measures for the Management of Generative AI Services, Ministry of Education directives prohibiting AI-submitted academic work, and the September 2025 mandate for AI education in all primary and secondary schools — that collectively govern AI in education through a combination of national policy, institutional guidelines, and technical standards.
This article compares these two approaches to AI governance in education, examining their philosophical foundations, practical implications, and effectiveness in addressing the ethical challenges that AI poses for higher education. It contributes to the broader anthology project by connecting the regulatory dimension with the companion chapters on AI in language learning, data protection, and the university of the future (Woesler, this volume).
Three questions structure our inquiry. First, how do the EU and Chinese regulatory frameworks address the specific ethical challenges of AI in education — academic integrity, surveillance, fairness, and transparency? Second, what is the current state of institutional readiness for AI governance in both systems? Third, what can each system learn from the other?
2. The EU AI Act and Education
2.1 High-Risk Classification
The EU AI Act (Regulation 2024/1689) establishes a risk-based framework that categorizes AI systems into four tiers: unacceptable risk (prohibited), high risk (regulated), limited risk (transparency obligations), and minimal risk (unregulated). Education figures prominently in the high-risk category. Annex III, Section 3 identifies four categories of educational AI as high-risk: AI systems used to determine access to educational institutions, AI systems evaluating learning outcomes including systems used to steer the learning process, AI systems assessing the appropriate level of education for an individual, and AI systems monitoring and detecting prohibited behaviour of students during tests (European Parliament and Council 2024).
The practical consequences are significant. Any university deploying an AI system that grades examinations, recommends study paths, determines admissions, or monitors student behaviour during assessments must comply with extensive obligations: conducting conformity assessments, maintaining technical documentation, implementing human oversight mechanisms, and ensuring data quality. These requirements apply regardless of whether the AI system is developed in-house or procured from a commercial provider.
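The four-tier scheme described above can be sketched as a small decision function. This is purely an illustrative toy, not a compliance tool: the function name, the use-case labels, and the tier strings are the author's own shorthand for the categories that Annex III, Section 3 and Article 5(1)(f) define in legal language.

```python
# Illustrative sketch of the AI Act's four-tier risk scheme as applied
# to educational AI, per the summary above. Labels are informal shorthand,
# not legal terms of art.

PROHIBITED = "unacceptable risk (banned, Art. 5)"
HIGH = "high risk (Annex III obligations apply)"
LIMITED = "limited risk (transparency obligations)"
MINIMAL = "minimal risk (unregulated)"

# The four education categories that Annex III, Section 3 lists as high-risk.
HIGH_RISK_USES = {
    "admissions",          # determining access to educational institutions
    "outcome_evaluation",  # evaluating or steering learning outcomes
    "level_assessment",    # assessing the appropriate level of education
    "exam_monitoring",     # detecting prohibited behaviour during tests
}

def classify_edu_ai(use: str, infers_emotions: bool = False) -> str:
    """Return the sketched risk tier for an educational AI use case."""
    if infers_emotions:
        # Art. 5(1)(f): emotion inference in educational settings is
        # prohibited (medical/safety exceptions ignored in this sketch).
        return PROHIBITED
    if use in HIGH_RISK_USES:
        return HIGH
    if use == "chatbot":
        # e.g. duty to disclose that users are interacting with AI
        return LIMITED
    return MINIMAL

print(classify_edu_ai("exam_monitoring"))
print(classify_edu_ai("exam_monitoring", infers_emotions=True))
```

The key structural point the sketch captures is that the prohibition check precedes the high-risk check: an exam-monitoring system that also infers emotions is not merely "high-risk with extra obligations" but falls under the Article 5 ban outright.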
2.2 The Emotion Recognition Ban
Article 5(1)(f) of the AI Act prohibits AI systems that infer emotions of natural persons in the areas of workplaces and educational institutions, except where the AI system is intended to be put into service for medical or safety reasons. This prohibition, which took effect on 2 February 2025, has direct implications for AI-powered proctoring systems that monitor facial expressions, eye movements, and other biometric indicators to detect cheating during examinations. Several commercial proctoring platforms that rely on emotion inference — interpreting nervousness, distraction, or stress as indicators of academic dishonesty — may fall within the scope of this prohibition.
The emotion recognition ban reflects a distinctly European approach to the relationship between technology and human dignity. The EU’s position is that monitoring emotional states in educational contexts constitutes an unacceptable intrusion into students’ psychological integrity, regardless of the stated purpose. This position is informed by the Charter of Fundamental Rights of the European Union, which guarantees the right to human dignity (Article 1) and the right to the protection of personal data (Article 8).
2.3 The AI Literacy Mandate
Article 4 of the AI Act requires that providers and deployers of AI systems ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This obligation entered into force on 2 February 2025 — before most other provisions of the Act — signalling the EU’s view that AI literacy is a prerequisite for responsible AI governance.
For universities, this mandate has two dimensions. First, as deployers of AI systems (learning management systems, automated grading tools, plagiarism detection software, proctoring platforms), universities must ensure that faculty and administrative staff possess adequate AI literacy to use these tools responsibly. Second, as educational institutions, universities have a broader mission to develop AI literacy among their students — a mission that the AI Act’s literacy mandate reinforces without explicitly addressing.
The implementation challenge is substantial. The EDUCAUSE 2024 AI Landscape Study found that fewer than one in four faculty and staff were aware of their institution’s formal AI policy, even though 80 percent reported using AI tools in their work (EDUCAUSE 2024). This gap between AI use and AI awareness represents a significant institutional risk — and a direct challenge to the AI Act’s literacy mandate.
3. China's AI Governance for Education
3.1 The Interim Measures for Generative AI (2023)
China's approach to AI governance in education operates through a layered system of national regulations, ministerial directives, and institutional policies. The foundational national regulation is the Interim Measures for the Management of Generative AI Services (生成式人工智能服务管理暂行办法), issued jointly by the Cyberspace Administration of China and six other agencies including the Ministry of Education, effective 15 August 2023. The Measures adopt what Chinese legal scholars describe as an "inclusive prudence" (包容审慎) approach, combining development promotion with classified supervision (Migliorini 2024).
For education specifically, the Interim Measures include provisions requiring that generative AI service providers prevent minors from developing "excessive reliance on or addiction to" generative AI, and that services targeting minors meet additional safety requirements. Notably, the Measures include a research exemption that permits universities and research institutes to develop and test generative AI systems without the full registration and compliance requirements applicable to commercial providers — a pragmatic recognition that overly restrictive regulation could impede academic research.
3.2 Ministry of Education Directives
The Ministry of Education has issued several directives specifically addressing AI in education. Most significantly, the Ministry has prohibited students from submitting AI-generated content as their own academic work — a directive that, while clear in principle, faces the same enforcement challenges encountered in other jurisdictions. Individual universities have supplemented this directive with institution-specific policies.
Fudan University became one of the first major Chinese universities to issue comprehensive AI guidelines in January 2026, establishing six prohibitions for thesis writing. These include bans on using AI to generate or alter original data, experimental results, images, or thesis text; on AI tools in the thesis review process; and on AI tools for language polishing and translation of theses. Penalties for severe violations include degree revocation (China Daily 2026).
Tianjin University of Science and Technology (天津科技大学) took a different approach in 2024, mandating that AI-generated content in undergraduate theses must not exceed 40 percent — a quantitative threshold that is measurable in principle but difficult to enforce in practice, particularly for texts that have been substantially rewritten by their human authors.
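The arithmetic behind a quantitative cap of this kind is trivial once per-passage labels exist; the enforcement difficulty lies entirely in producing those labels reliably. The sketch below is illustrative only: the segmentation of a thesis into labelled passages is an assumed input, and the function name is the author's own.

```python
# Illustrative sketch of the arithmetic behind a quantitative cap such as
# the 40 percent threshold described above. The per-segment "AI-generated"
# labels are assumed inputs; producing them reliably for substantially
# rewritten text is exactly the enforcement problem the chapter notes.

def ai_share(segments: list[tuple[str, bool]]) -> float:
    """Fraction of total characters in segments flagged as AI-generated."""
    total = sum(len(text) for text, _ in segments)
    flagged = sum(len(text) for text, is_ai in segments if is_ai)
    return flagged / total if total else 0.0

# Hypothetical thesis, pre-segmented and pre-labelled.
thesis = [
    ("Hand-written analysis of the survey data and its limitations.", False),
    ("AI-drafted summary of prior literature on the topic.", True),
]
share = ai_share(thesis)
print(f"AI share: {share:.0%}; within 40% cap: {share <= 0.40}")
```

Even this toy makes the measurement problem visible: the result depends entirely on how the text is segmented and labelled, choices for which no standard exists.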
3.3 Mandatory AI Education from September 2025
China's most sweeping policy intervention is the mandate, effective 1 September 2025, that AI education be incorporated into all primary and secondary school curricula. The policy requires at least eight hours of AI education annually for students at all grade levels, including children as young as six. At the primary level, the focus is on AI literacy and exposure; at the junior high school level, on logic and critical thinking; and at the senior high school level, on applied innovation and algorithm design (Asia Education Review 2025).
This mandate has implications for higher education because it means that, within a few years, university students will arrive with a baseline of AI literacy — unlike the current generation, most of whom encounter AI tools for the first time at university. Chinese universities must therefore prepare to build on this foundation rather than starting from scratch, which will require fundamental curriculum redesign.
Simultaneously, the policy prohibits students at primary schools from independently using „open-ended content generation“ tools and bans teachers from using generative AI as a substitute for core teaching activities. This dual approach — promoting AI literacy while restricting certain AI uses — reflects a nuanced understanding that AI competence and AI dependence are distinct phenomena that require distinct policy responses.
4. Horizontal vs. Sector-Specific Regulation
4.1 The EU’s Horizontal Approach
The EU AI Act represents a horizontal regulatory strategy: a single legislative framework governing AI across all sectors, including education. The advantages of this approach include consistency (the same principles apply to AI in healthcare, employment, law enforcement, and education), legal certainty (organizations need to understand one framework rather than many), and comprehensiveness (no sector falls through the regulatory cracks).
However, horizontal regulation also has disadvantages. The AI Act’s provisions are necessarily abstract — „high-risk“ classification criteria must accommodate AI systems as diverse as medical diagnostic tools and university admissions algorithms. This abstraction can make compliance challenging for specific sectors. Universities, accustomed to the relatively informal governance structures of academic life, may struggle to implement the formal conformity assessments, technical documentation requirements, and quality management systems that the AI Act demands of high-risk AI deployers.
4.2 China's Sector-Specific Approach
China's approach, by contrast, is predominantly sector-specific. Rather than a single comprehensive AI law, China has developed a series of targeted regulations addressing specific AI capabilities: the Deep Synthesis Provisions (2023) governing deepfakes, the Interim Measures for Generative AI (2023), the Algorithmic Recommendation Management Provisions (2022), and sector-specific directives from relevant ministries including the Ministry of Education.
As O'Shaughnessy and Sheehan (2023) observe in their analysis for the Carnegie Endowment for International Peace, "the EU's AI Act leans toward a horizontal approach (one framework across all sectors), while China's algorithm regulations incline vertically (sector-specific laws targeting specific AI capabilities)." Both approaches face trade-offs: horizontal regulation provides consistency but risks being too vague; vertical regulation provides specificity but risks gaps and overlaps between sectors.
For education specifically, China's sector-specific approach has the advantage of allowing the Ministry of Education to issue directives tailored to educational contexts — distinguishing, for example, between AI use in research (broadly permitted) and AI use in examinations (broadly prohibited). The disadvantage is regulatory fragmentation: university administrators must navigate multiple overlapping regulations without a single authoritative framework.
4.3 Comparative Assessment
A comprehensive comparative analysis of global AI regulation found that "many horizontal laws take a risk-based approach imposing stringent obligations on highest-risk systems, though definitions of 'high-risk' are inconsistent across jurisdictions" (Royal Society Open Science 2025). The study confirmed that the EU favours a horizontal risk-based approach while China has favoured sector-specific laws tailored to specific use-cases, with neither approach clearly superior.
Chun, Schroeder de Witt, and Elkins (2024) characterize the fundamental difference as one of values: "The EU's approach is characterised by commitment to fundamental rights, risk-based categorisation, and ethical oversight; China adopts a centralized, state-led approach integrating AI development with national objectives and social governance." For education, this means that EU regulation prioritizes protecting students' rights (privacy, non-discrimination, human oversight), while Chinese regulation prioritizes both protecting students and advancing national AI competitiveness — goals that can converge but also conflict.
5. The Readiness Gap
5.1 Faculty Awareness and Policy Implementation
The most significant challenge facing both regulatory systems is not the adequacy of regulation itself but the gap between regulation and practice. Empirical data from multiple jurisdictions reveal a consistent pattern: faculty and students are using AI tools far more rapidly than institutions are developing policies to govern their use.
The EDUCAUSE 2024 AI Landscape Study, based on a survey of over 900 higher education technology professionals, found that only 23 percent of institutions had AI-related acceptable use policies in place. Nearly half (48 percent) of respondents disagreed that their institution has appropriate AI policies for ethical decision-making, and 54 percent disagreed that their institution has an effective AI governance mechanism (EDUCAUSE 2024). Yet 80 percent of faculty and staff reported using AI tools, and fewer than one in four were aware of a formal institutional AI policy (EDUCAUSE 2024 Action Plan).
This gap is not unique to the United States. In the UK, the Higher Education Policy Institute (HEPI) 2025 survey found that 92 percent of undergraduate students now use generative AI in some form, up from 66 percent in 2024 — a dramatic year-on-year increase demonstrating the rapid normalization of AI tool usage (HEPI 2025). Yet institutional policies have not kept pace with this adoption.
5.2 The Chinese Readiness Landscape
In China, Liu et al. (2025) investigated regulations, technology policies, and university attitudes to AI based on policy document analysis and 33 faculty interviews at Chinese research universities. They found that while faculty generally view AI as enhancing personalization, research productivity, and administrative efficiency, they raise concerns about academic integrity, algorithmic bias, and overreliance on technology. Crucially, China's state-led governance model "is interpreted and enacted unevenly on university campuses" — even centralized directives are implemented inconsistently at the institutional level (Liu et al. 2025).
A comparative analysis of AI regulation across mainland China, Hong Kong, and Macau found that even within a single country, "varied governance structures produce divergent approaches to AI in education" — demonstrating that regulatory fragmentation exists not only between the EU and China but within China itself (Liu et al. 2025b).
5.3 Emerging Institutional Responses
Despite the readiness gap, some institutions are developing innovative responses. In Austria, the HEAT-AI (Higher Education Act for AI) framework, developed at a University of Applied Sciences, categorizes AI applications in higher education into four risk levels mirroring the EU AI Act — unacceptable, high, limited, and minimal risk — and provides institution-specific guidance for each category (Temper, Tjoa, and David 2025). The framework went live in September 2024 and represents an attempt to translate the abstract provisions of the AI Act into concrete institutional practice.
A global study of emerging GenAI policies found that "only slightly more than half of examined institutions had publicly accessible GenAI guidelines, with most lacking formal policy at the institutional level" (Aristombayeva et al. 2025). Policies addressing students and faculty are more prevalent than those addressing researchers — a gap that is particularly significant for research universities where AI is used extensively in the research process.
6. Surveillance vs. Support: The Proctoring Debate
6.1 AI Proctoring in the EU Framework
The EU AI Act's classification of AI systems that monitor student behaviour during tests as "high-risk," combined with the emotion recognition ban, creates a regulatory environment that is significantly more restrictive than any other jurisdiction. AI proctoring systems that rely on facial expression analysis, gaze tracking, or keystroke dynamics to infer cheating must comply with the Act's full set of high-risk obligations — or, if they involve emotion inference, may be prohibited entirely.
A systematic review of AI-based proctoring systems identified security and privacy concerns, ethical concerns, trust in AI-based technology, lack of training among users, and cost as the major issues (Nigam et al. 2021). The review noted that "security issues associated with AI-based proctoring systems are multiplying and are a cause of legitimate concern." These concerns align with the EU's precautionary approach.
6.2 AI Surveillance in Chinese Education
In China, the approach to AI-powered monitoring in education is markedly different. A comparative analysis of AI privacy concerns in higher education found that "Western outlets highlight individual privacy rights and controversies in remote exam monitoring, while Chinese coverage more frequently addresses AI-driven educational innovation" (Xue et al. 2025). AI-powered surveillance systems in some Chinese universities — monitoring student behaviour, classroom engagement, and even facial expressions — have generated domestic debate, but the regulatory framework is more permissive than in the EU.
The contrast reflects fundamentally different cultural and legal traditions regarding the relationship between individual privacy and collective benefit. In the EU framework, student monitoring is an intrusion that must be justified by compelling necessity and constrained by proportionality. In the Chinese framework, monitoring is more readily accepted as a component of educational management, provided it serves institutional and national objectives.
6.3 The Pedagogical Question
Beyond the legal and cultural dimensions, the proctoring debate raises a fundamental pedagogical question: does monitoring student behaviour during examinations improve educational outcomes, or does it merely optimize for compliance while undermining trust, intrinsic motivation, and the development of academic integrity as a personal value?
The answer likely depends on context. In large-scale standardized assessments where the consequences of cheating are significant (professional licensing examinations, university entrance tests), some form of monitoring may be justifiable. In formative assessments designed to support learning rather than certify competence, monitoring may be counterproductive — replacing the educational relationship of trust between teacher and student with a surveillance relationship that undermines the very values it purports to protect.
7. The "Brussels Effect" and Global AI Governance
7.1 The Brussels Effect in AI Regulation
The concept of the "Brussels Effect," theorized by Anu Bradford (2020), describes the EU's capacity to unilaterally regulate global markets through stringent domestic regulation that multinational companies adopt worldwide for cost efficiency. The question of whether the AI Act will produce a Brussels Effect is actively debated.
Siegmann and Anderljung (2022) argue that "both de facto and de jure Brussels Effects are likely for parts of the EU AI regulatory regime," particularly for large US technology companies whose AI systems are classified as high-risk. For education, this means that AI tools developed by global companies (Microsoft, Google, OpenAI) for use in European universities will likely be designed to comply with the AI Act's requirements — and that these compliance features may be exported to other markets, including China.
However, Almada and Radu (2024) caution that the AI Act may also produce a "Brussels Side-Effect": other jurisdictions may adopt the regulatory form (risk-based classification, documentation requirements) without the underlying rights-based substance that gives the EU framework its ethical force.
7.2 China's Global AI Governance Initiative
China has responded to the EU's regulatory activism with its own vision for global AI governance. The Global AI Governance Initiative (全球人工智能治理倡议), announced by President Xi Jinping in October 2023, advocates a "people-centered approach" to AI governance and calls for "mutual respect, equality, and mutual benefit" among nations in AI development. As Racicot and Simpson (2024) note, the Initiative "should be seen in the context of China's broader foreign policy objectives: its commitments to AI governance serve China's geopolitical ambitions, including positioning itself as the standard-setter for developing nations."
For higher education, the competition between the EU's Brussels Effect and China's Global AI Governance Initiative creates a complex regulatory landscape. International universities — particularly those with joint programmes spanning EU and Chinese institutions, such as those within the Jean Monnet Centre of Excellence network — must navigate both frameworks simultaneously. The companion chapter on data protection (Woesler, this volume) examines the practical challenges of dual compliance in detail.
8. Towards Responsible AI Integration: Recommendations
Based on the comparative analysis presented in this article, we offer the following recommendations for universities navigating the emerging AI governance landscape.
First, develop institution-specific AI policies that translate abstract regulatory requirements into concrete guidance for faculty and students. The HEAT-AI framework (Temper, Tjoa, and David 2025) provides a useful model for institutions seeking to operationalize risk-based AI governance at the institutional level.
Second, invest in AI literacy as a core institutional competency — not only for students but for faculty and administrative staff. The EU AI Act's literacy mandate (Article 4) provides a regulatory impetus, but the pedagogical imperative exists independent of regulation: faculty who do not understand the AI tools their students use cannot teach or assess effectively.
Third, distinguish between AI governance for research and AI governance for assessment. The Chinese approach, which broadly permits AI use in research while restricting it in examinations and theses, provides a pragmatic distinction that European institutions would benefit from adopting. Research and assessment serve different purposes and face different risks; a single AI policy cannot adequately address both.
Fourth, design assessment methods that evaluate distinctly human contributions — critical analysis, original thinking, creative synthesis, ethical reasoning — rather than relying on AI detection tools whose accuracy is contested and whose use may undermine trust between teachers and students.
Fifth, engage in cross-border dialogue. The EU-China comparison reveals that each system has developed insights that the other lacks. European institutions can learn from China's speed of implementation and its integration of AI competencies across all disciplines. Chinese institutions can learn from the EU's emphasis on ethical frameworks, faculty governance, and the protection of student rights. Joint programmes, exchange visits, and collaborative research — such as those facilitated by the Jean Monnet Centre of Excellence programme — provide concrete mechanisms for this dialogue.
9. Conclusion
The governance of AI in higher education is a challenge that no single regulatory framework has yet adequately addressed. The EU AI Act provides the most comprehensive legal framework, but its abstract provisions require institutional translation, and its implementation faces a significant readiness gap. China's sector-specific approach provides more targeted guidance for educational contexts, but its effectiveness depends on consistent institutional implementation and risks fragmentation across regulatory instruments.
The most promising path forward combines elements of both approaches: the EU's commitment to fundamental rights, risk-based classification, and ethical oversight; and China's willingness to mandate AI literacy at scale, its pragmatic distinction between research and assessment contexts, and its speed of institutional adaptation. Neither system can learn everything from the other — the cultural, political, and legal differences are too profound. But each can learn something, and the dialogue between them is itself a valuable contribution to the global challenge of governing AI in education responsibly.
As UNESCO's Guidance for Generative AI in Education and Research (Miao and Holmes 2023) reminds us, "the absence of national GenAI regulations in most countries leaves user data privacy unprotected and educational institutions largely unprepared." The EU and China are among the few jurisdictions that have moved beyond this absence toward substantive governance frameworks. Their experiences — successes and failures alike — will shape how the rest of the world approaches the challenge of AI in higher education.
Acknowledgments
This research was supported by the Jean Monnet Centre of Excellence "EU-Studies Centre: Digitalization in Europe and China" (EUSC-DEC), funded by the European Union under Grant Agreement No. 101126782. Views and opinions expressed are those of the author only and do not necessarily reflect those of the European Union.
References
Almada, M., & Radu, A. (2024). The Brussels Side-Effect: How the AI Act can reduce the global reach of EU policy. German Law Journal, 25, 646–663.
Aristombayeva, M., Satybaldiyeva, R., Maung, B. M., & Lee, D. (2025). Guiding the uncharted: The emerging (and missing) policies on generative AI in higher education. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1644081.
Asia Education Review. (2025). China makes AI education mandatory in schools starting September 2025.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
China Daily. (2026, January 12). Fudan University sets AI education guidelines.
Chun, J., Schroeder de Witt, C., & Elkins, K. (2024). Comparative global AI regulation: Policy perspectives from the EU, China, and the US. arXiv preprint, arXiv:2410.21279.
EDUCAUSE. (2024). 2024 EDUCAUSE AI Landscape Study (by J. Robert & M. McCormack).
EDUCAUSE. (2024). 2024 EDUCAUSE Action Plan: AI Policies and Guidelines.
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
Freeman, J. (2025). Student Generative AI Survey 2025. HEPI Policy Note 61. Higher Education Policy Institute.
Liu, X., et al. (2025). Regulations, technology policies and universities’ attitudes to artificial intelligence in China. Higher Education Quarterly, 79(4). DOI: 10.1111/hequ.70055.
Miao, F., & Holmes, W. (2023). Guidance for Generative AI in Education and Research. Paris: UNESCO.
Migliorini, S. (2024). China's Interim Measures on generative AI: Origin, content and significance. Computer Law and Security Review, 53, Article 105992.
Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems. Education and Information Technologies, 26, 6421–6445.
O’Shaughnessy, M., & Sheehan, M. (2023). Lessons from the world’s two experiments in AI governance. Carnegie Endowment for International Peace.
Racicot, R., & Simpson, K. H. (2024). China's AI Governance Initiative and its geopolitical ambitions. Centre for International Governance Innovation (CIGI).
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI and Society, 36, 59–77.
Hilliard, A., Gulley, A., Kazim, E., & Koshiyama, A. (2026). Artificial intelligence policy worldwide: A comparative analysis. Royal Society Open Science, 13(2), 242234.
Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence. arXiv preprint, arXiv:2208.12645.
Temper, M., Tjoa, S., & David, L. (2025). Higher Education Act for AI (HEAT-AI): A framework to regulate the usage of AI in higher education institutions. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1505370.
Woesler, M. (this volume). Learning a foreign language with and without AI: An empirical comparative study.
Woesler, M. (this volume). Student data protection in the digital university: GDPR and China's PIPL compared.
Woesler, M. (this volume). University of the future: AI-enhanced higher education between European humanism and Chinese innovation.
Xue, Y., Chinapah, V., & Zhu, C. (2025). A comparative analysis of AI privacy concerns in higher education: News coverage in China and Western countries. Education Sciences, 15(6), 650. DOI: 10.3390/educsci15060650.