Rethinking Higher Education/Chapter 1/en-zh

From China Studies Wiki
Revision as of 08:06, 8 April 2026 by Maintenance script


Chapter 1: Ethical Frameworks for AI in Higher Education

Martin Woesler

English (Source) 中文 (Target)
== Ethical Frameworks for AI in Higher Education: Between European Regulation and Chinese Innovation == == 高等教育中人工智能的伦理框架:欧洲监管与中国创新之间 ==
Martin Woesler Martin Woesler
Hunan Normal University 湖南师范大学
== Abstract == == 摘要 ==
The rapid integration of artificial intelligence into higher education has outpaced the development of ethical and regulatory frameworks to govern it. This article provides a systematic comparison of the European Union’s AI Act — the world’s first comprehensive AI legislation — and China’s evolving sector-specific approach to AI governance in education. The EU AI Act classifies educational AI applications such as automated grading, proctoring, and admissions decisions as “high-risk,” bans emotion recognition in educational settings, and mandates AI literacy for all deployers. China’s approach, by contrast, combines the 2023 Interim Measures for Generative AI Services with Ministry of Education directives, mandatory AI education from September 2025, and institution-specific guidelines such as Fudan University’s January 2026 policy. Drawing on empirical data from the EDUCAUSE 2024 AI Landscape Study, the HEPI 2025 UK student survey, and comparative legal analyses, we document a significant “readiness gap” in both systems: 80 percent of faculty use AI tools, yet fewer than one in four are aware of their institution’s AI policy. We argue that neither the EU’s horizontal, rights-based approach nor China’s centralized, sector-specific model is sufficient alone. A synthesis combining European ethical rigour with Chinese implementation speed offers the most promising path toward responsible AI integration in higher education. 人工智能在高等教育中的快速融合已超越了相应伦理和监管框架的发展速度。本文系统比较了欧盟《人工智能法》——世界上首部综合性人工智能立法——与中国在教育领域不断演进的行业特定人工智能治理方法。欧盟《人工智能法》将自动评分、监考和招生决策等教育人工智能应用归类为"高风险",禁止在教育环境中使用情绪识别技术,并要求所有部署者具备人工智能素养。相比之下,中国的方法结合了2023年《生成式人工智能服务管理暂行办法》、教育部指令、2025年9月起的强制性人工智能教育,以及复旦大学2026年1月政策等机构具体指南。基于EDUCAUSE 2024年人工智能状况研究、英国高等教育政策研究所(HEPI)2025年学生调查以及比较法律分析的实证数据,我们记录了两个体系中存在的显著"准备度差距":80%的教职人员使用人工智能工具,但不到四分之一的人了解其所在机构的人工智能政策。我们认为,无论是欧盟的横向权利导向方法还是中国的集中式行业特定模式,单独来看都不够充分。将欧洲的伦理严谨性与中国的实施速度相结合的综合方案,为高等教育中负责任的人工智能融合提供了最有前景的路径。
Keywords: AI ethics, EU AI Act, higher education, China AI governance, AI literacy, academic integrity, proctoring, Brussels Effect, generative AI policy 关键词:人工智能伦理、欧盟《人工智能法》、高等教育、中国人工智能治理、人工智能素养、学术诚信、监考、布鲁塞尔效应、生成式人工智能政策
== 1. Introduction == == 1. 引言 ==
On 1 August 2024, the European Union’s Artificial Intelligence Act entered into force — the world’s first comprehensive legislation governing artificial intelligence. Among its most consequential provisions for higher education are the classification of educational AI systems as “high-risk,” the prohibition of emotion recognition in schools and universities, and the mandate that all organizations deploying AI systems ensure adequate AI literacy among their staff. These provisions, which took effect in stages beginning February 2025, establish a regulatory framework without precedent in any other jurisdiction. 2024年8月1日,欧盟《人工智能法》正式生效——这是世界上首部全面规范人工智能的综合性法律。对高等教育影响最为深远的条款包括:将教育人工智能系统归类为"高风险"、禁止在学校和大学中使用情绪识别技术,以及要求所有部署人工智能系统的组织确保其工作人员具备充分的人工智能素养。这些从2025年2月开始分阶段生效的条款,建立了一个在其他任何管辖区都没有先例的监管框架。
Meanwhile, China has pursued a different path. Rather than a single horizontal law governing all AI applications, China has developed a series of sector-specific regulations — the 2023 Interim Measures for the Management of Generative AI Services, Ministry of Education directives prohibiting AI-submitted academic work, and the September 2025 mandate for AI education in all primary and secondary schools — that collectively govern AI in education through a combination of national policy, institutional guidelines, and technical standards. 与此同时,中国走上了一条不同的道路。中国没有制定一部涵盖所有人工智能应用的横向法律,而是发展了一系列行业特定法规——2023年《生成式人工智能服务管理暂行办法》、教育部禁止提交人工智能生成学术作品的指令,以及2025年9月起在所有中小学课程中纳入人工智能教育的强制令——通过国家政策、机构指南和技术标准的结合来共同规范教育领域的人工智能应用。
This article compares these two approaches to AI governance in education, examining their philosophical foundations, practical implications, and effectiveness in addressing the ethical challenges that AI poses for higher education. It contributes to the broader anthology project by connecting the regulatory dimension with the companion chapters on AI in language learning, data protection, and the university of the future (Woesler, this volume). 本文比较了这两种人工智能教育治理方法,审视其哲学基础、实践影响以及在应对人工智能给高等教育带来的伦理挑战方面的有效性。本文通过将监管维度与人工智能语言学习、数据保护和未来大学等相关章节联系起来,为更广泛的论文集做出贡献(Woesler,本卷)。
Three questions structure our inquiry. First, how do the EU and Chinese regulatory frameworks address the specific ethical challenges of AI in education — academic integrity, surveillance, fairness, and transparency? Second, what is the current state of institutional readiness for AI governance in both systems? Third, what can each system learn from the other? 三个问题构成了我们的研究框架。第一,欧盟和中国的监管框架如何应对人工智能在教育中的具体伦理挑战——学术诚信、监控、公平和透明度?第二,两个体系中机构准备度的现状如何?第三,每个体系能从另一个体系中学到什么?
== 2. The EU AI Act and Education == == 2. 欧盟《人工智能法》与教育 ==
=== 2.1 High-Risk Classification === === 2.1 高风险分类 ===
The EU AI Act (Regulation 2024/1689) establishes a risk-based framework that categorizes AI systems into four tiers: unacceptable risk (prohibited), high risk (regulated), limited risk (transparency obligations), and minimal risk (unregulated). Education figures prominently in the high-risk category. Annex III, Section 3 identifies four categories of educational AI as high-risk: AI systems used to determine access to educational institutions, AI systems evaluating learning outcomes including systems used to steer the learning process, AI systems assessing the appropriate level of education for an individual, and AI systems monitoring and detecting prohibited behaviour of students during tests (European Parliament and Council 2024). 欧盟《人工智能法》(第2024/1689号条例)建立了一个基于风险的框架,将人工智能系统分为四个等级:不可接受风险(禁止)、高风险(受监管)、有限风险(透明度义务)和最低风险(不受监管)。教育在高风险类别中占据突出位置。附件三第三节确定了四类高风险教育人工智能:用于决定教育机构准入的人工智能系统、用于评估学习成果(包括用于引导学习过程的系统)的人工智能系统、用于评估个人适当教育水平的人工智能系统,以及在考试期间监控和检测学生违禁行为的人工智能系统(European Parliament and Council 2024)。
The practical consequences are significant. Any university deploying an AI system that grades examinations, recommends study paths, determines admissions, or monitors student behaviour during assessments must comply with extensive obligations: conducting conformity assessments, maintaining technical documentation, implementing human oversight mechanisms, and ensuring data quality. These requirements apply regardless of whether the AI system is developed in-house or procured from a commercial provider. 实际后果非常重大。任何部署人工智能系统进行考试评分、推荐学习路径、确定录取或在评估期间监控学生行为的大学,都必须遵守广泛的义务:进行合规评估、维护技术文档、实施人工监督机制并确保数据质量。这些要求无论人工智能系统是内部开发还是从商业供应商采购,均同样适用。
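As a purely illustrative sketch (the short use-case keys and the `triage` helper below are our own labels, not language from the Act), the first-pass triage a university might run against the Annex III, Section 3 categories can be expressed as:

```python
from enum import Enum

# The four tiers of the EU AI Act's risk-based framework (Regulation 2024/1689)
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

# Educational uses enumerated as high-risk in Annex III, Section 3.
# The dictionary keys are hypothetical shorthand, not the Act's wording.
HIGH_RISK_USES = {
    "admissions":         "determining access to educational institutions",
    "outcome_evaluation": "evaluating learning outcomes, incl. steering learning",
    "level_assessment":   "assessing the appropriate level of education",
    "exam_monitoring":    "detecting prohibited behaviour during tests",
}

def triage(use_case: str) -> RiskTier:
    """Very coarse first-pass triage of an educational AI use case (sketch)."""
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    # A real assessment must also check the prohibited practices of
    # Article 5 and the limited-risk transparency tier; defaulting to
    # MINIMAL here is a deliberate simplification.
    return RiskTier.MINIMAL

print(triage("exam_monitoring").name)  # HIGH
```

The point of such a triage is organisational, not legal: it gives an institution an inventory of which deployed systems trigger the conformity-assessment, documentation, and human-oversight obligations described above.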
=== 2.2 The Emotion Recognition Ban === === 2.2 情绪识别禁令 ===
Article 5(1)(f) of the AI Act prohibits AI systems that infer emotions of natural persons in the areas of workplaces and educational institutions, except where the AI system is intended to be put into service for medical or safety reasons. This prohibition, which took effect on 2 February 2025, has direct implications for AI-powered proctoring systems that monitor facial expressions, eye movements, and other biometric indicators to detect cheating during examinations. Several commercial proctoring platforms that rely on emotion inference — interpreting nervousness, distraction, or stress as indicators of academic dishonesty — may fall within the scope of this prohibition. 《人工智能法》第5条第1款第(f)项禁止在工作场所和教育机构领域推断自然人情绪的人工智能系统,除非该人工智能系统出于医疗或安全目的而投入使用。这一于2025年2月2日生效的禁令,对依靠面部表情、眼球运动和其他生物特征指标来检测考试作弊的人工智能监考系统具有直接影响。一些依赖情绪推断的商业监考平台——将紧张、分心或压力解读为学术不端的指标——可能属于该禁令的适用范围。
The emotion recognition ban reflects a distinctly European approach to the relationship between technology and human dignity. The EU’s position is that monitoring emotional states in educational contexts constitutes an unacceptable intrusion into students’ psychological integrity, regardless of the stated purpose. This position is informed by the Charter of Fundamental Rights of the European Union, which guarantees the right to human dignity (Article 1) and the right to the protection of personal data (Article 8). 情绪识别禁令体现了一种独特的欧洲方法,反映了技术与人的尊严之间的关系。欧盟的立场是,无论声称的目的如何,在教育情境中监控情绪状态构成对学生心理完整性的不可接受的侵犯。这一立场源于《欧盟基本权利宪章》,该宪章保障人的尊严权(第1条)和个人数据保护权(第8条)。
=== 2.3 The AI Literacy Mandate === === 2.3 人工智能素养要求 ===
Article 4 of the AI Act requires that providers and deployers of AI systems ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This obligation entered into force on 2 February 2025 — before most other provisions of the Act — signalling the EU’s view that AI literacy is a prerequisite for responsible AI governance. 《人工智能法》第4条要求人工智能系统的提供者和部署者确保其工作人员以及代表其处理人工智能系统操作和使用的其他人员具有足够的人工智能素养水平。这一义务于2025年2月2日生效——早于该法案大多数其他条款——表明欧盟认为人工智能素养是负责任的人工智能治理的先决条件。
For universities, this mandate has two dimensions. First, as deployers of AI systems (learning management systems, automated grading tools, plagiarism detection software, proctoring platforms), universities must ensure that faculty and administrative staff possess adequate AI literacy to use these tools responsibly. Second, as educational institutions, universities have a broader mission to develop AI literacy among their students — a mission that the AI Act’s literacy mandate reinforces without explicitly addressing. 对于大学而言,这一要求具有双重维度。首先,作为人工智能系统的部署者(学习管理系统、自动评分工具、抄袭检测软件、监考平台),大学必须确保教职人员和行政人员具备足够的人工智能素养以负责任地使用这些工具。其次,作为教育机构,大学有更广泛的使命来培养学生的人工智能素养——这一使命虽未被《人工智能法》的素养要求明确涉及,却因之得到了加强。
The implementation challenge is substantial. The EDUCAUSE 2024 AI Landscape Study found that fewer than one in four faculty and staff were aware of their institution’s formal AI policy, even though 80 percent reported using AI tools in their work (EDUCAUSE 2024). This gap between AI use and AI awareness represents a significant institutional risk — and a direct challenge to the AI Act’s literacy mandate. 实施挑战是巨大的。EDUCAUSE 2024年人工智能状况研究发现,不到四分之一的教职人员了解其所在机构的正式人工智能政策,尽管80%的人报告在工作中使用人工智能工具(EDUCAUSE 2024)。人工智能使用与人工智能意识之间的差距代表着重大的机构风险——也是对《人工智能法》素养要求的直接挑战。
== 3. China’s AI Governance for Education == == 3. 中国的教育人工智能治理 ==
=== 3.1 The Interim Measures for Generative AI (2023) === === 3.1 《生成式人工智能服务管理暂行办法》(2023年) ===
China’s approach to AI governance in education operates through a layered system of national regulations, ministerial directives, and institutional policies. The foundational national regulation is the Interim Measures for the Management of Generative AI Services (生成式人工智能服务管理暂行办法), issued jointly by the Cyberspace Administration of China and six other agencies including the Ministry of Education, effective 15 August 2023. The Measures adopt what Chinese legal scholars describe as an “inclusive prudence” (包容审慎) approach, combining development promotion with classified supervision (Migliorini 2024). 中国的教育人工智能治理通过国家法规、部委指令和机构政策的分层体系来运作。基础性的国家法规是《生成式人工智能服务管理暂行办法》,由国家互联网信息办公室和包括教育部在内的其他六个部门联合发布,自2023年8月15日起施行。该办法采取中国法律学者所描述的"包容审慎"方法,将发展促进与分类监管相结合(Migliorini 2024)。
For education specifically, the Interim Measures include provisions requiring that generative AI service providers prevent minors from developing “excessive reliance on or addiction to” generative AI, and that services targeting minors meet additional safety requirements. Notably, the Measures include a research exemption that permits universities and research institutes to develop and test generative AI systems without the full registration and compliance requirements applicable to commercial providers — a pragmatic recognition that overly restrictive regulation could impede academic research. 在教育方面,该暂行办法包含要求生成式人工智能服务提供者防止未成年人对生成式人工智能产生"过度依赖或沉迷"的条款,并要求面向未成年人的服务满足额外的安全要求。值得注意的是,该办法包含研究豁免条款,允许高校和研究机构在不完全遵守适用于商业提供者的注册和合规要求的情况下,开发和测试生成式人工智能系统——这是对过度限制性监管可能阻碍学术研究的务实认识。
=== 3.2 Ministry of Education Directives === === 3.2 教育部指令 ===
The Ministry of Education has issued several directives specifically addressing AI in education. Most significantly, the Ministry has prohibited students from submitting AI-generated content as their own academic work — a directive that, while clear in principle, faces the same enforcement challenges encountered in other jurisdictions. Individual universities have supplemented this directive with institution-specific policies. 教育部已发布多项专门针对教育中人工智能应用的指令。最重要的是,教育部禁止学生将人工智能生成的内容作为自己的学术成果提交——这一指令虽然原则上明确,但面临着与其他管辖区相同的执行挑战。各高校以机构特定的政策补充了这一指令。
Fudan University became one of the first major Chinese universities to issue comprehensive AI guidelines in January 2026, establishing six specific prohibitions for thesis writing: banning the use of AI to generate or alter original data, experimental results, images, or thesis text; prohibiting AI tools in the thesis review process; and banning AI tools for language polishing and translation of theses. Penalties for severe violations include degree revocation (China Daily 2026). 复旦大学于2026年1月成为首批发布综合性人工智能指南的中国重点大学之一,为论文写作建立了六项具体禁令:禁止使用人工智能生成或篡改原始数据、实验结果、图像或论文文本;禁止在论文审查过程中使用人工智能工具;禁止使用人工智能工具进行语言润色和论文翻译。对严重违规行为的处罚包括撤销学位(China Daily 2026)。
Tianjin University of Science and Technology (天津科技大学) took a different approach in 2024, mandating that AI-generated content in undergraduate theses must not exceed 40 percent — a quantitative threshold that is measurable in principle but difficult to enforce in practice, particularly for texts that have been substantially rewritten by their human authors. 天津科技大学在2024年采取了不同的方法,要求本科论文中人工智能生成的内容不得超过40%——这一定量门槛原则上可衡量,但在实践中难以执行,特别是对于已被人类作者进行实质性改写的文本。
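A toy calculation makes the enforcement difficulty concrete. The ratio itself is trivial to compute once passages carry AI/human labels, but the labels below are assumed to come from a detector whose accuracy is itself contested, and a substantial human rewrite simply flips them; the helper and sample data are hypothetical, not part of the university's policy:

```python
def ai_content_ratio(passages: list[tuple[str, bool]]) -> float:
    """Fraction of characters flagged as AI-generated.
    The boolean labels are assumed detector output (a sketch)."""
    total = sum(len(text) for text, _ in passages)
    flagged = sum(len(text) for text, is_ai in passages if is_ai)
    return flagged / total if total else 0.0

# Hypothetical thesis split into detector-labelled passages
thesis = [
    ("Human-written introduction and research questions.", False),
    ("AI-drafted literature review, later lightly edited.", True),
    ("Human-written methodology, analysis, and conclusion.", False),
]

ratio = ai_content_ratio(thesis)
print(f"{ratio:.0%} of characters flagged; the policy ceiling is 40%")
```

The arithmetic is the easy part; the threshold stands or falls with the reliability of the labelling step, which is exactly where current detectors are weakest.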
=== 3.3 Mandatory AI Education from September 2025 === === 3.3 2025年9月起的强制性人工智能教育 ===
China’s most sweeping policy intervention is the mandate, effective 1 September 2025, that AI education be incorporated into all primary and secondary school curricula. The policy requires at least eight hours of AI education annually for students at all grade levels, including children as young as six. At the primary level, the focus is on AI literacy and exposure; at the junior high school level, on logic and critical thinking; and at the senior high school level, on applied innovation and algorithm design (Asia Education Review 2025). 中国最具深远影响的政策干预是自2025年9月1日起生效的要求,即将人工智能教育纳入所有中小学课程。该政策要求各年级学生(包括年仅六岁的儿童)每年至少接受八小时的人工智能教育。在小学阶段,重点是人工智能素养和接触;在初中阶段,重点是逻辑和批判性思维;在高中阶段,重点是应用创新和算法设计(Asia Education Review 2025)。
This mandate has implications for higher education because it means that, within a few years, university students will arrive with a baseline of AI literacy — unlike the current generation, most of whom encounter AI tools for the first time at university. Chinese universities must therefore prepare to build on this foundation rather than starting from scratch, which will require fundamental curriculum redesign. 这一要求对高等教育具有深远影响,因为这意味着在数年之内,大学生将带着人工智能素养基础入学——不同于当前大多数在大学才首次接触人工智能工具的一代学生。因此,中国的大学必须准备在这一基础上继续发展,而不是从零开始,这将需要对课程进行根本性重新设计。
Simultaneously, the policy prohibits students at primary schools from independently using “open-ended content generation” tools and bans teachers from using generative AI as a substitute for core teaching activities. This dual approach — promoting AI literacy while restricting certain AI uses — reflects a nuanced understanding that AI competence and AI dependence are distinct phenomena that require distinct policy responses. 同时,该政策禁止小学生独立使用"开放式内容生成"工具,并禁止教师使用生成式人工智能替代核心教学活动。这种双重方法——推广人工智能素养同时限制某些人工智能用途——反映了对人工智能能力与人工智能依赖是需要不同政策应对的不同现象这一认识的细致理解。
== 4. Horizontal vs. Sector-Specific Regulation == == 4. 横向监管与行业特定监管 ==
=== 4.1 The EU’s Horizontal Approach === === 4.1 欧盟的横向方法 ===
The EU AI Act represents a horizontal regulatory strategy: a single legislative framework governing AI across all sectors, including education. The advantages of this approach include consistency (the same principles apply to AI in healthcare, employment, law enforcement, and education), legal certainty (organizations need to understand one framework rather than many), and comprehensiveness (no sector falls through the regulatory cracks). 欧盟《人工智能法》代表了一种横向监管策略:一个涵盖所有行业(包括教育)的单一立法框架。这种方法的优势包括一致性(相同的原则适用于医疗保健、就业、执法和教育领域的人工智能)、法律确定性(组织只需理解一个框架而非多个)和全面性(没有行业会漏网)。
However, horizontal regulation also has disadvantages. The AI Act’s provisions are necessarily abstract — “high-risk” classification criteria must accommodate AI systems as diverse as medical diagnostic tools and university admissions algorithms. This abstraction can make compliance challenging for specific sectors. Universities, accustomed to the relatively informal governance structures of academic life, may struggle to implement the formal conformity assessments, technical documentation requirements, and quality management systems that the AI Act demands of high-risk AI deployers. 然而,横向监管也有缺点。《人工智能法》的条款必然是抽象的——"高风险"分类标准必须容纳从医疗诊断工具到大学录取算法等各类不同的人工智能系统。这种抽象性可能使特定行业的合规变得困难。习惯于相对非正式学术治理结构的大学,可能在实施《人工智能法》对高风险人工智能部署者要求的正式合规评估、技术文档要求和质量管理体系方面遇到困难。
=== 4.2 China’s Sector-Specific Approach === === 4.2 中国的行业特定方法 ===
China’s approach, by contrast, is predominantly sector-specific. Rather than a single comprehensive AI law, China has developed a series of targeted regulations addressing specific AI capabilities: the Deep Synthesis Provisions (2023) governing deepfakes, the Interim Measures for Generative AI (2023), the Algorithmic Recommendation Management Provisions (2022), and sector-specific directives from relevant ministries including the Ministry of Education. 相比之下,中国的方法以行业特定为主。中国没有制定单一的综合性人工智能法,而是发展了一系列有针对性的法规,涵盖特定的人工智能能力:2023年《深度合成管理规定》规范深度伪造,2023年《生成式人工智能暂行办法》,2022年《互联网信息服务算法推荐管理规定》,以及包括教育部在内的相关部委的行业特定指令。
As O’Shaughnessy and Sheehan (2023) observe in their analysis for the Carnegie Endowment for International Peace, “the EU’s AI Act leans toward a horizontal approach (one framework across all sectors), while China’s algorithm regulations incline vertically (sector-specific laws targeting specific AI capabilities).” Both approaches face trade-offs: horizontal regulation provides consistency but risks being too vague; vertical regulation provides specificity but risks gaps and overlaps between sectors. 正如O'Shaughnessy和Sheehan(2023)在为卡内基国际和平基金会所作的分析中所指出的:"欧盟《人工智能法》倾向于横向方法(一个覆盖所有行业的框架),而中国的算法法规则倾向于纵向方法(针对特定人工智能能力的行业特定法律)。"两种方法都面临权衡:横向监管提供一致性但可能过于模糊;纵向监管提供针对性但面临行业间的空白和重叠风险。
For education specifically, China’s sector-specific approach has the advantage of allowing the Ministry of Education to issue directives tailored to educational contexts — distinguishing, for example, between AI use in research (broadly permitted) and AI use in examinations (broadly prohibited). The disadvantage is regulatory fragmentation: university administrators must navigate multiple overlapping regulations without a single authoritative framework. 对于教育而言,中国行业特定方法的优势在于允许教育部发布针对教育情境的指令——例如区分人工智能在研究中的使用(广泛允许)和在考试中的使用(广泛禁止)。缺点是监管碎片化:大学管理者必须在没有单一权威框架的情况下应对多个重叠的法规。
=== 4.3 Comparative Assessment === === 4.3 比较评估 ===
A comprehensive comparative analysis of global AI regulation found that “many horizontal laws take a risk-based approach imposing stringent obligations on highest-risk systems, though definitions of ‘high-risk’ are inconsistent across jurisdictions” (Royal Society Open Science 2025). The study confirmed that the EU favours a horizontal risk-based approach while China has favoured sector-specific laws tailored to specific use-cases, with neither approach clearly superior. 一项对全球人工智能监管的全面比较分析发现,"许多横向法律采取基于风险的方法,对最高风险系统施加严格义务,但各管辖区对'高风险'的定义不一致"(Royal Society Open Science 2025)。该研究确认欧盟倾向于横向的基于风险的方法,而中国倾向于针对特定用例量身定制的行业特定法律,两种方法都没有明显优势。
Chun, Schroeder de Witt, and Elkins (2024) characterize the fundamental difference as one of values: “The EU’s approach is characterised by commitment to fundamental rights, risk-based categorisation, and ethical oversight; China adopts a centralized, state-led approach integrating AI development with national objectives and social governance.” For education, this means that EU regulation prioritizes protecting students’ rights (privacy, non-discrimination, human oversight), while Chinese regulation prioritizes both protecting students and advancing national AI competitiveness — goals that can converge but also conflict. Chun、Schroeder de Witt和Elkins(2024)将根本区别描述为价值观的差异:"欧盟的方法以对基本权利的承诺、基于风险的分类和伦理监督为特征;中国采取集中的、国家主导的方法,将人工智能发展与国家目标和社会治理相结合。"对教育而言,这意味着欧盟监管优先保护学生权利(隐私、非歧视、人工监督),而中国监管同时优先保护学生和推进国家人工智能竞争力——这些目标可以趋同但也可能冲突。
== 5. The Readiness Gap == == 5. 准备度差距 ==
=== 5.1 Faculty Awareness and Policy Implementation === === 5.1 教职人员意识与政策执行 ===
The most significant challenge facing both regulatory systems is not the adequacy of regulation itself but the gap between regulation and practice. Empirical data from multiple jurisdictions reveal a consistent pattern: faculty and students are using AI tools far more rapidly than institutions are developing policies to govern their use. 两个监管体系面临的最重大挑战不是监管本身是否充分,而是监管与实践之间的差距。来自多个管辖区的实证数据揭示了一个一致的模式:教职人员和学生使用人工智能工具的速度远超机构制定管理政策的速度。
The EDUCAUSE 2024 AI Landscape Study, based on a survey of over 900 higher education technology professionals, found that only 23 percent of institutions had AI-related acceptable use policies in place. Nearly half (48 percent) of respondents disagreed that their institution has appropriate AI policies for ethical decision-making, and 54 percent disagreed that their institution has an effective AI governance mechanism (EDUCAUSE 2024). Yet 80 percent of faculty and staff reported using AI tools, and fewer than one in four were aware of a formal institutional AI policy (EDUCAUSE 2024 Action Plan). EDUCAUSE 2024年人工智能状况研究基于对900多名高等教育技术专业人员的调查发现,只有23%的机构制定了与人工智能相关的可接受使用政策。近一半(48%)的受访者不同意其所在机构具有适当的人工智能伦理决策政策,54%的人不同意其所在机构具有有效的人工智能治理机制(EDUCAUSE 2024)。然而,80%的教职人员报告使用人工智能工具,不到四分之一的人了解正式的机构人工智能政策(EDUCAUSE 2024 Action Plan)。
This gap is not unique to the United States. In the UK, the Higher Education Policy Institute (HEPI) 2025 survey found that 92 percent of undergraduate students now use generative AI in some form, up from 66 percent in 2024 — a dramatic year-on-year increase demonstrating the rapid normalization of AI tool usage (HEPI 2025). Yet institutional policies have not kept pace with this adoption. 这一差距并非美国独有。在英国,高等教育政策研究所(HEPI)2025年调查发现,92%的本科生现在以某种形式使用生成式人工智能,高于2024年的66%——年度增长幅度之大表明人工智能工具使用的快速常态化(HEPI 2025)。然而,机构政策尚未跟上这一采用速度。
=== 5.2 The Chinese Readiness Landscape === === 5.2 中国的准备度状况 ===
In China, Liu et al. (2025) investigated regulations, technology policies, and university attitudes to AI based on policy document analysis and 33 faculty interviews at Chinese research universities. They found that while faculty generally view AI as enhancing personalization, research productivity, and administrative efficiency, they raise concerns about academic integrity, algorithmic bias, and overreliance on technology. Crucially, China’s state-led governance model “is interpreted and enacted unevenly on university campuses” — even centralized directives are implemented inconsistently at the institutional level (Liu et al. 2025). 在中国,Liu等人(2025)基于政策文件分析和33名中国研究型大学教职人员的访谈,调查了人工智能的法规、技术政策和大学态度。他们发现,虽然教职人员普遍认为人工智能增强了个性化教学、研究生产力和行政效率,但他们对学术诚信、算法偏见和对技术的过度依赖表示担忧。关键的是,中国的国家主导治理模式"在大学校园中的解读和执行并不一致"——即使是集中式的指令在机构层面也执行不一(Liu et al. 2025)。
A comparative analysis of AI regulation across mainland China, Hong Kong, and Macau found that even within a single country, “varied governance structures produce divergent approaches to AI in education” — demonstrating that regulatory fragmentation exists not only between the EU and China but within China itself (Liu et al. 2025b). 一项对中国大陆、香港和澳门人工智能监管的比较分析发现,即使在一个国家内,"不同的治理结构产生了不同的教育人工智能方法"——表明监管碎片化不仅存在于欧盟和中国之间,也存在于中国内部(Liu et al. 2025b)。
=== 5.3 Emerging Institutional Responses === === 5.3 新兴的机构应对 ===
Despite the readiness gap, some institutions are developing innovative responses. In Austria, the HEAT-AI (Higher Education Act for AI) framework, developed at a University of Applied Sciences, categorizes AI applications in higher education into four risk levels mirroring the EU AI Act — unacceptable, high, limited, and minimal risk — and provides institution-specific guidance for each category (Temper, Tjoa, and David 2025). The framework went live in September 2024 and represents an attempt to translate the abstract provisions of the AI Act into concrete institutional practice. 尽管存在准备度差距,一些机构正在开发创新的应对措施。在奥地利,HEAT-AI(高等教育人工智能法案)框架由一所应用科技大学开发,将高等教育中的人工智能应用分为四个风险级别(与欧盟《人工智能法》相对应)——不可接受、高、有限和最低风险——并为每个类别提供机构特定的指导(Temper、Tjoa和David 2025)。该框架于2024年9月上线,代表了将《人工智能法》的抽象条款转化为具体机构实践的尝试。
A global study of emerging GenAI policies found that “only slightly more than half of examined institutions had publicly accessible GenAI guidelines, with most lacking formal policy at the institutional level” (Aristombayeva et al. 2025). Policies addressing students and faculty are more prevalent than those addressing researchers — a gap that is particularly significant for research universities where AI is used extensively in the research process. 一项对新兴生成式人工智能政策的全球研究发现,"只有略超过一半的被调查机构拥有可公开访问的生成式人工智能指南,大多数机构在机构层面缺乏正式政策"(Aristombayeva et al. 2025)。针对学生和教职人员的政策比针对研究人员的政策更为普遍——这一差距对于人工智能在研究过程中被广泛使用的研究型大学尤为重要。
== 6. Surveillance vs. Support: The Proctoring Debate == == 6. 监控与支持:监考辩论 ==
=== 6.1 AI Proctoring in the EU Framework === === 6.1 欧盟框架下的人工智能监考 ===
The EU AI Act’s classification of AI systems that monitor student behaviour during tests as “high-risk,” combined with the emotion recognition ban, creates a regulatory environment that is significantly more restrictive than any other jurisdiction. AI proctoring systems that rely on facial expression analysis, gaze tracking, or keystroke dynamics to infer cheating must comply with the Act’s full set of high-risk obligations — or, if they involve emotion inference, may be prohibited entirely. 欧盟《人工智能法》将在考试期间监控学生行为的人工智能系统归类为"高风险",加上情绪识别禁令,创造了一个比其他任何管辖区都更为严格的监管环境。依赖面部表情分析、视线追踪或按键动态来推断作弊行为的人工智能监考系统,必须遵守《人工智能法》对高风险系统的全部义务——如果涉及情绪推断,则可能被完全禁止。
A systematic review of AI-based proctoring systems identified security and privacy concerns, ethical concerns, trust in AI-based technology, lack of training among users, and cost as the major issues (Nigam et al. 2021). The review noted that “security issues associated with AI-based proctoring systems are multiplying and are a cause of legitimate concern.” These concerns align with the EU’s precautionary approach. 一项对基于人工智能的监考系统的系统性综述确定了安全和隐私问题、伦理问题、对人工智能技术的信任、用户培训不足以及成本等主要问题(Nigam et al. 2021)。该综述指出,"与基于人工智能的监考系统相关的安全问题正在成倍增加,这是一个合理的关切。"这些担忧与欧盟的预防性方法相一致。
=== 6.2 AI Surveillance in Chinese Education === === 6.2 中国教育中的人工智能监控 ===
In China, the approach to AI-powered monitoring in education is markedly different. A comparative analysis of AI privacy concerns in higher education found that “Western outlets highlight individual privacy rights and controversies in remote exam monitoring, while Chinese coverage more frequently addresses AI-driven educational innovation” (Xue et al. 2025). AI-powered surveillance systems in some Chinese universities — monitoring student behaviour, classroom engagement, and even facial expressions — have generated domestic debate, but the regulatory framework is more permissive than in the EU. 在中国,教育中人工智能监控的方法截然不同。一项对高等教育中人工智能隐私问题的比较分析发现,"西方媒体突出个人隐私权和远程考试监控中的争议,而中国的报道更多地涉及人工智能驱动的教育创新"(Xue et al. 2025)。中国一些大学的人工智能监控系统——监控学生行为、课堂参与度甚至面部表情——引发了国内辩论,但监管框架比欧盟更为宽松。
The contrast reflects fundamentally different cultural and legal traditions regarding the relationship between individual privacy and collective benefit. In the EU framework, student monitoring is an intrusion that must be justified by compelling necessity and constrained by proportionality. In the Chinese framework, monitoring is more readily accepted as a component of educational management, provided it serves institutional and national objectives. 这种对比反映了关于个人隐私与集体利益之间关系的根本不同的文化和法律传统。在欧盟框架中,学生监控是一种必须以迫切必要性来证明其正当性并受比例原则约束的侵入行为。在中国框架中,只要监控服务于机构和国家目标,就更容易被接受为教育管理的一个组成部分。
=== 6.3 The Pedagogical Question === === 6.3 教育学问题 ===
Beyond the legal and cultural dimensions, the proctoring debate raises a fundamental pedagogical question: does monitoring student behaviour during examinations improve educational outcomes, or does it merely optimize for compliance while undermining trust, intrinsic motivation, and the development of academic integrity as a personal value? 除了法律和文化层面,监考辩论还提出了一个根本性的教育学问题:在考试期间监控学生行为究竟是改善了教育成果,还是仅仅优化了合规性,同时破坏了信任、内在动机以及学术诚信作为个人价值观的培养?
The answer likely depends on context. In large-scale standardized assessments where the consequences of cheating are significant (professional licensing examinations, university entrance tests), some form of monitoring may be justifiable. In formative assessments designed to support learning rather than certify competence, monitoring may be counterproductive — replacing the educational relationship of trust between teacher and student with a surveillance relationship that undermines the very values it purports to protect. 答案可能取决于具体情境。在作弊后果严重的大规模标准化评估中(如专业执照考试、大学入学考试),某种形式的监控可能是合理的。在旨在支持学习而非认证能力的形成性评估中,监控可能适得其反——将师生之间的教育信任关系替换为一种监控关系,从而破坏了它声称要保护的价值观。
== 7. The “Brussels Effect” and Global AI Governance == == 7. "布鲁塞尔效应"与全球人工智能治理 ==
=== 7.1 The Brussels Effect in AI Regulation === === 7.1 人工智能监管中的布鲁塞尔效应 ===
The concept of the “Brussels Effect,” theorized by Anu Bradford (2020), describes the EU’s capacity to unilaterally regulate global markets through stringent domestic regulation that multinational companies adopt worldwide for cost efficiency. The question of whether the AI Act will produce a Brussels Effect is actively debated. "布鲁塞尔效应"的概念由Anu Bradford(2020)提出理论化阐释,描述了欧盟通过严格的国内监管单方面规范全球市场的能力——跨国公司出于成本效率而在全球范围内采用这些标准。《人工智能法》是否会产生布鲁塞尔效应是一个活跃的辩论话题。
Siegmann and Anderljung (2022) argue that “both de facto and de jure Brussels Effects are likely for parts of the EU AI regulatory regime,” particularly for large US technology companies whose AI systems are classified as high-risk. For education, this means that AI tools developed by global companies (Microsoft, Google, OpenAI) for use in European universities will likely be designed to comply with the AI Act’s requirements — and that these compliance features may be exported to other markets, including China. Siegmann和Anderljung(2022)认为,"欧盟人工智能监管制度的部分内容很可能产生事实上和法律上的布鲁塞尔效应",特别是对于其人工智能系统被归类为高风险的美国大型科技公司。对教育而言,这意味着全球公司(Microsoft、Google、OpenAI)为欧洲大学开发的人工智能工具可能被设计为符合《人工智能法》的要求——这些合规特性可能被出口到包括中国在内的其他市场。
However, Almada and Radu (2024) caution that the AI Act may also produce a “Brussels Side-Effect”: other jurisdictions may adopt the regulatory form (risk-based classification, documentation requirements) without the underlying rights-based substance that gives the EU framework its ethical force. 然而,Almada和Radu(2024)警告说,《人工智能法》也可能产生"布鲁塞尔副效应":其他管辖区可能采用监管形式(基于风险的分类、文档要求),但不采用赋予欧盟框架伦理力量的基于权利的实质内容。
=== 7.2 China’s Global AI Governance Initiative === === 7.2 中国的全球人工智能治理倡议 ===
China has responded to the EU’s regulatory activism with its own vision for global AI governance. The Global AI Governance Initiative (全球人工智能治理倡议), announced by President Xi Jinping in October 2023, advocates a “people-centered approach” to AI governance and calls for “mutual respect, equality, and mutual benefit” among nations in AI development. As Racicot and Simpson (2024) note, the Initiative “should be seen in the context of China’s broader foreign policy objectives: its commitments to AI governance serve China’s geopolitical ambitions, including positioning itself as the standard-setter for developing nations.” 中国以自己的全球人工智能治理愿景回应了欧盟的监管积极性。《全球人工智能治理倡议》由习近平主席于2023年10月宣布,倡导"以人为本"的人工智能治理方法,并呼吁各国在人工智能发展中"相互尊重、平等互利"。正如Racicot和Simpson(2024)所指出的,该倡议"应该在中国更广泛外交政策目标的背景下来看待:中国对人工智能治理的承诺服务于其地缘政治雄心,包括将自己定位为发展中国家的标准制定者。"
For higher education, the competition between the EU’s Brussels Effect and China’s Global AI Governance Initiative creates a complex regulatory landscape. International universities — particularly those with joint programmes spanning EU and Chinese institutions, such as those within the Jean Monnet Centre of Excellence network — must navigate both frameworks simultaneously. The companion chapter on data protection (Woesler, this volume) examines the practical challenges of dual compliance in detail. 对于高等教育而言,欧盟布鲁塞尔效应与中国全球人工智能治理倡议之间的竞争创造了一个复杂的监管格局。国际性大学——特别是那些拥有跨越欧盟和中国机构联合项目的大学,如让·莫内卓越中心网络内的大学——必须同时驾驭两个框架。关于数据保护的相关章节(Woesler,本卷)详细探讨了双重合规的实际挑战。
8. Towards Responsible AI Integration: Recommendations == 8. 走向负责任的人工智能融合:建议 ==
Based on the comparative analysis presented in this article, we offer the following recommendations for universities navigating the emerging AI governance landscape. 基于本文提出的比较分析,我们为正在驾驭新兴人工智能治理格局的大学提供以下建议。
First, develop institution-specific AI policies that translate abstract regulatory requirements into concrete guidance for faculty and students. The HEAT-AI framework (Temper, Tjoa, and David 2025) provides a useful model for institutions seeking to operationalize risk-based AI governance at the institutional level. 第一,制定将抽象监管要求转化为教职人员和学生具体指导的机构特定人工智能政策。HEAT-AI框架(Temper、Tjoa和David 2025)为寻求在机构层面实施基于风险的人工智能治理的机构提供了有用的模式。
Second, invest in AI literacy as a core institutional competency — not only for students but for faculty and administrative staff. The EU AI Act’s literacy mandate (Article 4) provides a regulatory impetus, but the pedagogical imperative exists independent of regulation: faculty who do not understand the AI tools their students use cannot teach or assess effectively. 第二,将人工智能素养作为核心机构能力进行投资——不仅面向学生,也面向教职人员和行政人员。欧盟《人工智能法》的素养要求(第4条)提供了监管动力,但教育学上的紧迫性独立于监管之外:不了解学生使用的人工智能工具的教职人员无法有效地教学或评估。
Third, distinguish between AI governance for research and AI governance for assessment. The Chinese approach, which broadly permits AI use in research while restricting it in examinations and theses, provides a pragmatic distinction that European institutions would benefit from adopting. Research and assessment serve different purposes and face different risks; a single AI policy cannot adequately address both. 第三,区分研究领域的人工智能治理和评估领域的人工智能治理。中国的方法广泛允许在研究中使用人工智能,同时在考试和论文中加以限制,提供了一种务实的区分方式,欧洲机构将从采纳中获益。研究和评估服务于不同的目的,面临不同的风险;单一的人工智能政策无法充分解决两者的问题。
Fourth, design assessment methods that evaluate distinctly human contributions — critical analysis, original thinking, creative synthesis, ethical reasoning — rather than relying on AI detection tools whose accuracy is contested and whose use may undermine trust between teachers and students. 第四,设计评估方法来评价人类独特的贡献——批判性分析、原创思维、创造性综合、伦理推理——而不是依赖准确性受到质疑且可能破坏师生信任的人工智能检测工具。
Fifth, engage in cross-border dialogue. The EU-China comparison reveals that each system has developed insights that the other lacks. European institutions can learn from China’s speed of implementation and its integration of AI competencies across all disciplines. Chinese institutions can learn from the EU’s emphasis on ethical frameworks, faculty governance, and the protection of student rights. Joint programmes, exchange visits, and collaborative research — such as those facilitated by the Jean Monnet Centre of Excellence programme — provide concrete mechanisms for this dialogue. 第五,参与跨境对话。欧盟-中国的比较揭示了每个体系都发展出了另一个体系所缺乏的洞见。欧洲机构可以从中国的实施速度以及将人工智能能力融入所有学科的做法中学习。中国机构可以从欧盟对伦理框架、教职人员治理和学生权利保护的强调中学习。联合项目、交流访问和合作研究——如让·莫内卓越中心项目所促进的——为这一对话提供了具体机制。
9. Conclusion == 9. 结论 ==
The governance of AI in higher education is a challenge that no single regulatory framework has yet adequately addressed. The EU AI Act provides the most comprehensive legal framework, but its abstract provisions require institutional translation, and its implementation faces a significant readiness gap. China’s sector-specific approach provides more targeted guidance for educational contexts, but its effectiveness depends on consistent institutional implementation and risks fragmentation across regulatory instruments. 高等教育中人工智能的治理是一项尚未被任何单一监管框架充分应对的挑战。欧盟《人工智能法》提供了最全面的法律框架,但其抽象条款需要机构层面的转化,且其实施面临显著的准备度差距。中国的行业特定方法为教育情境提供了更有针对性的指导,但其有效性取决于一致的机构执行,且面临跨监管工具碎片化的风险。
The most promising path forward combines elements of both approaches: the EU’s commitment to fundamental rights, risk-based classification, and ethical oversight; and China’s willingness to mandate AI literacy at scale, its pragmatic distinction between research and assessment contexts, and its speed of institutional adaptation. Neither system can learn everything from the other — the cultural, political, and legal differences are too profound. But each can learn something, and the dialogue between them is itself a valuable contribution to the global challenge of governing AI in education responsibly. 最有前景的前进道路结合了两种方法的要素:欧盟对基本权利、基于风险的分类和伦理监督的承诺;以及中国大规模推行人工智能素养的意愿、在研究与评估情境之间的务实区分以及机构适应速度。两个体系都不可能从对方身上学到一切——文化、政治和法律差异太过深刻。但每个体系都可以学到一些东西,它们之间的对话本身就是对负责任地治理教育中人工智能这一全球挑战的宝贵贡献。
As UNESCO’s Guidance for Generative AI in Education and Research (Miao and Holmes 2023) reminds us, “the absence of national GenAI regulations in most countries leaves user data privacy unprotected and educational institutions largely unprepared.” The EU and China are among the few jurisdictions that have moved beyond this absence toward substantive governance frameworks. Their experiences — successes and failures alike — will shape how the rest of the world approaches the challenge of AI in higher education. 正如联合国教科文组织的《生成式人工智能在教育和研究中的指导》(Miao和Holmes 2023)提醒我们的:"大多数国家缺乏国家层面的生成式人工智能法规,使用户数据隐私得不到保护,教育机构在很大程度上缺乏准备。"欧盟和中国是少数已经从这种缺失走向实质性治理框架的管辖区。它们的经验——无论是成功还是失败——将塑造世界其他地区应对高等教育中人工智能挑战的方式。
Acknowledgments == 致谢 ==
This research was supported by the Jean Monnet Centre of Excellence “EU-Studies Centre: Digitalization in Europe and China” (EUSC-DEC), funded by the European Union under Grant Agreement No. 101126782. Views and opinions expressed are those of the author only and do not necessarily reflect those of the European Union. 本研究得到了让·莫内卓越中心"欧盟研究中心:中欧数字化"(EUSC-DEC)的支持,由欧盟资助协议编号101126782提供资金。所表达的观点和意见仅代表作者本人,不一定反映欧盟的立场。
References == 参考文献 ==
Almada, M., & Radu, A. (2024). The Brussels Side-Effect: How the AI Act can reduce the global reach of EU policy. German Law Journal, 25, 646–663. [REF] Almada, M., & Radu, A. (2024). The Brussels Side-Effect: How the AI Act can reduce the global reach of EU policy. German Law Journal, 25, 646–663.
Aristombayeva, M., Satybaldiyeva, R., Maung, B. M., & Lee, D. (2025). Guiding the uncharted: The emerging (and missing) policies on generative AI in higher education. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1644081. [REF] Aristombayeva, M., Satybaldiyeva, R., Maung, B. M., & Lee, D. (2025). Guiding the uncharted: The emerging (and missing) policies on generative AI in higher education. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1644081.
Asia Education Review. (2025). China makes AI education mandatory in schools starting September 2025. [REF] Asia Education Review. (2025). China makes AI education mandatory in schools starting September 2025.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press. [REF] Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
China Daily. (2026, January 12). Fudan University sets AI education guidelines. [REF] China Daily. (2026, January 12). Fudan University sets AI education guidelines.
Chun, J., Schroeder de Witt, C., & Elkins, K. (2024). Comparative global AI regulation: Policy perspectives from the EU, China, and the US. arXiv preprint, arXiv:2410.21279. [REF] Chun, J., Schroeder de Witt, C., & Elkins, K. (2024). Comparative global AI regulation: Policy perspectives from the EU, China, and the US. arXiv preprint, arXiv:2410.21279.
EDUCAUSE. (2024). 2024 EDUCAUSE Action Plan: AI Policies and Guidelines. [REF] EDUCAUSE. (2024). 2024 EDUCAUSE Action Plan: AI Policies and Guidelines.
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. [REF] European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
Freeman, J. (2025). Student Generative AI Survey 2025. HEPI Policy Note 61. Higher Education Policy Institute. [REF] Freeman, J. (2025). Student Generative AI Survey 2025. HEPI Policy Note 61. Higher Education Policy Institute.
Hilliard, A., Gulley, A., Kazim, E., & Koshiyama, A. (2026). Artificial intelligence policy worldwide: A comparative analysis. Royal Society Open Science, 13(2), 242234. DOI: 10.1098/rsos.242234. [REF] Hilliard, A., Gulley, A., Kazim, E., & Koshiyama, A. (2026). Artificial intelligence policy worldwide: A comparative analysis. Royal Society Open Science, 13(2), 242234. DOI: 10.1098/rsos.242234.
Liu, X., et al. (2025). Regulations, technology policies and universities’ attitudes to artificial intelligence in China. Higher Education Quarterly, 79(4). DOI: 10.1111/hequ.70055. [REF] Liu, X., et al. (2025). Regulations, technology policies and universities’ attitudes to artificial intelligence in China. Higher Education Quarterly, 79(4). DOI: 10.1111/hequ.70055.
Miao, F., & Holmes, W. (2023). Guidance for Generative AI in Education and Research. Paris: UNESCO. [REF] Miao, F., & Holmes, W. (2023). Guidance for Generative AI in Education and Research. Paris: UNESCO.
Migliorini, S. (2024). China’s Interim Measures on generative AI: Origin, content and significance. Computer Law and Security Review, 53, Article 105992. [REF] Migliorini, S. (2024). China’s Interim Measures on generative AI: Origin, content and significance. Computer Law and Security Review, 53, Article 105992.
Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems. Education and Information Technologies, 26, 6421–6445. [REF] Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems. Education and Information Technologies, 26, 6421–6445.
O’Shaughnessy, M., & Sheehan, M. (2023). Lessons from the world’s two experiments in AI governance. Carnegie Endowment for International Peace. [REF] O’Shaughnessy, M., & Sheehan, M. (2023). Lessons from the world’s two experiments in AI governance. Carnegie Endowment for International Peace.
Racicot, R., & Simpson, K. H. (2024). China’s AI Governance Initiative and its geopolitical ambitions. Centre for International Governance Innovation (CIGI). [REF] Racicot, R., & Simpson, K. H. (2024). China’s AI Governance Initiative and its geopolitical ambitions. Centre for International Governance Innovation (CIGI).
Robert, J., & McCormack, M. (2024). 2024 EDUCAUSE AI Landscape Study. EDUCAUSE. [REF] Robert, J., & McCormack, M. (2024). 2024 EDUCAUSE AI Landscape Study. EDUCAUSE.
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI and Society, 36, 59–77. [REF] Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI and Society, 36, 59–77.
Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence. arXiv preprint, arXiv:2208.12645. [REF] Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence. arXiv preprint, arXiv:2208.12645.
Temper, M., Tjoa, S., & David, L. (2025). Higher Education Act for AI (HEAT-AI): A framework to regulate the usage of AI in higher education institutions. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1505370. [REF] Temper, M., Tjoa, S., & David, L. (2025). Higher Education Act for AI (HEAT-AI): A framework to regulate the usage of AI in higher education institutions. Frontiers in Education, 10. DOI: 10.3389/feduc.2025.1505370.
Woesler, M. (this volume). Learning a foreign language with and without AI: An empirical comparative study. [REF] Woesler, M. (this volume). Learning a foreign language with and without AI: An empirical comparative study.
Woesler, M. (this volume). Student data protection in the digital university: GDPR and China’s PIPL compared. [REF] Woesler, M. (this volume). Student data protection in the digital university: GDPR and China’s PIPL compared.
Woesler, M. (this volume). University of the future: AI-enhanced higher education between European humanism and Chinese innovation. [REF] Woesler, M. (this volume). University of the future: AI-enhanced higher education between European humanism and Chinese innovation.
Xue, Y., Chinapah, V., & Zhu, C. (2025). A Comparative Analysis of AI Privacy Concerns in Higher Education: News Coverage in China and Western Countries. Education Sciences, 15(6), 650. DOI: 10.3390/educsci15060650. [REF] Xue, Y., Chinapah, V., & Zhu, C. (2025). A Comparative Analysis of AI Privacy Concerns in Higher Education: News Coverage in China and Western Countries. Education Sciences, 15(6), 650. DOI: 10.3390/educsci15060650.