Difference between revisions of "Rethinking Higher Education/Chapter 2"

From China Studies Wiki

Revision as of 05:53, 8 April 2026

Teaching Means – Humanity and AI from a Philosopher’s View

Ole Doering

Hunan Normal University

Abstract

This paper discusses the nature of ‘AI’ as a challenge for human self-understanding and a task for pedagogy: to come to terms with it using the best of our knowledge. From a philosophical perspective, the outlook depends upon the chosen anthropology, which could integrate the biological and social dimensions of technology literacy. The ways in which human learning is naturally programmed give us hope and dignity regarding our ability to refresh our education system accordingly, stimulated by the need to come to terms with ‘AI’.

1 Introduction

How should we make sense of the phenomenon that ‘AI’ has become a connecting theme for humanity in the 21st century? How realistic is this theme? Is ‘AI’ already established as a ubiquitous technology in university teaching, or is its common use just beginning? Are the questions related to its suitability and application fundamentally the same everywhere, or are they context-sensitive? What is the perception of purpose and best practice in education, and how could Europe and China learn from each other when exploring standards of its implementation?

In this paper, I am looking at the use of ‘AI’ in teaching, learning and pedagogy, through the lens of a healthy relation and reasonable human attitude towards AI, considering the objective of learning, the characteristics of this technology and the nature of humanity. The title includes an ambiguous term, ‘teaching means’. Its semantics encompass the interrelation between teaching and its means (tools and measures), what it actually means to teach (instead of preaching or mimicking), to instruct in the proper use of suitable means, and the implements for good teaching. The coherence of this ambiguity is established through philosophical acts of alignment of humanity and ‘AI’, that is, enquiring into the use of language and the operations of coherence and integrity.

The public marketing of the technology uses ‘AI’ as a general label across different industrial sectors. More accurately, one should talk about purpose-built, machine-learning-based algorithmic processors of different types of data. The meaning of ‘artificial’, especially in its relation to the natural and the cultural, is important but not standardized, so it needs to be determined. The meaning of ‘intelligence’ is even more important, because it does not merely express our understanding of the matter but is connected with the potential for agency. Hence, it is a matter of clarity and responsibility to define these core terms, as names of a technology whose perception and application have great impact on human self-awareness, on the organization of crucial activities and on the understanding of best practices, which contribute to the frameworks of curricula, regulations and social order.

Surprisingly little is being discussed regarding the proper naming of the technology, its processes and features. Considering the power of language and the responsibility to use words to convey robust or truthful meaning, this failure needs to be redressed, given the scale of the impact of imagination on social reality. It cannot suffice to point to the habitual nonchalance with which society receives the linguistic designs of interested parties. Critical responses to the instrumental usage of suggestive language, as in advertisements, policy papers, sales pitches or marketing campaigns, are largely absent; the most blatant recent example is the name ‘Meta-Verse’, promoting cyber virtual reality projections as a higher form of ontology, replacing metaphysics with wishful thinking. Whereas the metaverse has apparently failed, owing to a lack of infrastructure for both hardware and software, a monopolistic approach to platform development, and a lack of clear governance standards, the same vigor of hype and hubris is at work in portraying ‘AI’ as a novel entity beyond human comprehension, as happened with several previous technology breakthroughs driven by marketing expectations, including genomics as revolutionizing health or information technology as revolutionizing knowledge. On one hand, the revenues of some industries and many careers were boosted. On the other hand, a sizable sustainable impact on social welfare, stability, health and knowledge has yet to be seen, owing to exaggerated expectations, misleading perceptions and the limited shelf-life of shallow promises.

Intelligence is understood as a function of reason (Vernunft) or rationality (Verstand), making spontaneous connections between patterns of knowledge and perceptions as part of the operations of experience (Erlebnis and Erfahrung). The more specifically defined German terminology is used where this adds to the desired precision of the analysis. This connectivity is a natural technical feature of human intelligence, but not reducible to cognitive performance. In humans, intelligence is an activity and quality, woven into the entire fabric of sensuality, perception and reflection, the extended body (Leib), social interaction, empathy, language, guessing, learning by doing, and so on. Intelligence is incomplete, alive, contradictory, generic, repetitive and conservative, spontaneous and functional. Re-making it artificially is possible by reduction and simulation, through models and incremental advancement. The astonishing progress in the related science and technology should inspire sober critical scrutiny rather than hype or hubris.

This paper is based on a lecture and takes the form of an annotation to a debate that ought to take place.

2 Humanity in AI

Language is the expressed experience of the operation of reason, within a culture and across cultures, connected by human reason-gifted nature. Learning in the sense of extending knowledge is the continuation of this experience, in a pedagogical way that engages natural and reflected language operations. Pedagogy keeps this process of engagement practical, according to human standards and individual capabilities.

2.1 The human position

In the perspective of the historical evolution of humanity, the meaning of ‘AI’, as an efficient set of invisible tools based on mathematical processes and simulations of language and cognition, should be understood in relation to natural language and learning, namely, how to use it well.

Language represents knowledge culture and cultured knowledge. Knowledge culture is a social-epistemic environment that encourages, favors and supports knowledge as befits human beings according to our nature. Cultured knowledge is the naturally grown structure and content of what we have learned, in ways that accommodate humanity. Technology is a priori instrumental; no matter its degree of intellectuality, materiality or technicality, it is technology only when it can be, and is meant to be, used to serve imagined ends. The ambiguous, often metaphorical language of technology description can confuse the relationship between humanity and technology and its proper understanding. Typically, this ambiguity, which is characteristically rooted in the instrumental ontology of technology, takes the form of either disproportionate faith in what technology can achieve (thereby ‘forgetting’ the subject agent), or pre-emptive submission of human inferiority under its fictitious omnipotence (thereby ignoring the limitations of a mere object). When combined with hype, hubris or strategic communication, this confusion can be naturally amplified or purposefully manipulated, so that consumption and obedience remain the sole human roles. However, it can also be countered by enlightenment, that is, by exercising reasonability, proportionality and humility, so that best use can be enabled.

Therefore, responsible, sincere and accurate language is needed for a clear understanding of, and a healthy attitude towards, our technologies, especially those that directly affect our cognition through simulation or speech. Thereby, we can overcome ontological confusions, for example when redundant activities such as gaming are misconstrued as ‘play’, and epistemic confusions, such as when quality is measured by outcomes instead of performance, that is, beginning with the input.

An educated view on language will help establish a position in which ‘I talk about what I know, and at the same time, know what I talk about, and how, considering the purpose’ of my speech. This means that language is instrumental for speaking truthfully about ‘AI’, rather than using ‘empty words’ without language. When facing an instrumental object, this relationship should be defined: first, whether and in what sense it is reasonable to trust machines more than humans. This leads to the requirement of shared roles. Can we trust humans to control machines so that they will serve our purpose and not just function according to design? With this attitude, ethics and science converge in a culture of deliberate humility, caution and care.

2.2 The economic angle

As with many other technological innovations before, ‘AI’ inspires debate over the future of traditional ways of life. For example, the ‘Generative Pre-trained Transformer’, namely GPT, shows potential to expedite the mechanical operations of linguistic functions by orders of magnitude, provided energy and data are sufficiently well supplied. It might revolutionize online research or the translation of written and spoken text. Naturally, there is concern about this technology’s impact on human work. Will work become obsolete? Can work be replaced by production? In the past, such fears were based on evidence from the labor market. Mechanically repetitive activities in the workflow were automated; then automation replaced several areas of blue-collar work, while at the same time increasing demand for specialized or generalized white-collar activities.
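The phrase ‘mechanical operations of linguistic functions’ can be made concrete with a deliberately simple sketch (my own illustration, not part of the original text, and a vast simplification of what GPT-style models actually do): a bigram table that predicts the next word purely from frequency statistics over prior text. The function and variable names here are invented for the example.

```python
# Toy sketch: next-word prediction as a purely mechanical, statistical
# operation on text. Real transformer models are enormously more complex,
# but the principle -- prediction from patterns in prior data -- is the same.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    table = defaultdict(Counter)
    for word, nxt in zip(words, words[1:]):
        table[word][nxt] += 1
    return table

def most_likely_next(table, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in table:
        return None
    return table[word].most_common(1)[0][0]

corpus = "to teach is to learn and to learn is to grow"
table = train_bigrams(corpus)
print(most_likely_next(table, "to"))  # prints 'learn' (most frequent successor)
```

The point of the sketch is the one made in the text: nothing here understands language; the apparatus merely processes data according to design.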

Notably, the strategies of human resources development follow specific economic rationales, which are determined by policies and politics that define what is valuable and incentivized. The investment from society, direct and indirect, into the making of these opportunities and technological advantages was taken in without due consideration of the difference between the market price and the economic value of the machines. The practice of combined manufacturing replacement and social displacement was driven by opportunity that partly depended on oversight or carelessness. More sustainable options for a constructive employment of machines were not explored; the focus was only on re-training or social aid for laid-off employees. Compensation for the added value and social cost of the inventions, of the kind normally levied as taxes or fees, was not even considered. Much of the cascading double effect of accelerated innovation and social problems could have been compensated from the onset, with replacement taxes or social dues in proportion to the investment from society which enabled these developments. Whatever the means, the gain would be time: for learning, studying, planning, adding social benefit.

There would also be a cultivating effect, by taking some desperation out of the greed for quick revenue. Buying time would also mean a re-allocation of this most precious resource to a variety of value-providers in society, for the benefit of a more sustainable economic development. Considering that most of the advantages of AI-driven inventions lie in gains of efficiency, with indifference as to effectiveness, there can be various scenarios for additional trade-offs if such a strategy were pursued. This cultivating effect would earn the revenue of consumers’ maturity, by allowing them to get used to the new tools, to see them as objects and to find out how they matter. The absence of such cultivation efforts translates into the need for general and special education, namely in technology use and economics. This is evident from the standpoint of humanity: serving the people and leaving no one behind on the path of democratic modernization.

Within such an environment, work, as an anthropological necessity and a source of dignity in society, would be rehabilitated as a social good connected with education. It would increase social integrity, counter new stratification and support sustainable development. The fact that we are not used to leading such debates publicly in a reasonable way, but are driven by hasty responses to relentless new market impulses, indicates the need to dig deeper than mending superficial phenomena, opportunities or conflicts. This is the first contribution to a debate on ‘AI’ emancipated from the technological rationale.

2.3 Aesthetic quality

Aesthetics in general connects coherent perception with normative judgement, transcending what is beautiful into the concept of beauty, what is good into the concept of goodness, or what is truthful into the concept of truth, so that such concepts can carry the learning momentum teleologically towards the concept of cultivation.

This prospect, however, is enabled by human experience: of the world, of oneself and of intellectual reflection. Our dreams of perfection remain as ambiguous as they are ambitious, unless they are matched, balanced and integrated by the experience of agency. The most inclusive microcosm to which aesthetics can be applied is one’s own extended human body (Leib), as the agent of cultivation and education. This is where we distinguish the original from simulation, responsible agency from play, meaningful coherence from logical plausibility, purpose from function, dignity from value, work from activity, learning (Bildung) from training (Ausbildung). This is also where we revisit the logical relevance of classical forms of indirect speech, such as analogy, metaphor, correlation, irony, humor or allusion.

As a social entity, such a body has a face. Can we give it a real human face and purpose in education? Even when it is convenient, appealing or consoling to offer a substitute instead of the natural you, a perfect simile of a human caregiver, such as in nursing or teaching, deploying humanoid robots to handle the needs of vulnerable groups may be less risky socially than cheating well-off citizens with a friendly mask, but it is certainly unfair. The resources saved through this shortcut (if not hoarded miserly) could better be invested in the real development of social relationships that can function well under the conditions of the 21st century.

A human face and human shape belong to humans, not machines. Irony, arts and comedy have taught modern citizens the value of liberal playfulness with anything, challenging traditional taboos, in order to overcome limitations of humanity that were based on external powers (such as political institutions or religious authorities). However, there is a necessary natural tension between playful expression and transgression on one side, and human social sustainability on the other; social freedom is limited by the freedom of the other, which includes cherishing human nature or tradition. The margins of tolerance must be re-negotiated as technology advances, so as to prevent simulation and mimicry design from becoming novel forms of fraudulent language. Therefore, traditional knowledge about simulation and truth should be mobilized.

The same applies to the wisdom of language. This is not a condescending phrase to mollify conservative thought, but a philosophical insight formulated by Gadamer: ‘Language is the universal medium in which understanding itself takes place. The manner in which understanding is realized is exegetic.’ In other words, the meaning is in the use of the language. This is where the value of analytical approaches to philosophy begins.

When we talk about digital inventions, we cannot start with smartphones, but with tools and language itself, as the physical and intellectual implements of digitalization. The word ‘digital’ derives from the Latin digitus, meaning ‘finger’, and extensions of the hand: wooden sticks and stone tools mark the beginning of the human ability to shape the material world according to purpose and thereby establish new horizons. This is literally the beginning of manufacturing (using hands to make something). Understanding the meaning of technology, as the means to transform ‘that which is given’, data, into ‘that which is made’, facts, helps us appreciate what matters in learning to manu-facture.

This enquiry also helps us to take one further step towards mapping the conceptual landscape of learning and ‘AI’, namely to connect ontology and epistemology. Manipulating the matter of symbols of meaning, organically and mentally, takes the form of reading and writing, that is, the ability to draft alternative designs of the world, of truth, beauty and good, and to create space for imagination, that is, for exploring new ways to play according to, and even test, known rules, thereby inspiring culture. Using language instrumentally, in speech and writing, marks the beginning of the human ability to connect across time and space according to purpose, that is, of intellectual learning. In the same vein, misunderstanding and fault, twisting and lying enter cultural life. The aesthetic perspective makes sure that there remain options for aligning different approaches to using ‘AI’ within one trajectory of wholesome cultivation, culminating in the connotations and expressions of language.

2.4 Our task

The practice of knowledge changes. For humans, it remains naturally digital. Culture is a natural human technology. Culture and technology are functional expressions of human nature. Over time, we disconnect technical learning from evolutionary (practical) learning, in acts of specialization. This is paralleled by scientific reductionism in disciplines within the field of science (Wissenschaft). In both cases, how can we assure the humanity-driven alignment of areas of knowledge? Should the emerged silos be re-synchronized, synthesized or transformed into a new kind of competence, suited for the 21st century, in a healthy manner? How can we learn to program methodical specialization so that it can serve the evolution of humanity?

Our capability to manipulate the world extends macro- and microscopic boundaries, feeding fantasy and fiction to drive the imagination. What had been the traditional projections of the margins of humanity, such as the golem, homunculus, androids or cyborgs, now seems to require new, timely images. Traditional narratives were carried by imagination based on very little experience, typically under the provisional label of science fiction. Now, we need names and stories that are based on updated experience with the social practice of technology applications in biology, information processing, chemistry or physics. Such narratives cannot be chiefly shaped by economic or sci-tech stakeholders, whose language and interests are not fully rooted in society. They cannot realistically be left to experts or natural social evolution either, because we have learned that the power of technology development and marketing mutes common sense. Certainly, hyped fears and promises, dramatized expectations and premature market entry are no new phenomena. However, the impact of unresolved issues with existing, prematurely implemented products, such as smartphones, is incrementally increasing the uncertainty of ensuing risks, while at the same time the cultural resilience required for proper assessment and societal processing has been eroding over decades.

This is evident in three areas, on different levels.

First, there is a widely nurtured confusion about agency. When talking about ‘what AI does’, the grammatical subject and object must be clearly distinguished from the real subject and object. The dynamic composite artefact we call ‘AI’ owes its entire existence, functionality and evolution to human agency, no matter how closely its behavior agrees with the intended performance. We should avoid socially rich semantic connotations, such as ascribing learning, thinking or suggesting to ‘AI’. A technology deserves technical terminology, not emotionalized labeling. The suggested anthropomorphism begs the question of an emerging self-consciousness or even conscience, and inflates the probability of increasingly automated perfection, even when the axiomatic assumption that this is impossible for a priori reasons is not shared. This is not a benign slip (any longer), because the very purpose of the technology is to amplify efficiency, that is, power. Thereby, by design and default, minuscule issues, faults, risks or dangers are amplified as well. What used to be benign and speculative might now have huge and deep real consequences. This is why painstaking scrutiny of the propriety of language is called for.

Second, compared with the cycles of social accommodation to earlier technology immersions in society, the sequences in which new dimensions of efficiency and impact are being introduced are getting ever more hasty and truncated. Here ‘introduction’ is a euphemism, because the insinuated social decency is not observed. Scientific disciplines such as technology assessment (Technikfolgenabschätzung), technology ethics and informatics (Informatik), together with general skills in understanding these inventions, have not been established as major subjects. This implies that most citizens are virtually illiterate when it comes to ‘AI’.

Third, the substitution of human activity by machines is part of their desired purpose. However, it must be reasonably planned, with suitable measures attached, in proportion to the entire process, not just investment and revenue. As mentioned above in the passage about work and production, what applies now in the area of language should have been applied in economics decades ago, namely honesty about the genesis and distribution of the added value generated by machines that supplant traditional labor.

This is a typical situation that calls for institutional and structural solutions, in order to ensure the purpose and sustainability of policies. ‘AI’ literacy, concerning humans, is a germane metaphor. It merely expands the literal meaning of being able to read, calculate and write in a cultivated manner to cover the symbolic and physical operations of the hard- and software, within their ontological, economic and infrastructural context. Herein lies the obvious task for a reformed educational sector. Although applied interdisciplinary curricula should have remained, or been rehabilitated, as the gold standard in education in general, in any field, such as health, economics, history or philosophy, ‘AI’ as a social sphere (and significantly connected indeed with health, economics, history and philosophy) could be the culmination point.

The means to understand the natural balance between what we can do, what we may do and what we must do are challenged. Traditional knowledge from the classics, for prudence and pragmatism, needs revisiting: How to do the right things for the right reasons; how to know what you do and do as you know it; how to say what you mean and mean what you say, regarding what ‘AI’ is and what it means. Considering responsibility, normative prudence is a meta-level requirement of digital competence (in morality, law and ethics).

2.5 Pedagogy: How to learn from being prepared for ‘AI’

Bringing the language, the conceptual and institutional discourse about ‘AI’ and pedagogy towards a timely state of experience and knowledge, practically means to coordinate science and nature so that we can support cultivated learning.

The most basic related understanding derives from applied holistic theories about learning programs, in combination with neurological research on actual brain development, as the biological habitat for learning. Humans learn from integrated processes of experience involving both individual and social factors. Philosophy offers interpretations of how to make sense of the overall practical implications of this disconnected multi-disciplinary knowledge.

Considering learning, the first primary experience (Erleben) spontaneously connects the sensual individual with the environment. This is not a neutral act but a keynote for the learning biography and the epistemic career. The quality of this connection determines the range of involuntary perception, together with the subjective mode of learning, how it develops a ‘feeling for’ judgement. In particular, Erleben preconditions secondary experience (Erfahrung), that is, the consciousness-building processes of connectivity events, coordinated according to the patterns of cognition and rationalized with the capacities of concepts (Verstand) and principles (Vernunft).

The external connectivity continuum of learning is explained by the father of modern pedagogy, Johann Heinrich Pestalozzi (1746-1827), as the systematic unfolding of ‘learning-by-doing’ exercises, from observation to hands-on experience during work, as methodic interconnection with the tangible environment. Learning through his own failed experiments, he moved from theological and technocratic dogmatism towards the appreciation of mathematics as an interpretive approach to human nature, learning patterns from working (as distinguished from production). This makes his theory accessible and relevant for contemporary enquiries into the conditions of learning best with and about ‘AI’, as humans.

He thus explains how algorithms, their architecture and the operations at work in ‘AI’ should serve natural human needs. ‘My method does nothing other than reproducing the simple course of nature.’ ... ‘Every sensitive perception deeply imprinted in the human mind triggers a series of secondary notions that come more or less close to this perception...thus bringing together objects whose essence is the same; your understanding of the inner truth of these objects will be expanded, sharpened, and strengthened’. Such education nurtures literacy and sovereignty over the technology.

On the other hand, the internal connectivity of learning is explained by neuro-biological research, under the term ‘brain-friendly learning’. For ‘AI’ this is relevant because the axiomatic neural network models draw on the conceptual coherence, or aesthetics, the generic connectivity and plasticity of natural models, and take motivational orientation from prompts. Hence, there appear to be significant overlapping heuristics between the biological and technological models, which can be explored for the purpose of clarifying the conceptual relationship and the alignment of developmental trajectories, such as human wellbeing.
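The overlap between biological plasticity and artificial ‘learning’ can be illustrated with a minimal sketch (my own toy example, not drawn from the text or from neuroscience literature): in both cases, connection strengths are adjusted by experience. A single perceptron trained on the logical AND function shows this mechanism in miniature; all names and parameters here are invented for illustration.

```python
# Toy sketch: weight adjustment as a crude analogue of plasticity.
# A single perceptron 'learns' the logical AND function by nudging its
# connection strengths (weights) whenever its output misses the target.

def step(x):
    """Threshold activation: fire (1) or stay silent (0)."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Repeatedly adjust weights toward the targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # The 'plastic' step: strengthen or weaken connections by error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
results = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
print(results)  # prints [0, 0, 0, 1]: the AND function has been learned
```

The analogy is heuristic only, as the text cautions: the mechanism resembles plasticity in form, not in substance.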

Proper pedagogy is efficient and effective. It pursues its purpose, namely education, with the least possible collateral damage and the greatest desired developmental impact on the individual. From the perspective of the biology of ‘brain-friendly learning’, pedagogy, with its curricula and institutions, avoids damaging or misguiding interventions while carefully amplifying natural processes. Notably, amplifying is different from simulation or correlation, because it directly manages intentional causality, without pretence or even the need to theorize. Regarding organic structures, such pedagogy utilizes the mechanism of neuro-plasticity as a property for learning, with the goal of making best use of education, as a humanistic technology, to handle technologies. On the theoretical level, it avoids closed concepts, speculative ideologies and the justification of shame or the ‘blunt force of perfection’ as means to the end of education, such as in the measures of body transformation envisaged by proponents of trans-humanism.

‘Brain-friendly learning’ does not just mean treating the brain in a friendly manner; it also, reversely, allows us to be treated in a friendly way by our body while we strive to advance, connecting all human resources into a natural learning hub. At the very beginning, it activates the ‘joy’-ful (homoeostatic) experience (Erlebnis) in connections with the environment, combining pre-conscious responses to need and first experiences (Erfahrungen) of causality, power and relatedness.

‘All newborns possess a certain repertoire of behavioral reactions which are activated in the course of, or together with, the activation of the central stress responsive systems when their homeostasis is threatened by cold, hunger, thirst etc. … [The] early recognition of the controllability of a stressor by an own action is one of the earliest associative learning experiences of a child and it has a strong imprinting impact on the developing brain.’

In other words, ‘brain-friendly learning’ cultivates human propensity for joy (Freude), as a positive and powerful educational Leitmotiv. It shapes and manages learning as organic growth, driven by human purpose, not as augmentation with dead matter or coercive molding. It supports life-long learning, by emancipating curiosity from greed, fear or narrow pragmatism. Thereby, it enables healthy attitudes, literacy and genuine competence towards technology.

As with the invention of previous tools, such as the pocket calculator or the laptop computer, ontological confusion and the lure of convenience can challenge a society’s maturity to manage innovation in its best interest. The attention of discourse, market forces and regulation is often distracted from long-term welfare to short-term promises. Along these lines, foresight and responsibility in the innovation and implementation of technologies have not gathered the strength and strategy needed for a reasonable discourse.

As Australian researchers of ‘Education Futures’ recently observed, responding to an MIT study of early epidemiological data on ‘AI’ in education that warned about significant risks of perceived epistemic frustration and even ‘brain rot’: ‘AI can indeed be detrimental. Students can for the most part offload critical engagement with learning to AI, which results in “metacognitive laziness”. However, just like calculators, AI can and should help us accomplish tasks that were previously impossible – and still require significant engagement. For example, we might ask teaching students to use AI to produce a detailed lesson plan, which will then be evaluated for quality and pedagogical soundness in an oral examination’.

This discussion can help to de-dramatize, inform and recalibrate the debate while clearly advocating a better understanding of the entire field of practice. It will not remain merely apologetic when the ‘accomplished tasks’ have a value that justifies such ‘significant engagement’, not as an end in itself but for commonly acceptable reasons. A social perspective moderated by philosophical enquiry can refine the understanding of what ‘AI’ is and ‘does’, in human terms. The connectivity of the relevant disciplines, and perhaps even of industries and governance bodies, can be strengthened in order to put innovation under legitimate societal control, so as to govern the economics, the justice and the dignity of using and controlling ‘AI’.

The ambiguous language of technology, combined with commercial hype, can blur the acute attentiveness teachers need in order to listen to and empathize with students. Students need caring, truthful and helpful guidance, learning when to use their own minds, eyes and hands, and when to use machines. Responsible language has been neglected in the ways we talk about the technology and ourselves, for example when we confuse play and games, performance and output measures, the same and the similar. The deepest source of connectivity when making ‘AI’ serve humanity is to learn from language, namely as embedded deep systems of knowledge, including natural, social, mathematical and biological languages, which all combined contribute to culture.

Going back to simple philosophical language, a scientific attitude to language will make sure that, while ‘I talk about what I know, I at the same time know what I talk about, and how, considering the purpose’. The coherence of intention and action, of theory and practice, of name and object cannot be taken for granted, but it can be supported by the above-mentioned contributors to culture.

3 Conclusion

What can we learn from this discussion? First, about the attitude that goes together with perception and expectations: this enquiry is inspired not by fear but by reasonable hope and idealism. It is based upon the assumption that a proper understanding of ‘AI’ is possible, that is, a holistic, socially meaningful and normatively instructive understanding. Second, about language, as a living body of expressive symbolic meaning that can connect nature, technology and culture, especially through its mathematical features. Third, about society, as the habitat and laboratory of understanding: not just a market or an object of governance, but a participant in the quest for the understanding that matters. Fourth, about learning, as a constructive, playful extension of rules-knowledge that embeds agency in language and opens access for pedagogy to individualize cultivation. Fifth, about technology, as a set of tools which humans have created without fully anticipating their properties and consequences, and which we hence need to learn to master for human purposes, in particular by allowing time to mature and inspiration for alternative approaches. Sixth, about the psychology of self-reliance, joy, empathy and collaboration: to trust humanity first and not attach faces to machines, understanding that, unlike reasonable confidence regarding functionality and probability, trust cannot apply to machines. Engineers can be trustworthy and trusted; their constructions can, hopefully, be relied upon. Seventh, about pedagogy: we learn to be prepared for the prospective continuum of dealing with known and unknown unknowns, so as to manage risk-related decision-making in increasingly complex environments. Eighth, about caring: we understand the need to upgrade our attention and to rehabilitate the importance of the ubiquitous human factor. Interdisciplinary competence can enable society to make sense of ‘AI’ and to take responsible action regarding it. Namely, the nature of the matter must be properly established and the normative stakes must be fully described, so that practical judgements regarding specific ‘AI’ issues are well grounded.

To conclude this programmatic brief: what matters most regarding the meaning of AI in teaching is the following. A thoroughly innovative discussion of technology, learning and language is overdue, involving all cultures. After decades of systemic forgetfulness about the demand for a holistic and universal integration of the eutrophic and dystopic growth of specific, positive and technical kinds of knowledge, and in particular after institutional monoculture and general ignorance of the role of language in science, the mathematical structure and linguistic substance of AI technologies now become even more tangibly practical than information and biotechnologies, which had taken the lead over the past six decades of globalized impact on world economies. In fact, the economy is the social area where science and education meet, ideally in order to go hand in hand, or else to propel alienation and injustice between humanity and our products.

Learning theories hinge upon who we want to be (anthropology) as technology drivers. The best role of AI in teaching depends upon the relationship we define towards technologies: whether to use them or to serve them means first deciding whether we dare maturity. Do we trust in our human condition, or do we feel shame in front of our creations? Thereby, ‘AI’ becomes a matter of morality, knowledge and style.

Literature

Adler, Mortimer J. and Van Doren, Charles. 1972. How to Read a Book. New York: Simon and Schuster.

Anders, Günther. 2018. Die Antiquiertheit des Menschen: Über die Seele im Zeitalter der zweiten industriellen Revolution. München: C.H. Beck.

Ball, Philip. 2011. Unnatural: The Heretical Idea of Making People. London: Bodley Head.

Basu, Tanya (December 16, 2021). „The metaverse has a groping problem already“. MIT Technology Review. Retrieved March 12, 2024.

Döring, Ole. 2021. „Menschen und Cyborgs. Versuch einer deutsch-chinesischen Verständigung über das Menschsein 人性“. In Armin Grunwald (Hrsg.), Wer bist Du, Mensch? Transformationen menschlichen Selbstverständnisses im technischen Fortschritt. Verlag Herder: 83-109.

Gadamer, Hans-Georg. 1972. „Sprachlichkeit als Bestimmung des hermeneutischen Vollzugs“. In Gadamer, Wahrheit und Methode. Grundzüge einer philosophischen Hermeneutik. (3. Auflage). Tübingen: J.C.B. Mohr (Paul Siebeck): 374-375.

Huddleston, Tom (January 31, 2022). „‘This is creating more loneliness’: The metaverse could be a serious problem for kids, experts say“. CNBC. Retrieved April 2, 2024.

Hüther, Gerald. 2006. „Neurobiological approaches to a better understanding of human nature and human values”. The Journal for Transdisciplinary Research in Southern Africa. Vol 2, No 2, a282. DOI: https://doi.org/10.4102/td.v2i2.282

Huizinga, Johan. 1939/2004. Homo Ludens: Vom Ursprung der Kultur im Spiel. Reinbek: Rowohlt.

Jackson, Lauren (February 12, 2022). „Is the Metaverse Just Marketing?“. The New York Times. ISSN 0362-4331. Retrieved March 12, 2024.

Jung, Matthias. 2009. Der bewußte Ausdruck. Anthropologie der Artikulation. Berlin and New York.

Kamper, Dietmar. 1994. „Der eingebildete Mensch“. In: Kamper, Dietmar und Christoph Wulf (Hg.). Anthropologie nach dem Tode des Menschen. Lang: Frankfurt am Main. 273-278.

Kovanovic, Vitomir and Marrone, Rebecca. 2025. „MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated”. The Conversation. DOI https://doi.org/10.64628/AA.w4kawts34 . Retrieved October 18, 2025.

MacDonald, Keza (January 25, 2022). „I’ve seen the metaverse – and I don’t want it“. The Guardian. Retrieved January 18, 2024.

Mesquida, Peri, Pereira, Fabio Inácio and Bernz, Maurício Eduardo. 2017. „The Pestalozzi Method: Mathematics as a Way to the Truth“. Creative Education, Vol.8 No.7.

Newton, Casey (July 22, 2021). „Mark Zuckerberg is betting Facebook’s future on the metaverse“. The Verge. Archived from the original on October 25, 2021. Retrieved March 12, 2024.

Plessner, Helmuth. 1981. Die Stufen des Organischen und der Mensch: Einleitung in die philosophische Anthropologie. In G. Dux, O. Marquard and E. Ströker (Hrsg.), Helmuth Plessner: Gesammelte Schriften, Band IV. Frankfurt a. M.

Plessner, Helmuth. 1970. „Lachen und Weinen“. In H. Plessner (Hrsg.), Philosophische Anthropologie. Frankfurt: Fischer: 11-171.

Ritterbusch, George David; Teichmann, Malte Rolf et al. (February 9, 2023). „Defining the Metaverse: A Systematic Literature Review“. IEEE Access. 11: 12368–12377. Bibcode:2023IEEEA..1112368R. doi:10.1109/ACCESS.2023.3241809.

Schiff, Daniel S., Bewersdorff, Arne and Hornberger, Marie. 2025. „AI literacy: What it is, what it isn’t, who needs it and why it’s hard to define”. https://doi.org/10.64628/AAI.t3jn7atq9. Retrieved November 10, 2025.

Schneider, Hans Julius. 2009. „Transposition – Übersetzung – Übertragung“. Birk, Elisabeth; Schneider, Jan Georg (Hg.). Philosophie der Schrift. Tübingen: 145-159.

Simmel, Georg. 1998. „Der Begriff und die Tragödie der Kultur“ (1923). In Georg Simmel, Philosophische Kultur, Berlin: 195-219.

Shou, Darren. „I Want My Daughter to Live in a Better Metaverse“. Wired. ISSN 1059-1028. Archived from the original on September 10, 2021. Retrieved April 8, 2025.

Snowden, David J. and Boone, Mary E. 2007. „A leader’s framework for decision making”. Harvard Business Review. 85 (11): 68.

Soetard, Michel. 1981. Pestalozzi or the Birth of the Educator (Pestalozzi ou la naissance de l’éducateur). Peter Lang: Bern.

Wimmer, Michael. 2014. „Antihumanismus, Transhumanismus, Posthumanismus: Bildung nach ihrem Ende“. In: Kluge, Sven; Steffens, Gerd; Lohmann, Ingrid [Hrsg.]. Menschenverbesserung - Transhumanismus. Lang: Frankfurt am Main. 237-265.

Yang, J., Xie, W. and Ni, J. 2025. „A framework for AI ethics literacy: development, validation, and its role in fostering students’ self-rated learning competence”. Sci Rep 15, 38030 (2025). https://doi.org/10.1038/s41598-025-21977-5. Retrieved November 8, 2025.