AI on the curriculum?
Can system dynamics cross disciplinary boundaries and inform us about the dangers of AI in schools?
Europol, the EU's joint police and judicial body, writes in its first report on the consequences of artificial intelligence that, according to some estimates, as early as 2026 as much as 90% of the content on the internet may be synthetic, i.e. wholly or partly produced or manipulated by AI (European Union Agency for Law Enforcement Cooperation, 2022).
The report thus draws up dizzying, not to say terrifying, horizons. How will it be possible in the future to verify what is actually true - if everything looks true, but no sources can be verified and traced back?
This question is also very relevant in school, where being able to distinguish between truth and lies, but also between the truthful and less truthful, is one of the most important, but also the most difficult, things we teach students. After a few years of AI production of deepfakes and enormous amounts of text, it may prove impossible.
In the AI community, we see a divide between those who want to prioritize safety and those who are willing to take risks. The safety-oriented AI researchers believe the risks of AI are too high; better, then, not to build it. But safety orientation never wins under capitalist realism. The companies that take risks can raise large amounts of venture capital, which gives them funds for research, while the safety-oriented initiatives crumble away.
We see this phenomenon in field after field: The companies that take risks set the agenda, and drag the rest of society with them, whether it is for the good or not. The common denominator here is capital.
This may seem like a political claim, and it probably is, but it is no less political than the position that capitalism is the only possible social order, which I will soon write about.
An important question that almost no one asks is this: Why do so many, seemingly all politicians and school researchers among them, feel that AI MUST be used in schools simply because it exists?
Finnish schools are widely known for their good results, having consistently performed best among the Nordic countries in international tests. School researcher Pasi Sahlberg more than suggests that the main reason is conservatism within the Finnish school system in the face of external pressure. He says that the Finnish school has topped the PISA survey since 2001 "without admitting the six-year-olds to school, introducing national tests or following up on the OECD's advice to make the school more efficient" ("Stabil suksess", 2023). Although the quote does not prove anything in itself, one can imagine that calm, autonomy, and patience are virtues that make a school suited to cultivating deep learning in an age when the world moves ever faster.
Why then can it seem as if the willingness to change and the ability to incorporate new technology are the most important purposes of the school?
The playbook of surveillance capitalism
The parallel experiences we already have from classrooms, such as the introduction of screens and computers, call for some caution in the face of AI, since the positive effects of screen use in school are highly contested.
Ultimately, the question becomes what the school is for, and how the school authorities see the students: Is the school a place where the students should learn, and where we as a society protect them from market interests, or are the students users who will serve the interests of Big Tech?
Shoshana Zuboff writes in The Age of Surveillance Capitalism (2019) that we who use Big Tech's digital products are not subjects in the eyes of the companies, but “means to their goals”.
Naomi Klein (2023) calls the selling of disruptive technology without safety margins or ethically sustainable reflection Silicon Valley's "playbook". She refers, among other things, to how Google has photographed people's homes without their prior approval, but also scanned enormous amounts of books and art without payment.
The implementation of AI technology such as ChatGPT is an attempt to implement yet another digital paradigm shift before the population understands what is going on.
The difference is that this time it is not people's exterior, such as homes and books, that the tech companies want to capitalize on, but our inner selves.
Our way of thinking and being creative is up for grabs. The developers of AI have stolen humanity's shared knowledge and art, and want to use it for their gain, at the expense of, for example, artists and illustrators, who will thus become redundant, because the algorithm uses the artists' work to generate new "art". Klein believes the good words about everything AI will accomplish for humans are "the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history" (2023).
According to Shoshana Zuboff, Silicon Valley's business method is to quickly implement and create demand for digital products, and thereby gain ground for technological innovations ahead of legislation and policy development.
When you see how tech companies have grown since the turn of the millennium, you understand how immensely lucrative this business model is. This is how Zuboff strikingly describes the dynamics of the business model:
We are no longer the subjects of value realization. Nor are we, as some have insisted, the "product" of Google's sales. Instead, we are the objects from which raw materials are extracted and expropriated for Google's prediction factories. Predictions about our behavior are Google's products, and they are sold to its actual customers but not to us. (Zuboff, 2019, p. 94)
Zuboff demonstrates how adopting new technology ahead of legislation and policymaking was a conscious strategy of Google founders Larry Page and Sergey Brin. In that way, they established a technological status quo the politicians simply had to adapt to.
Zuboff quotes from a piece in the magazine Business Insider, where former Google CEO Eric Schmidt is interviewed.
When asked about government regulation, Schmidt said that technology moves so fast that governments really shouldn't try to regulate it because it will change too fast, and any problem will be solved by technology. "We'll move much faster than any government". (Zuboff, 2019, pp. 104–105)
Google is a company that, like other huge companies such as Meta (Facebook and Instagram), Apple, Microsoft, etc., is used extensively in schools. These companies thus make a lot of money from students' attention and personal data. Microsoft sells the Office package to many municipalities in Norway and other countries, many teachers use Facebook to create class groups, and primary schools tend to use iPads rather than books. Currently, there is very little awareness in the school system of the vested interest these companies have in being so clearly present in education.
After all, they do not primarily sell computers. The hardware is only a bridgehead to the user data. Our youngsters are then shaped to accept the companies' technology, ideology, and worldview.
Productivity suites such as Microsoft Office and Google Workspace turn pupils in primary schools, who most dutifully carry out their school work in Word, Excel, and OneNote, into aspiring office workers who will secure the giant companies' business model for yet another generation.
In the Norwegian Official Report “Your privacy is our joint responsibility” this is explained as follows:
Through school, the children get used to using a particular operating system and user interface and the threshold for switching services becomes high. In this way, offering very cheap services to schools and kindergartens is an investment that can yield significant gains for the companies in a longer perspective (NOU, 2022, p. 138).
When pupils create personal accounts with OpenAI, the school has yet another player on the field that collects and capitalizes on the students' user data. If the school allows AI to write, collect facts, make presentations for students, and so on, we can imagine that the technology will be integrated into their thinking and production abilities to a degree that will eventually render them helpless without it.
Is it even possible to imagine a more effective way to make your product indispensable to the population?
When we also know how expensive the technology is to operate (data processing, databases, energy, etc.), it seems unlikely that OpenAI provides ChatGPT for free to be nice to us. Why is ChatGPT still free for users?
Naomi Klein writes how Silicon Valley's model is to launch its products with a paradigmatic effect. Users are bewitched by the "magic" the products can achieve, and very soon they are hooked. Now is the time for investors to get paid for their investment:
Then watch as people get hooked using these free tools and your competitors declare bankruptcy. Once the field is clear, introduce the targeted ads, the constant surveillance, the police and military contracts, the black-box data sales and the escalating subscription fees (Klein, 2023).
At the same time, the bot's users work to train it. Although the data processing is expensive, we do the groundwork for free for huge companies that raise money from investors and so-called venture capitalists. It was only when human feedback was integrated into the GPT technology that development took off.
By using AI in schools, we therefore take part in this joint effort on behalf of the wealthy.
System dynamics applied to school development
To get behind the usual logic in the debate about AI in schools, where techno-optimists are for it and techno-pessimists are against it, we should adopt a wider perspective.
We should see the school as a system that is part of systems integrated with other systems.
Why? Because all of life moves in concentric circles.
I am shaped by the social order. How does it affect me that we are experiencing an energy crisis, depletion of freshwater resources, and accelerating climate change, where heat waves, sea level rise, and uncontrollable forest fires are just some of the consequences? On top of all this come war and increasing geopolitical rivalry.
In the midst of all this, we keep hearing that the school must educate students who are equipped for tomorrow's working life. But what will tomorrow's working life require? Many believe it is the ability to retrain quickly. Others believe it will require solid tech expertise. Still others believe that what we will need are warm hands.
However, no one asks whether we will have the resources at all to maintain a working life anywhere near as profitable and stable as today's.
What is working life, if not a system woven into and surrounded by the rest of social life? And what is social life, other than a part of physical life, as shaped by the laws of nature? And what are the laws of nature, other than a part of both nature and what we call culture?
This is how systems thinking moves outward in spirals, where everything is connected to everything else. Given the prospect of collapse described above, for which we constantly see empirical evidence, are there perhaps other competencies that schools should give students than, to put it polemically, competing for grades?
In this way of thinking, I am inspired by Limits to Growth (LtG). LtG used computer simulation to calculate how five aspects of society (agriculture, pollution, population growth, industrialization, and use of non-renewable resources) would by mathematical necessity break the planet's limits, since these factors grow exponentially while the planet's resources are by definition finite.
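The core dynamic LtG identifies can be illustrated with a toy calculation, nothing like the actual World3 model; the growth rate and resource figures below are invented purely for illustration:

```python
# Toy illustration of the Limits to Growth dynamic: annual consumption
# grows exponentially while the resource it draws on is finite.
def years_until_depletion(initial_use, growth_rate, resource):
    """Count the years until cumulative consumption exhausts the resource."""
    use, consumed, year = initial_use, 0.0, 0
    while consumed + use <= resource:
        consumed += use
        use *= 1 + growth_rate   # exponential growth in annual consumption
        year += 1
    return year

# Even modest 3% annual growth empties a stock 100x the starting
# consumption in under half a century:
print(years_until_depletion(initial_use=1.0, growth_rate=0.03, resource=100.0))
```

The point of the sketch is the counterintuitive speed: doubling the growth rate does not halve the time to depletion, it compresses it far more, which is exactly the nonlinearity LtG's critics tended to underestimate.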
LtG created a stir; it received praise, but also a great deal of criticism. Yet as the essence of the report's predictions, especially the so-called standard scenario, is confirmed by empirical observations, the book appears ever more terrifying and visionary (Herrington, 2021).
Systems theory is a huge field of theory that I am nowhere close to having an overview of. There is a wealth of sub-theories, each of which has different ways of organizing systems and exchanges between them. What is important to note, however, is that systems theory is by definition interdisciplinary. In that way, systems theory can incorporate, process, and weave together the entire range of topics between the human, social, technological, and natural.
The development of systems theory has oscillated between universal, holistic models and specialized ones, for example about behavior (behaviorism), natural systems (ecology), and control and communication in machines (cybernetics).
The biologist Ludwig von Bertalanffy developed the General Systems Theory, where so-called "open" systems were discovered and explored with mathematical principles (Korn, 2023). Bertalanffy's thinking was "metabolic", in that he looked at how transactions between systems and environments simultaneously affected themselves and the whole (Van Assche et al., 2019). Within the social sciences, there are historical forerunners, among others Herbert Spencer and Émile Durkheim, who studied the interaction between individuals and society by examining how systems influenced each other (Gibson, 2023).
I am interested in systems theory because I was fascinated with Limits to Growth. Perhaps the fascination is also about how the authors dared to use scientific insights to say something clear about politics and society.
Much positive has happened since 1972, when the report came out, but it is also clear that the earth's resources have been greatly reduced, as the report predicted. Capital has also increasingly shifted to extracting more intangible resources, such as people's time and attention, as Naomi Klein and Shoshana Zuboff demonstrate.
Can a humanities-adapted variant of the systems model be used to investigate how and why the school system responded as it did to ChatGPT?
A humanistic and at the same time system-dynamic approach to school and AI cannot process data as is done in system dynamics proper. It is rather the theory's general insights, and some of its central concepts, that should form the basis of the analysis.
Donella Meadows, one of the main authors of LtG, believes that a more popular scientific, possibly humanistic, use of the theory is important for it to have an impact on society and politics.
Even the simplest ideas of system dynamics—stocks, flows, positive and negative feedback, the effect of delays, and the importance of nonlinearity—can help to clarify public discussion and improve public policy. They must be communicated without jargon, without mathematics, without loop diagrams (Meadows, 1989).
Some of the concepts Meadows mentions here seem very relevant to an analysis of the school system's involvement in Big Tech.
The term feedback loop can be used to understand how private technology companies have gained increasing influence in schools, first by implementing hardware (screens), which can then be fed with cascades of software, until AI can now act as the student's personal assistant and replace the teacher.
Nonlinearity is a core concept in the development of AI, where it is precisely the exponential growth of superintelligence whose consequences are impossible to foresee.
Delays are used in LtG to show how it is often too late to do something about a problem by the time you see its results: when you break a finite limit, it can still take some time before the effects show themselves in all their horror, as is the case with climate change.
The same delay mechanism applies in schools, where the results always come after the practice, so that PIRLS and PISA tests can do nothing about what has already happened if a whole generation of pupils has developed weak writing, reading, or even thinking skills.
Abrupt changes within the school increase the likelihood of effects that we cannot foresee, but which we will therefore not become aware of until after the effects have occurred.
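The delay mechanism can itself be sketched in code. In this toy model (all parameters invented for illustration) a controller adds to a stock based on measurements that are several steps old, and therefore keeps adding after the target has been passed:

```python
# Delayed feedback: the controller sees the stock as it was `delay` steps
# ago (delay >= 1), so it keeps adding inflow after the target is reached.
def simulate_with_delay(target, delay, steps):
    stock, history = 0.0, []
    for t in range(steps):
        # Measurement available at time t is `delay` steps stale.
        observed = history[t - delay] if t >= delay else 0.0
        inflow = 1.0 if observed < target else 0.0  # react to stale data
        stock += inflow
        history.append(stock)
    return history

peak = max(simulate_with_delay(target=10, delay=5, steps=30))
print(peak)  # the stock overshoots the target of 10 because of the delay
```

The overshoot equals the inflow accumulated while the stale measurements still showed the stock below target, which is the LtG argument in miniature: by the time the signal arrives, the limit is already broken.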
Stock is the system's content at any given time – in the case of the school system: students, tools, buildings, authorities, employees, and so on – while flow is input and output, where equal input and output give equilibrium, and are thus sustainable.
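Stripped to its bones, the stock-and-flow idea fits in a few lines of code (a minimal sketch with invented numbers; the "stock" could stand for anything from pupils enrolled to a school budget):

```python
# A minimal stock-and-flow model: the stock changes by (inflow - outflow)
# each time step; equal flows keep the stock in equilibrium.
def simulate(stock, inflow, outflow, steps):
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow   # net flow updates the stock
        history.append(stock)
    return history

# Equal inflow and outflow: the stock holds steady at 100.
print(simulate(stock=100, inflow=5, outflow=5, steps=10)[-1])

# A small persistent imbalance: the stock drifts without bound.
print(simulate(stock=100, inflow=6, outflow=5, steps=10)[-1])
```

Even this trivial model makes Meadows's point visible: sustainability is not a property of the stock's size but of the balance between its flows.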
Donella Meadows has popularized systems theory to elaborate for a wider audience how systems intertwine in all areas of life.
In the book Thinking in Systems: A Primer (2008), Meadows describes how world society, starting with the Industrial Revolution, has cultivated a reductionist approach to problems through analysis.
The modus operandi of the industrial and scientific community is to take a problem out of context and try to solve it separately.
Meadows believes that such analytical thinking has removed us from an intuitive and holistic way of understanding our surroundings, which is necessary in addition to the analytical way of thinking. When we solve a separate problem, there is not necessarily a balance in the system, since we often may have moved the problem to another location.1
Isolated problems are symptoms of an imbalance within the system, and therefore cannot be eliminated on their own: the problem is part of how the system works.
According to Meadows (2008), any system consists of three aspects: elements, internal relationships, and purpose. The elements are the easiest to observe about the system since they are often physical objects, but they can also be, for example, values.
It is when you see the connections between the elements that the system stands out. In school, this is easy to see: Public school documents, teachers, and students work together, like water rising from the roots of a tree and up into the leaves that glisten in the sun.
Systems exist on an infinite number of levels, where all systems are part of other systems: a cell, an organ, a human being, a citizen, an organization, and so on.
For a system to be in balance, the elements must be in a sustainable relationship with each other. For example, the system's purpose cannot be to sell an infinite amount of artificial fertilizer, if there is not an infinite amount of phosphorus - in which case the system is doomed to collapse.
The purpose is thus the salient point - but the purpose is often difficult to define, verbalize, and understand, partly because systems that are closely connected (systems within systems) can have different purposes - the teacher may be concerned with knowledge, while the student just wants to get through the school day, while the headmaster wants good figures for the school owner.
The school is a system within the larger social system, where public authorities and global companies are other systems that interfere with the school system.
How does the school's purpose compare to the purpose of OpenAI, or the Ministry of Finance's purpose of figures that confirm economic growth? Is there a mismatch, which creates an imbalance in the school system?
Here we are faced with the classic question of the school's purpose: Is it classical formation or education?
The conflict between these similar, yet very different, aspects of the school has been an integral part of the school system since the beginning. But where formation is an intangible cultural dimension, education is strongly linked to analytical rationality.
The dichotomy of formation versus education is reminiscent of the distinction between wisdom and knowledge.2 Wisdom is understood in that context as a sustainable way of thinking, where short-term goals are set aside if they conflict with long-term goals.
An indigenous myth tells how the forest's predators handed the key to the future to man after they had learned to master fire. At the handover, the animals said that now that man had become the most powerful animal, he had also the responsibility of ensuring that all life lived well.
That kind of wisdom, which often results in resistance to welcoming and implementing new technology, is still firmly rooted in some indigenous cultures. Among other things, Maori in New Zealand have objected to OpenAI using their language in GPT: "Data is like our land and natural resources" (Chandran, 2023).
In this perspective, knowledge is the opposite of wisdom. Knowledge is an instrument for achieving short-term goals such as material wealth and power. If we view education and training through this lens, a more formative approach becomes even more crucial in the uncertain situation that today’s children and youth face.
Those who are young now are the first generation who will most likely have no choice. They will have to learn to act sensibly, with long-term goals in mind, whether they want to or not.
Capitalist realism
The systems that are particularly relevant when trying to understand the influx of AI in schools are capitalism, represented by global technology companies, and the school system itself, which I have already tried to delineate.
The cultural critic Mark Fisher coined the concept of capitalist realism (2023). He claims that the ultimate expression of the phenomenon is captured in Margaret Thatcher's infamous slogan: "There is no alternative" (TINA).
Capitalist realism implies, as Fisher explains it, that even if capitalists are fully aware of the dark sides of capitalism, liberal democratic capitalism is still the only alternative, because all other models have failed.
In capitalism, the boss's supervision of the old industrial worker has moved into the worker himself, who must take care to fulfill sub-goals and motivate himself to perform more than satisfactorily; after all, no one wants to be the one who does not give their all.
In this neoliberal capitalism, flexibility is the watchword.
Does that sound familiar? Yes, this is a teacher's (but also a student's) everyday life. Is it at all possible to be more flexible than using the brand new technology of LLMs in teaching only days and weeks after it saw the public light?
We have allowed ourselves to be programmed by capitalist and neoliberal thinking to the extent that we no longer recognize it ourselves. It is thus unthinkable to question technological advances or bureaucratic impositions, according to Fisher.
That does not mean one cannot discuss them, but to reject them as such is impossible.
Criticism is an integrated mode of thought in capitalism, which we see when Mark Zuckerberg has to attend hearings in the US Senate, or when fossil fuel companies make advertisements with pictures of wind turbines and call themselves energy companies. Such criticism is a safety valve that prevents real political action. Thus the criticism maintains the illusion that this form of society is the only correct one.
This way of thinking has many implications in the school system, for example in the core element of "Critical approach to text", where students and teachers are trained in superficial and formulaic criticism, which precisely ensures that nothing changes.
Such a formulaic way of being critical is institutionalized in school to the extent that it would not be a surprise if, right here and now, hundreds of thousands of school students around the globe were working on the writing task: Write a text in which you discuss the advantages and disadvantages of ChatGPT.
As a natural consequence of this (sham) criticism in academia, AI literacy is a theoretical field in explosive development.
The competencies AI literacy is meant to develop include understanding the algorithms, the databases, the social consequences of AI, good communication with the help of AI, and so on.
What is not to be problematized, however, is the fact that it is completely impossible to understand the algorithms, the databases, and so on, because not even the engineers at OpenAI understand the algorithms: they develop themselves!
Capitalist realism is, according to Fisher:
an all-embracing atmosphere, which not only affects cultural production, but also the regulation of work and education, a kind of invisible barrier that puts a limit on thinking and action (Fisher, 2023, p. 47). (Note: I read the Norwegian translation of Fisher and have translated back into English; quotes may not be accurate.)
A central aspect of capitalist realism is that the neoliberal capitalist way of thinking has become naturalized in citizens to the extent that the ideology appears as a natural law. People no longer know that it’s ideology and norms.
Here it is appropriate to recall how the Norwegian Minister of Education, just a few weeks after ChatGPT was introduced to the public, declared in the country's biggest newspaper that "we must embrace new technology" (Falk, 2023b).
The credo is so embedded in capitalist realism that it appears as a law of nature.
Fisher writes that capital is by definition not sustainable.
Capital's 'need for a continuously expanding market', its 'fetishization of growth', means that capital is inherently opposed to sustainability (Fisher, 2023, p. 52).
He uses environmental problems as a prism to explain how capitalism carries with it an inherent collapse. Within the prevailing hegemony, it is only possible to understand the climate disaster as a simulacrum, shadow image, or poster, because taking in the actual implications of where we are going is too traumatizing.
Climate negotiations and demonstrations thus become a kind of bread and circuses that ensure the status quo. We see the same absurd logic play out when Sam Altman simultaneously launches GPT-4 and warns about the enormous damage the technology can cause, for example in terms of disinformation.
The way Altman and other key AI figures communicate also illustrates another of Fisher's points about capitalist realism: It has no responsible core.
Altman is not responsible if things go badly; he just works for his shareholders, and besides, if OpenAI does not develop AI, someone worse will.
Fisher uses an example from Campbell Jones to illustrate the social consequence of capital's pulverization of responsibility: the recycling of garbage. When everyone has a responsibility to recycle, as if this were a question above politics and ideology, it becomes obvious at the same time that it is an ideological position.
But the subject who is supposed to recycle, Jones argued, presupposes the existence of a structure that is not supposed to recycle: By making recycling "everyone's" responsibility, the structure shifts the responsibility onto consumers and makes itself invisible. (Fisher, 2023, p. 138).
We find the same ideology in the way teachers, leaders, and authorities relate to mobiles, screens, and, more recently, ChatGPT: It is constantly repeated that the teacher must find good, safe, and appropriate ways to use these technological tools.
In other words, the system is based on the individual having to solve challenges introduced by worldwide companies. Is that even possible?
Capitalist realism pours cascades of disruptive technology upon us, and as long as we continue to cultivate the idea that the individual is the one sacred entity, they get what they want.
All hands on deck
As long as we do not see the underlying ideology driving the changes in the school system, it will be impossible to resist Big Tech. Any victory, for example, a decision against the use of AI in a single school, is only a temporary setback for capital.
If we do not see the driving forces behind all the small and large changes in the school system, which all turn towards more use of digital systems, we are helplessly left to scratch at the surface, and whatever we do, it is to the benefit of capital.
As Fisher writes:
Nothing is political in itself. Politicization requires political agency capable of transforming what is taken for granted into something that can be acted upon (Fisher, 2023, p. 159).
I believe that systems theory, as described earlier in this article, can be an appropriate and useful way of turning Fisher's critical insights into an analytical tool.
As Meadows writes:
A change in purpose changes a system profoundly, even if every element and interconnection remains the same (Meadows & Wright, 2008, p. 17).
It is my contention that the school authorities, in advocating the use of LLMs in schools the moment they are launched, have lost sight of the deep purpose of the school system. Rather than holding on to the school system's original purposes of education, truth, and social mobility, the aim has become adaptation to technological innovations created by the world's largest companies. The saying goes that we have no choice.
But systems theory shows that elements, the connection between them, and the purpose of systems form intricate patterns that can be displaced. Systems can collapse under pressure from other systems, or exceed their own limits and thus become something other than we think they are. Therefore we have to fight back.
What will disappear when new digital technology is introduced on a large scale in schools at an accelerating speed?
The fear is that LLMs will be the final battle, after having had to cope with screens and mobiles for some years.
With LLMs, the school can lose its ability to be a community-supporting institution, a place for social equalization, the search for truth and education - even if (because?) this cannot always be measured in instrumental numbers.
References:
Chandran, R. (2023, April 10). Indigenous groups fear culture distortion as AI learns their languages. The Japan Times. https://www.japantimes.co.jp/news/2023/04/10/world/indigenous-language-ai-colonization-worries/
European Union Agency for Law Enforcement Cooperation. (2022). Facing reality?: Law enforcement and the challenge of deepfakes: An observatory report from the Europol innovation lab. Publications Office. https://data.europa.eu/doi/10.2813/08370
Falk, J. (2023b, January 15). Kunnskapsministeren etter robot-stunt: – Må omfavne ny teknologi. https://www.vg.no/i/69BVgO
Fisher, M. (2023). Den kapitalistiske realismen (A. J. Schnell & D. Vernegg, Trans.). H//O//F.
Gibson, B. (2023). Systems theory. Encyclopædia Britannica. https://www.britannica.com/topic/systems-theory
Herrington, G. (2021). Update to limits to growth: Comparing the World3 model with empirical data. Journal of Industrial Ecology, 25(3), 614–626. https://doi.org/10.1111/jiec.13084
Korn, K. C. (2023). General systems theory. https://managingresearchlibrary.org/glossary/general-systems-theory
Meadows, D. H. (1989). System dynamics meets the press. System Dynamics Review, 5(1), 69–80. https://doi.org/10.1002/sdr.4260050106
Meadows, D. H., & Wright, D. (2008). Thinking in systems: A primer. Chelsea Green Pub.
Stabil suksess. (2023). Utdanning, 6, 8–15.
Van Assche, K., Valentinov, V., & Verschraegen, G. (2019). Ludwig von Bertalanffy and his enduring relevance: Celebrating 50 years General System Theory. Systems Research and Behavioral Science, 36(3), 251–254. https://doi.org/10.1002/sres.2589
1. Since the Industrial Revolution, there has been a consistent mismatch between exponential growth and finite limits in many arenas. If we reduce emissions of greenhouse gases using solar panels, wind turbines, and battery technology, we will likely run out of rare earth metals and other limited resources.
2. David Bohm worked on understanding the relationship between wholeness and fragmentation, where holistic thinking involves a much longer time horizon than the one we modern people usually operate with, a long-term way of thinking traditionally associated with wisdom. Thinking holistically involves, for example, being able to imagine that the world you live in will endure into eternity, a thought very few can entertain today. Bohm was a bridge-builder between Eastern philosophy and Western rationality, with a unique platform for his thinking, as he stood with one leg in quantum physics and one in philosophy. Bohm's distinction between wholeness and fragmentation resembles the one we see in the terms wisdom and knowledge.