Welcome to Emerging Futures -- Volume 139! Life, Intelligence and Artificial Intelligence...
Good morning, precarious adaptive beings in meaningful non-computational becoming,
In our entangled ecosystems of ongoing daily life, we are in holiday mode this week. It is a brief break for us. Starting Wednesday, Jason is painting his kitchen and Iain has borrowed a van—really, an old compact school bus that was converted into a basic camper—to head up into the White Mountains to scramble with rocky rain clouds.
We are writing from a rocky beach in Connecticut as the rain clouds begin to move in and edge out the sun. Waves lapping, the buzz of insects, birds and the occasional car. Sea salt in the air and a power station at the edge of the horizon. Soon we will be off to find some oysters, but for now it is a good place to consider life, intelligence, and artificial intelligence.
This week we are moving into the second part of our series on AI.
Last week, we introduced the topic by laying out six "disclaimers."
With each of these disclaimers, we offered a brief introductory argument and a series of links to our own and others' work on these topics. As we get further into this series, we may come back to some of these points to go deeper and skip others. Given this, we encourage you to go back to Volume 138 and follow those links, as they offer a rich set of resources.
This week we want to offer a brief meditation on intelligence and the inherent relation of intelligence to creativity.
As we made clear last week, Artificial Intelligence has nothing to do with intelligence. This is not a criticism of Artificial Intelligence; for starters, it would simply be helpful to have quite a different name: "Algorithmic Mimicry," for example. This would be both more accurate and far less confusing. This is the argument Johannes Jaeger makes in his excellent article, Artificial Intelligence is Algorithmic Mimicry.
But it is worth reflecting more deeply on intelligence in relation to AI. Why? Because many leading AI researchers are making some variation of the claim that AI is, or soon will be, smarter/more intelligent than humans. If we are right that Artificial Intelligence has nothing to do with intelligence, then a serious misunderstanding of intelligence is in wide circulation.
And this misunderstanding can already be seen in the Turing Test, first devised in 1950 by Alan Turing (who originally called it "the imitation game"). The Turing Test was designed to determine whether an artificial system was actually "intelligent." In this game, a conversation is held between a judge and two hidden participants—one an artificial system and the other a person. If the judge cannot distinguish between the two, then the machine is said to be intelligent—i.e., "the machine can think."
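The structure of the test is easy to sketch in code (a minimal sketch of our own, not from Turing's paper; the `judge`, `human`, and `machine` callables are hypothetical stand-ins):

```python
# A toy rendering of the imitation game: a judge exchanges text with two
# hidden participants and must guess which one is the machine.
import random

def imitation_game(judge, human, machine, questions):
    """`human` and `machine` map a question (str) to an answer (str);
    the judge only ever sees text, behind anonymous channels A and B."""
    roles = {"A": human, "B": machine}
    if random.random() < 0.5:           # hide which channel is which
        roles = {"A": machine, "B": human}
    transcripts = {
        label: [(q, answer(q)) for q in questions]
        for label, answer in roles.items()
    }
    guess = judge(transcripts)          # judge names "A" or "B" as the machine
    return roles[guess] is machine      # True: the machine was unmasked
```

Note what the setup quietly assumes: the only evidence the judge ever weighs is linguistic output.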
But why should the manipulation of linguistic symbols be exemplary of, let alone define, intelligence?
A major part of the problem is the common assumption that our thinking/intelligence is computational.
Computer systems are purely symbolic systems. Putting it simply, this means that they operate on bits of predetermined and pre-specified information. This logic has been used to model our thinking as the processing of signals supplied by the senses into representational mental content that we then manipulate. And as we shall see, this is decidedly not what we do.
But how does any of this relate to intelligence? How do things even become symbols? How do things become something abstract that we can manipulate (if that is what we actually do)?
The first wave of AI, dating back to the 1950s, tried to solve this problem of how we manipulate representations of reality by building, inside the computer, complete models of an external reality and all its possible interactions, so that the system could respond to any circumstance. This confused the map with the living, dynamic territory—an impossible situation, because one can never pre-specify all possible emergent circumstances. By the 1970s, this approach was widely seen as a dead end.
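To make that brittleness concrete, here is a minimal sketch in the first-wave spirit (our own toy illustration, not any historical system): a hand-built model of a tiny world whose states and responses are all enumerated in advance.

```python
# First-wave stance: model the world in advance as a fixed set of states
# with hand-written rules. Anything not pre-specified simply fails.

WORLD_MODEL = {
    ("door", "closed"): "grasp handle, turn, pull",
    ("door", "open"):   "walk through",
    ("door", "locked"): "retrieve key, unlock, pull",
}

def respond(thing: str, state: str) -> str:
    """Look up the pre-specified rule for a (thing, state) pair."""
    try:
        return WORLD_MODEL[(thing, state)]
    except KeyError:
        # No resources for circumstances the designers did not enumerate:
        # the brittleness that sank the first wave.
        return "NO RULE: cannot respond"

print(respond("door", "closed"))        # grasp handle, turn, pull
print(respond("door", "painted shut"))  # NO RULE: cannot respond
```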
Second wave AI—developing in the eighties with research into artificial neural networks and becoming our present-day AI of machine learning (ML) models—avoids the original problem by assuming intelligence to be a matter of pattern recognition and correlation discovery in massive amounts of data. The problem of intelligence was no longer to model and abstractly manipulate all states of reality, but to find relevant patterns and responses in the data.
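The second-wave stance is just as easy to caricature in code (again our own toy sketch, with invented data): no world model at all, only correlations with previously labeled examples.

```python
# Second-wave stance: classify by proximity to patterns in pre-labeled
# data. The "knowledge" is nothing but correlation with past examples.
from math import dist
from statistics import mean

# Invented training data: (height_m, width_m) of openings, pre-labeled.
LABELED_DATA = [
    ((2.0, 0.9), "door"), ((2.1, 0.8), "door"), ((1.9, 1.0), "door"),
    ((1.2, 1.5), "window"), ((1.0, 1.2), "window"), ((1.4, 1.6), "window"),
]

def centroid(label: str) -> tuple:
    """Average position of all training examples carrying this label."""
    points = [x for x, y in LABELED_DATA if y == label]
    return tuple(mean(axis) for axis in zip(*points))

CENTROIDS = {label: centroid(label) for _, label in LABELED_DATA}

def classify(features: tuple) -> str:
    """Pick the label whose training centroid is nearest: pure pattern
    matching, with no sense of what a door *does* in a life."""
    return min(CENTROIDS, key=lambda lab: dist(features, CENTROIDS[lab]))

print(classify((2.05, 0.85)))  # door
print(classify((1.1, 1.4)))    # window
```

Notice that the labels, the meaning, had to arrive from somewhere outside the system.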
But here in second wave AI, the problem is not all that different: it still begins with the needs of a pure symbol system. It is important to recognize that "data" is not a neutral, pre-existent quality of reality. Data is an achievement – an outcome of significant human activity. To recognize something as a point of data called a "door," for example, is not first and foremost an act of simple sense-data interpretation. It is a profound and profoundly engaged iterative set of actions, mainly tacit and all deeply embodied, that gives rise to a way of being alive in which a "door" has a richness and relevance that exceeds any easy definition.
Thus, with Artificial Intelligence, the achievements of intelligence (sense-making) are now taken as the basis of intelligence. And in doing so, we have lost touch entirely with the practice of meaning making itself.
Today, to implement such a vision of AI, we rely on heavily supervised learning situations in which huge numbers of highly exploited humans label the training data in advance as meaning something, so that these machine learning systems can find relevant patterns.
In doing so, these systems encode the biases that are always inherent in data; after all, data is the outcome of what was relevant in a particular way of life. Machine learning models thus pick out and perpetuate existing histories, norms, and stereotypes.
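A toy example (ours, with entirely invented data) of how faithfully a supervised model replays the skew of its labels:

```python
# A model fit to human-labeled history can only re-enact that history.
from collections import Counter

# Invented "historical" hiring decisions, used as ground-truth labels;
# the record happens to favor applicants from school A.
HISTORY = [
    ({"school": "A"}, "hire"), ({"school": "A"}, "hire"),
    ({"school": "A"}, "hire"), ({"school": "B"}, "reject"),
    ({"school": "B"}, "reject"), ({"school": "B"}, "hire"),
]

def fit(history):
    """'Learn' the majority label per feature value: whatever pattern
    the labels contain is the pattern the model will enforce."""
    votes = {}
    for features, label in history:
        votes.setdefault(features["school"], Counter())[label] += 1
    return {school: c.most_common(1)[0][0] for school, c in votes.items()}

print(fit(HISTORY))  # {'A': 'hire', 'B': 'reject'}: yesterday's norm, automated
```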
But, what is the crux of the problem with computational models inherent in the base logic of AI? Edwin Hutchins says it best:
“The physical-symbol system architecture is not a model of individual cognition. It is a model of the operation of a sociocultural system from which the human actor has been removed.”
We are confusing computation with embodied sense-making, which is the basis of thinking and intelligence.
What are we trying to make sense of when we wish to understand “intelligence”?
We need to step back from high-level forms of sentience and intelligence. We are trying to come to terms with basic sentience. But sentience is not a program running inside a system. Sentience is what a living being does to be alive – it is both the form and content of being a living being.
To be alive is to be a precarious autonomous agent that cares—it cares because it is precarious—it might not make it—why?—because its life is not just given.
A program, on the other hand—a large language model, for example—is just a routine; it runs or it does not. But there is no care; there is no "it" that cares. Sure, the person who coded the routine might care if something goes wrong, but the routine itself does not.
To be minimally sentient—minimally intelligent—is to be non-neutral: to care about the absolutely real risks of one's engagement with the surrounding environment.
A living being cares because its life is not just given; it needs to adapt: as it is changed by its environment, it needs to change its environment. This adapting, both internally and externally, to a changing world is both sentience (a caring response) and creativity (ultimately not a rote response). This loop of co-making in a precarious and non-pre-given circumstance is "enaction" – making a path in walking. Intelligence is making a path in walking that keeps one alive. Being alive is always, even in its most minimal forms, both being non-neutral – caring – and being creative – world- and meaning-making.
At the core of this creative activity is action, monitoring, and response—a co-shaping of self and environment. This is sense-making/intelligence. Here a living being is not parsing predetermined objective data, for no such thing exists for a living being. Meaning or relevance is what must be created in unique action. This action is sensed, monitored, and felt for novel potentials, and it is only as these are stabilized as some "thing" that they become something that could be considered "data." We must remember that before something "is" an identity, a data point, it is only a possibility: What can it do?
This active feeling of working with “what can it do?” is only possible because of its non-indifference (care). A living being can only feel because it is non-indifferent (it cares). A living being is a creative being because it is non-indifferent to its precarious circumstances.
Intelligence is sense-making. And here sense-making must be understood in its most literal sense: in changing/making the self and environment, a precarious autonomous living agent is giving rise to sense. Sense does not happen as a reflection on action; it is the action. Intelligence is a practice of making, and making is always a making of sense.
The problem is that AI wants us to start with the late representational and symbolic achievements of intelligence (sense-making). And to the degree we acquiesce to this false starting place, we remake our world to fit what AI can do.
We can see this both with Large Language Models (LLMs) and driverless cars; we have made a world that begins to be preconfigured for computational systems, with clear states, relevant rules, and delimited conditions. But as William James put it so well, now so long ago, “What really exists is not things made but things in the making.”
As our lives are lived as sense-making, sense-creating beings, almost all of life is necessarily under-defined, relational, and context-sensitive. We bring ourselves and a world forth in action, coming into novel qualities, capacities, and as yet undetermined possibilities. None of this can actually be fully pre-specified. But we are now busily making a world that fits AI—that can be pre-specified—that can be computed as a symbol system. The physical environments we design and the cognitive environments we inhabit are increasingly suited to how AI parses data. We are becoming our misunderstanding...
“Cognition [intelligence] is not about transposing a world of predefined significance into the inside of an agent. It is about agents moving within the world and singly or collectively changing it in ways that are significant according to the forms of life they enact” (Di Paolo, Cuffari, De Jaegher).
Now, there is nothing to say that an artificial system could never do this—but machine learning models (AI) do not.
We need to put aside Turing and the absurd pronouncements that AI is, or is on the verge of becoming, sentient. It is interesting, but none of it has anything to do with sentience. Intelligence is elsewhere. And critically for us, it is an inherently creative elsewhere.
Well, this newsletter started on the beach, then detours happened—Walmart to fix the van—still not quite fixed—not even sure what fixed might mean beyond “satisficing”—it gets us further down the road. And down that road are some great oysters and meadows to camp. Now in that meadow, the night brings the cold dampness of the ocean, and it is time for sleep.
Have a great week actively poised precariously in and of “things in the making” that exceed easy definition.
Till Volume 140,
Keep Your Difference Alive!
Jason and Iain
Emergent Futures Lab
+++
📈 P.S.: If this newsletter adds value to your work, would you take a moment to write some kind words?
❤️ P.P.S.: And / or share it with your network!
🏞 P.P.P.S.: This week's drawings in Hi-Resolution
📚 P.P.P.P.S.: Go deeper - check out our book, which is getting great feedback!