Welcome to Emerging Futures -- Volume 140! All Creativity is Artificial: On the Other Side of AI and Creativity...
Good Morning Extended Artifices and Artificers,
We are back from our brief sojourn into the mountains and the work of painting kitchens.
Scrambling along the high ridges of the White Mountains gave us a lot of time to not think as abstractly as usual and time to think more closely with our movements in relation to the immediate environment of affordances—that space of ongoing action that lives in the middle between the body and the rocks.
This is week three of our series on Artificial Intelligence and Creativity.
Last week, we made a brief argument on why Artificial Intelligence is not “intelligence” and why this is relevant to creativity.
Next week will be our last newsletter on AI for a while, and so we have a few questions for you:
You can email us a detailed response, or just send a brief note to schedule a conversation; however you wish to engage, we would really love to hear from you (and a big thanks to those who have already responded in various ways).
“To understand a thing rightly we need to see it both out of its environment, and in it, and to have acquaintance with the whole range of its variants.” (William James)
It would be helpful to say a bit more about how AI works as a process (now that we can see that it does not work the way cognition works, i.e. it is not “intelligent”).
AI is not some neutral movement in a progressive history of technology ascending to a predetermined telos of universal intelligence. It is, as everything is, emerging in and of a very particular context. Thus, it is neither accurate nor useful to talk about AI as if it had no context. Context is not some supplementary feature to reality; things are relational, and they are the outcome of all that they can enter into relation with.
Our current context, in broad terms, is a context of ever widening inequalities and environmental crises. We bring this up not to define AI nor to categorize it—or to demonize our present. Rather, as Deleuze put it: “it is not a question of worrying or hoping for the best, but finding new weapons.”
Given the context, what creative alternative practices can we—or might we—activate?
We have shifted from living in a world of stable identities to living in a world of dynamics and modulations. Things flow, and we flow. It is a world where every action generates massive amounts of data, from our phones tracking our movements, actions, and engagements to the flows of speculative capital. Nearly every object today, from cars to watches to doorbells to waste streams to bank cards to ocean currents, is producing ever more continuous dynamic data. And all of this is being utilized in predictive ways to modulate the behavior of systems. And in this, individuality also becomes a dynamic distributed network of data points within and across other distributed dynamic systems.
Critically, prediction is not the end goal; the goal is the modulation of flows—flows in and of bodies, flows in and of systems, from genes to collectives to markets. Our present is a new creativity in the modulation of materials and attention towards highly diverse ends (as long as they can be defined in economic terms).
And in this context, what of big data and AI? Catherine D'Ignazio, in her important new book “Counting Feminicide: Data Feminism in Action,” frames the question of AI and data in this manner:
“In the last few decades, mainstream Western society has developed a shocking degree of faith in the power of technology to solve problems. In the face of neoliberal austerity, technology appears to be a cost-saving fix. Governments are adopting automated systems to allocate social services, to determine who gets a loan, or to judge who should be imprisoned. Corporations are racing to automate whole industries, offer free services so they can sell off consumer data, and develop technologies that exacerbate gendered and racialized violence (e.g., facial recognition). These efforts are having devastating effects on minoritized populations—expanding corporate and government surveillance, concentrating wealth and power, exacerbating inequality, pillaging the planet, and fortifying mass incarceration.”
Surveillance; Data; Prediction; Modulation; Capitalism—ever looping…
And it is in this dynamic context that we find our contemporary version of so-called “Artificial Intelligence” (again, following Johannes Jaeger, it is better termed “Algorithmic Mimicry,” or AM). AI/AM is a system (an algorithm) that can handle massive amounts of unstructured inputs (big data) and independently learn (machine learning) to determine patterns and future likelihoods in that data. That it can work independently with dynamic and changing data is critical: given that reality is open and ever changing, any system that could work only with pre-specified criteria would never be able to keep up.
The basic algorithmic process used is a nested and iterative pattern-seeking logic that works from the general to the particular. This logic is called a “neural network,” as it is loosely inspired by how neurons in the brain interact.
“The contemporary neural network is built on a layered model of perception. At its most fundamental level are processing elements called input neurons, which work just as the brain’s neurons do, firing in response to a specific stimulus. In machine-vision applications, for example, these are tasked with detecting features like edges and corners, and are therefore responsible for the crudest binary figure-ground calculation: is there something in the image, or not?
If the answer is “yes,” these primitives will be passed on to a higher layer of neurons responsible for integrating them into coherent features. As neurons in each successive layer fire, a picture of the world is filled in, at first with low conceptual resolution (“this is a line,” “this line is an edge”), then with increasing specificity (“this is a shadow,” “this is a person standing in shadow”). And then an accumulation of finer and finer detail until the criteria for top-level recognition are triggered, and an output neuron associated with the appropriate label fires: this is Ricky standing in shadow. The algorithm has learned to recognize the subject of the present image by attending to statistical regularities among the thousands or millions of such images it was trained on. And so it will be for each of the higher-level objects a neural network can be trained to recognize: they must be built from the bottom up, in a cascade of neural firings. What gives the neural network its flexibility, and allows for it to be trained, is that the connection between any two neurons has a strength, a numerical weighting; this value can be modulated at any time by whoever happens to be training the algorithm. The process of training involves manipulating these weights to reinforce the specific neural pathways responsible for a successful recognition, while suppressing the activation of those that result in incorrect interpretations of an image. Over thousands of iterations, as the weightings between layers are refined, a neural network will learn how to recognize complex features from data that is not merely unstructured, but very often noisy and wildly chaotic, in the manner of the world we occupy and recognize as our own.”
(Adam Greenfield, Radical Technologies: The Design of Everyday Life)
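The weighted connections Greenfield describes can be sketched at toy scale: a single artificial “input neuron” learning the crudest figure-ground question (“is there something in the image?”) by nudging its weights after every wrong answer. The three-pixel “images,” labels, and learning rate below are illustrative assumptions, vastly simpler than any real machine-vision network:

```python
# Each "image" is three pixel brightnesses; label 1 = something present.
images = [
    ([0.0, 0.0, 0.0], 0), ([0.1, 0.0, 0.1], 0),
    ([0.9, 0.8, 0.0], 1), ([0.0, 0.7, 0.9], 1),
    ([0.8, 0.9, 0.9], 1), ([0.0, 0.0, 0.2], 0),
]

weights = [0.0, 0.0, 0.0]  # strength of each pixel-to-neuron connection
bias = 0.0

def fires(pixels):
    # The neuron "fires" when the weighted sum of its inputs crosses zero.
    total = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if total > 0 else 0

# Training: after each wrong answer, nudge the weights toward the
# correct response (the classic perceptron learning rule).
for _ in range(100):
    for pixels, label in images:
        error = label - fires(pixels)
        if error:
            for i, p in enumerate(pixels):
                weights[i] += 0.1 * error * p
            bias += 0.1 * error

print([fires(px) for px, _ in images])  # matches the labels after training
```

Real networks stack thousands of such neurons into layers and adjust their weights by backpropagation rather than this one-neuron rule, but the principle is the same: recognition is nothing but iteratively tuned numerical weightings.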
In practice, the training involved is hugely labor-intensive, highly predatory on existing materials (which has led to many ongoing lawsuits, including from The New York Times and various major authors, artists, and other producers), and requires massive amounts of energy and computing power.
There has already been considerable criticism of the limits of Large Language Models’ ability to generate novelty. Such an outcome is to be expected: LLMs are meant to simulate human conversation by delivering the statistically most likely next word, phrase, or text. This is the mimicry that Johannes Jaeger reflects upon:
“These systems do not “conceptualize,” “conceive,” or “create.” They “count,” “calculate,” and “compute.” The term “artificial intelligence” itself is a gross misnomer: the work in this field, as it currently stands, has nothing to do with natural intelligence. I suggest calling it algorithmic mimicry instead, which makes its nature explicit and helps to avoid category errors such as the ones described above. Or, when mimicry is well done and useful to human agents (which it often is), we could call it IA: intelligence augmentation. Algorithms are tools, admittedly more complex than a hammer, but still tools that we can use, if we choose to, to boost our own cognitive capabilities. An AI “agent” is never an agent on its own.
My argument supports those who see the dangers of AI not in the possibility of Artificial General Intelligence (which is not a real possibility right now), but in dangerous and deceptive applications of narrow algorithmic mimicry. LLMs are and remain in this narrow category, no matter how astonishing their feats of imitation. The problem of alignment is not one of adjusting ourselves to the presence of superior entities. Quite the contrary, we must recognize these algorithms for what they are: powerful tools to be adjusted to our human needs (see, for example, Werthner et al. (2022)). This is an urgent and tremendous practical problem that will have to be solved using societal and political means. It is also, to a large degree, a problem of design: algorithms must be clearly distinguishable from real agents because the two are not at all similar in kind.”
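The “statistically most likely next word” logic can be made concrete with a toy sketch. The miniature corpus here is an illustrative assumption, and real LLMs use learned neural weights over vast corpora rather than raw word-pair counts, but the generative move is the same: count what tends to follow what, then emit the most probable continuation.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast training text of a real LLM.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Count which word follows which: statistical mimicry at its crudest.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_next(word):
    # The single most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

# Generate by always emitting the statistically most likely next word.
word, out = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))
```

Note what such a system can never do by construction: it reproduces the patterns of its corpus. Nothing in the loop can say something the statistics of the training text do not already favor, which is exactly the tendency toward the generic discussed below.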
The existing reality and applications of AI do fit this narrow logic of mimicry and tend profoundly toward the generic; as with any novel tool, we are, as ever, making the new do the work of the old. But there is no reason one could not also use these tools in non-mimetic ways: as systems that generate patterns from open and chaotic sets of data, they already allow us (at least in theory) to explore ways of organizing data that go beyond habitual human patterns and entrenched logics.
Creativity—the process of producing the new, especially the qualitatively new—first requires an understanding of the old:
What patterns exist?
And then we can experimentally ask:
What other patterns might exist?
What other questions can be asked?
What other ways of being alive could be supported?
Could systems based on neural networks such as AI not be used in this manner?
To answer this question, let’s pivot from exclusively considering the logic of AI, and expand the scope of our inquiry to consider algorithmic creativity in general.
Creativity, it must be remembered, is not an exclusively human domain. And human creative engagements are equally not exclusively human. From the Big Bang to the dynamics forming the earth's tectonic crust to the evolution of bird flight – creativity is everywhere at all times.
These ubiquitous environmental forms of creativity are processes that require no ideation, planning, or thought whatsoever.
And our human practices of creativity are ones that “surf” the far more general sea of ubiquitous environmental creativity. And as such, our human creative practices are “more-than-human” – they are not coming from our heads and being imposed on a passive world. Rather, they co-emerge in a complex, active push and pull with multiple forces and systems.
These more-than-human processes and our hybridization with them leads to our creativity having a “machinic” character. This is a term we borrow from Gilles Deleuze to suggest the non-intentional character of conditions that, if met, set in motion a process.
We at Emergent Futures Lab have been exploring these spaces for activating machinic processes of creativity with clients via the development of ecosystems and processes that foster the spontaneous emergence of novelty. These practices often take advantage of algorithms and ecological systematization.
Such approaches to creativity are ultimately not that novel: algorithmic approaches to a more-than-human (and especially a more-than-individual) form of creativity have, in various forms, a long history in the arts, dating back well into the 1800s. And to contrast with the impoverished and generic quality of AI's algorithms, it is worth looking at the traditions of experimental language arts and their inherently processual and algorithmic qualities—from the work of Gertrude Stein to Kurt Schwitters to the astonishing algorithmic richness of the entire Oulipo movement.
Considering creativity as a process, one that could be aided by loose forms of systematization, has taken many forms. Perhaps the most pertinent to questions of AI are evolutionary algorithms and evolutionary computation.
Evolutionary algorithms are one approach in a larger field of methods that borrow from evolutionary biology to generate novelty.
Here, the goal is to evolve novelty in a manner that draws from evolutionary theory (especially the use of diversity and selection found in tree process models). The range of uses and outcomes is vast, from urban planning to product design to drug development to music.
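The variation-and-selection loop at the heart of an evolutionary algorithm can be sketched minimally. Everything specific here (the target string, the mutation rate, the population size) is an illustrative assumption; the point is the bare logic of diversify, select, repeat:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = "creativity"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Selection criterion: how many characters already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Variation: randomly change one character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from a diverse random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    # Selection: keep the fitter half, refill with mutated offspring.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

print(generation, best)
```

Real applications replace the toy string with far richer genomes (antenna geometries, molecular structures, urban layouts) and fitness functions (simulated physical performance), but the loop is the same.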
And now to come back to our previous question for AI computational techniques: Could systems based on neural networks, such as AI, not be used to generate novel patterns that exceed existing human patterns?
What evolutionary computing offers is a way to step outside existing human patterns of thinking and imagining. A wonderful example of this logic is NASA's evolved antenna: needing a very finely tuned antenna that exceeded the abilities of human engineers to design by hand, they turned to an evolutionary algorithmic process, which generated a unique form well outside the standard visual logics a human designer might consider.
While evolutionary computing offers a very powerful set of tools for innovation (and there is a huge variety of them), there is a limit to their utility in regard to creativity. The fundamental limit of these approaches in relation to novelty is that they rely on computing, and computing maintains a clear divide between software and hardware. Which is to say, the novelty remains in the software and has no capacity to change the fundamental structures of its environmental givens (the hardware). Evolutionary computing thus ignores niche construction and material agency, two additional critical pillars of contemporary evolutionary theory in its eco-evo-devo (ecological-evolutionary-developmental) form.
But these limits open up perhaps the most interesting loosely algorithmic spaces of machinic creative processes to explore, and these are analog and take full advantage of the relational agency of things. We have already written on these at some length. In Newsletter Volume 71: Chat GPT and the Blind Adventures of the Analog, we detail an alternative analog process of artificial evolution for creative outcomes. This is really worth reading carefully, as it offers insights into a strong alternative to the computational approach.
We followed this up in Volume 72: Problems, Emergence, Worlds, Chat GPT, and Creativity with a further exploration of the importance of developing a method that invents problems.
All this said, there is much to be gained from exploring machinic and loosely algorithmic processes of creativity, especially in how they exceed, refuse, and go other ways than our all too human patterns. This is something we can attest to based on our own work with clients over the years: novelty is machinic.
And we can see how these processes will use AI and big data, not for their “intelligence,” but because we can intentionally hack how such systems spontaneously learn patterns that go beyond our generic assumptions.
As we mentioned last week, here is something worth adding to your calendars – the 2024 Varela International Symposium: Sentience and Intelligence: AI, the More-Than-Human, and Us is happening in late May. As one of the key yearly symposiums on enactive practices, it is always interesting. This year, it is of special interest to us as they are turning part of their attention to the question of AI. The enactive approach to innovation has much to say on this as it strongly critiques and refuses the computational approach to sentience and cognition—without ruling out the sentience of artificial beings, etc.
The dates are May 24-26, and it is both virtual and in person by donation. Take a look and we hope to see you there.
Till then, stay with your artifice in all of its creative potential!
Keep Your Difference Alive!
Jason and Iain
Emergent Futures Lab
+++
📈 P.S.: If this newsletter adds value to your work, would you take a moment to write some kind words?
❤️ P.P.S.: And / or share it with your network!
🏞 P.P.P.S.: This week's drawings in Hi-Resolution
📚 P.P.P.P.S.: Go deeper - Check out our book which is getting great feedback like this: