Welcome to Emerging Futures - Volume 71: ChatGPT and the Blind Adventures of the Analog
Good Morning more-than-knowingly creative beings,
It is a beautiful morning of intense darkness and quiet. The cold is just right to have the doors open.
For us it now completely feels like the winter break is over and things are full steam ahead. We have quite a few interesting projects that we are now entering into. We are about to embark on an international green innovation intensive, as well as speaking and doing workshops at various conferences and events. One that is of great interest to us is in a few weeks: we will be speaking at the National Academy of Sciences in Washington as part of a daylong series of high-level discussions on Research and Innovation, and so we have been preparing by considering the state and level of innovation in the vast ongoing stream of the sciences.
In this regard, a very interesting paper was recently published in Nature making the argument that fundamental research has become “less disruptive over time”:
“We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions. This pattern holds universally across fields and is robust across multiple different citation- and text-based metrics. Subsequently, we link this decline in disruptiveness to a narrowing in the use of previous knowledge, allowing us to reconcile the patterns we observe with the ‘shoulders of giants’ view. We find that the observed declines are unlikely to be driven by changes in the quality of published science, citation practices or field-specific factors. Overall, our results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.”
There is quite a bit of interesting research in this area of measuring the frequency and type of innovation that is happening.
While we don’t have much of interest to say right now, we will be coming back to this as we concretize some thoughts on the matter. In the meantime, we are interested in your thoughts on the state of innovation in the sciences – if you have something to share, reply to this email – we would love to chat.
Over the last couple of months generative systems such as ChatGPT have gotten considerable attention. There are a huge number of really interesting experiments, but also considerable consternation about how new and troubling this technology is. We have little to add to either of these endeavors. In regards to the former, we are excited to experiment further but have not done nearly enough, and in regards to the latter, much of it comes from a place of mistaken human exceptionalism which we neither share nor believe has much to do with creative processes.
ChatGPT is a type of “stochastic parrot” – blind and without concern for meaning, and it beautifully works the probabilities to “parrot” out answers. This makes it a type of change-in-degree engine – it produces powerful synthetic variations of what exists. Such approaches inherently have difficulty with producing qualitatively novel outcomes of the kind we would recognize as radically “original”.
But, in some ways that is not a big deal – this is also what most of our thoughts and even actions are – we are combiners and synthesizers of existing phrases, tropes, styles, concepts, etc. that are not that original or that different in outcome from ChatGPT. Hence our answers converge – which makes these systems of potentially great value.
We, too, are fully immersed in ecosystems for the production of change-in-degree – development and variation – which in itself is not necessarily a bad thing…
While these particular language- and image-generating systems are very new, similar generative systems have actually been with us for quite some time – especially in the arts. The literary work of the OuLiPo movement is a great example of this. Some of the most innovative and interesting works, from Alison Knowles to John Cage to Zaha Hadid to Erin Manning, have been made with generative systems that go beyond human ideation and intention.
Looking at this longer history of the creative use of generative systems, there is a bigger point that often gets lost in the discussion: creativity has never been reducible to the human individual and their internal states of being. It has always been a more-than-human or even wholly non-human generative process. And these processes have patterns to them (such as exaptive processes in evolution) which are quite stable and which can be loosely reproduced.
The “blind” or “dumb” nature of such systems is quite important for innovation. Deleuze talks about the “blind probe head” of creative processes allowing for radical exploration and enactment of emergent novel spaces of the possible. Blindness is the radical not knowing that allows for forms of exploration that do not converge on already established patterns.
There is something really interesting in how systems like ChatGPT are “blind” in the sense that they have no idea of what they are making at the level of meaning. The system is just considering the statistical likelihood of which word most often comes next in a given context (based on a massive but limited dataset of examples). And while predicting next words is not new (see Grammarly or your favorite ineffective customer-support chatbot), what is different here is how ChatGPT carries the session’s inputs and outputs forward as a kind of “memory”, building upon them to generate the next statistically relevant words.
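To make this basic mechanism concrete, here is a minimal toy sketch (ours, not ChatGPT’s actual architecture): a small lookup table of word-transition probabilities stands in for a learned model, and the “memory” is nothing more than the running transcript that gets fed back in as context at each step.

```python
import random

# Toy word-transition probabilities standing in for a learned language model.
# A real system learns these statistics over a vast corpus and conditions on
# thousands of prior tokens, not just the last word.
TRANSITIONS = {
    "the": {"cat": 0.5, "dog": 0.3, "circuit": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "circuit": {"oscillated": 1.0},
    "sat": {"quietly.": 1.0},
    "ran": {"away.": 1.0},
}

def next_word(context):
    """Sample the next word from probabilities conditioned on the context."""
    options = TRANSITIONS.get(context[-1], {"...": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(prompt, n_words=4):
    """Append each new word to the running context and feed it back in:
    the 'memory' is simply the growing transcript itself."""
    context = prompt.lower().split()
    for _ in range(n_words):
        context.append(next_word(context))
    return " ".join(context)

print(generate("The"))  # e.g. "the dog ran away. ..."
```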
Digital systems provoke a number of questions about how open they can actually be towards novelty. Conventional digital creative methods like ChatGPT, because of their reliance on conceptualization and abstraction within closed predictive parameters, in reality work with a highly constrained design space that is neither blind enough nor open enough (this is not a criticism but an observation). Such a design space seems ideally suited for incremental forms of innovation. Perhaps the irony is that, in regards to creativity, ChatGPT is not radically “anti-human” or blind enough – it does not have a way to get out of existing historical patterns to radically explore an open space of possible meaning making.
Obviously we are only at the very beginning of what might emerge from all of these developments – the panicked Google founders have been rushed back to the office. Our interest is not in joining the prognosticators but in exploring questions around creativity and innovation with generative systems:
Is it possible to set up blind systems that explore far wider spaces of unintended possibility for emergent qualitative novelty? What would the conditions of such experiments be?
To ask this question we want to go back to some writing we did exploring this topic last year.
To see some possibilities in an alternative approach to the question of how to push semi-autonomous generative systems towards more radical, open and novel forms of exploration, it might be useful to pause and back up a bit.
Digital software systems have strongly predefined limits to their openness – they simply cannot escape the structural logic of how they were programmed. They cannot directly reach outside of the space of the program to remake the larger space of possibility. In this they are limited by the very symbol system they operate within.
In this approach of an ultimately closed symbol system, we can see the playing out of certain key aspects of the Western tradition in regards to creativity. This is a model where abstract ideas and immaterial conceptualizations take precedence over the open agency of distributed material systems.
The problem with this assumption/approach is that it makes radical creativity impossible – believing that creativity happens in ideas, abstractions, and symbol systems stops the new from emerging. To engage with creative processes effectively we need to engage prior to what can be codified. The emergence of the new precedes and exceeds what can be represented.
What would the emergence of the new look like if it was not placed in the context of ideas, concepts, and symbols? What would it look like to work with the world as having a real, open and independent agency? What would creativity look like if it did not begin within an abstract and closed symbol system?
In a very real sense the answer to this is any actually qualitatively new emergence – whether it be the emergence of flight by humans or non-humans, or any of the other countless examples. But let’s focus on deliberate generative examples, where researchers deliberately built processes for innovation that refuse human ideation and conceptualization.
There is a great example of this in the research practices of Adrian Thompson, a research fellow at the Centre for the Study of Evolution at the University of Sussex, where he leads a research group focused on the question: what can evolution do that human designers can’t? This is precisely the question that we wish to explore here.
Adrian puts it this way, “Artificial evolution, working by the systematic accumulation of blind variations, can produce designs that boggle the mind…”
Adrian argues that conventional creative methods, because of their reliance on conceptualization and abstraction, can only work with a highly constrained design space. (And this is perhaps one of the limits of systems like ChatGPT.) We do what we know – but could we set up a system to explore the vast space of possibility beyond what we can know, represent and symbolize? Underlying this method is a dynamic-systems approach to the spontaneous emergence of novelty that cannot be traced to any one source. (Here is a link to the article where Adrian Thompson writes up his findings.)
One key experiment that he carried out using techniques from evolution was to come up with new electrical circuits that could distinguish differing tones. (It is an example that we first learned about from the insightful book on artificial life, art, emergence and creative processes: Metacreation – a book still totally relevant to these discussions around digitality and radical creativity).
The goal was to develop an unconstrained approach to evolutionary creativity and to test it with a simple electronic circuit. To do this, blocking was used to limit the effect of standard key components, which would force the evolutionary exploration into novelty far from standard pathways.
The actual experiment developed a system based on evolutionary principles to breed new electronic circuits that could sense tonal differences. Each circuit is actually physically built, tested and scored on how well it performs the task of discriminating between tones. The “fittest” circuits are then physically “mated and mutated to generate the next generation”.
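As a rough sketch of this breed–test–select loop (ours, not Thompson’s actual code), the structure looks something like the following; the genome encoding, population size and mutation rate are illustrative assumptions, and evaluate_on_hardware is a hypothetical placeholder for the physical step of configuring and scoring each circuit.

```python
import random

# Rough sketch of an evolutionary (genetic) search loop. In the real
# experiment, fitness comes from measuring a physically instantiated circuit
# on the tone-discrimination task, not from a software simulation.
GENOME_LENGTH = 100      # illustrative: one bit per configurable element
POPULATION_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 1000

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def evaluate_on_hardware(genome):
    # Hypothetical placeholder: load the genome onto a real chip and score
    # how well the resulting circuit distinguishes the two input tones.
    return random.random()

def crossover(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=evaluate_on_hardware, reverse=True)
    fittest = ranked[: POPULATION_SIZE // 2]   # keep the best-scoring circuits
    offspring = [
        mutate(crossover(random.choice(fittest), random.choice(fittest)))
        for _ in range(POPULATION_SIZE - len(fittest))
    ]
    population = fittest + offspring           # breed the next generation
```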
It is a highly iterative method where unintended material capacities of the system (exaptations) emerge in the action of the circuit. This method stands in strong contrast to conventional circuit design, which works via software abstractions, where the complex physical behavior of actual parts is reduced to a binary diagram of logical operations (a symbolic representation). Why does this matter? The abstract ideational methodology of software-based research and testing “precludes the nonlinear complexities of feedback loops and the complex dynamics of the physical medium itself.”
This point is critical: matter has open relational possibilities which cannot be pre-specified. This relational radical openness is the ultimate well of qualitative difference.
Here is how Adrian reports on the actual experiment:
“For the first few hundred generations the best circuits simply copied the input to the output combined with various high frequency oscillatory components. By generation 650 progress had been made.”
“The entire experiment took 2-3 weeks, this time was entirely dominated by the five seconds taken to evaluate each individual… If evolution is to be free to exploit all of the components' physical properties then fitness evaluations must take place at the real timescales of the task to be performed…”
As the experiment was well underway things began to get interesting:
“...it is apparent from the oscilloscope photographs that evolution explored beyond the scope of conventional design. For instance the waveforms at generation would seem absurd to an electronics designer of either digital or analogue schools.”
At the end of the experiment, Adrian was confronted by a truly alien circuitry. Here is what the final circuit looked like:
What is really interesting is where and how the novelty emerged. The circuit was carefully examined, and it was discovered that most of the components were not connected and removing them had little to no impact on its functioning. But there were some unconnected components that could not be removed without impacting the performance (see the grey squares in the diagram below).
“When the final evolved circuit was examined, it was apparent that it functioned in an entirely unfamiliar way. After initial analysis, only sixteen of the one hundred cells in the programmable array were found to be involved in the circuit, and these units were connected in a tangled network. Further investigation delineated three interlinked feedback loops that appeared to make use of the miniscule timing delays to convert the incoming signals into a simple on/off response.”
The exact mechanisms involved finally defied explanation; the results could not be reproduced in simulation nor could the circuit be probed physically without disturbing its dynamics. Thompson and his colleagues described the circuit as “bizarre, mysterious and unconventional.”
Here we have a clear example of a process of creativity that both refuses and exceeds abstract symbol systems in its direct workings. The system, separate from human control and abstract symbol-driven design, generated the novelty via a generative process of tapping into the open nature of material relations. And this novelty was not something that could be explained or simulated with our go-to techniques of ideational abstraction (it could not be coded). The novelty was an emergent property of the physical system, outside and beyond our capacities of predictive ideation. Whitelaw goes on to explain:
“It seems that the design made use of highly specific physical qualities of the chip on which it was evolved. It had also evolved to operate accurately at a particular temperature.”
The physical properties of the substrate and environment that it was taking advantage of were not intentionally designed for any of these purposes (and thus never considered as being conceptually relevant to circuit design previously). These properties were unintended affordances that were co-opted and shaped into relevance by the emerging system via the non-linear process of system causation (the process of exaptation).
For Thompson, the process of innovation continued with research into ways of stabilizing the emergent and not-fully-explainable phenomena.
What is most striking, as Whitelaw explains, is the process of “adaptive, nonhuman engineering lodged firmly in a material continuum rather than in the finite, discrete domain of computation”.
This logic of creativity involves moving away from conceptual abstractions, representations, and closed symbol systems – away from ideas, intentionality, imagination, formal structural models, and abstraction working as a bounded system of knowing.
In this example we see an astonishing process that is:
Can this process stand on its own? Is it somehow fully separate from ideation and abstraction? Of course not. What is important is that we can see another possibility for approaching creativity and innovation, beyond models and processes that focus exclusively on what can be symbolized and represented. This experiment gives us a concrete view into the workings of a more relational, “worldly” approach to creativity – and hopefully, by doing this, gives us a powerful sense that we can effectively engage with creative processes differently.
Radical Creativity is not in our heads, or in our software far removed from worldly experimentation. It happens best when we believe in the world and join ongoing creative worldly processes. It happens best when we give “blind” agency to things, and environments. It happens best when we co-emerge with novelty. Which is not to say that software does not play an important role in all of this, or that generative systems like ChatGPT do not have a role.
Far from it – but, we need to believe in radical forms of possibility — that the new when it emerges will exceed what we know or even how we know…
While this example is quite a narrow one – focused on evolving a novel electronic circuit – it has implications that go far beyond the immediate scope of the experiment. The general processes utilized here are as crucial for working on the innovation of new circuits as for new forms of environmental change. They involve following the agency of things and relations.
Too often we confuse things with what they are for us. Things not only exceed our abstractions — they have active powers. It is important that we feel this deeply — for to do so will change who we are and how we work with creative processes.
And this opens us up to the wondrous expansive creative questions:
“What can it do? — And what else can it do?” We need to genuinely ask this question — not what can something do for us, or how does it fit into what we know — but what can it do? This is where “blindness” comes in. What it can do will exceed our abstractions, knowledge and even imagination — as the example of the evolution of circuits shows us so clearly.
So, while ChatGPT and similar systems are getting all of the attention at this moment, our curiosity and interest is more with other generative systems that get outside of the digital symbolic space of software and push “blindness” in more radical directions.
We are really curious about your own experiments. Are you developing more-than-human, materially engaged generative systems for qualitative novelty? If so, how? We would be really excited to talk about what you are up to.
OK! We will leave things there. Here’s to a good weekend and a week of becoming ever other-than...
NOTE: No artificial intelligence was used to generate this newsletter – only distributed more-than-human intelligences participated freely in this work process.
Till Volume 72,
Jason and Iain
Emergent Futures Lab
We’re How You Innovate
---
🧨 P.S.: We facilitate workshops and the accolades are overwhelming.
❤️ P.P.S.: Love this newsletter? We'd be grateful if you heap a bit of praise in the comments
🏆 P.P.P.S.: Find the newsletter valuable? Please share it with your network
🙈 P.P.P.P.S.: Hit reply - feedback of any kind is welcome
🏞 P.P.P.P.P.S.: This week's drawings in Hi-Resolution
📚 P.P.P.P.P.P.S.: Go deeper - Check out our book which is getting great feedback like this: