Welcome to Emerging Futures – Volume 141! Environments, The Extended Self, and AI...
Good Morning Clouds of Self-Reproducing Data,
This week, let's dive right in. First, we made a playlist to run simultaneously as you read this, and second, just for some meandering fun, we begin with a run-on sentence worthy of Thomas Bernhard’s autobiography:
Charles Goodwin (1943-2018), who was by profession a professor of Communication Studies – but in reality was someone who radically disrupted all of the neat specializations in the world of communications, linguistics, and semiotics to look at how a fundamental aspect of the ongoing activity of being human and using tools is what he called “co-operative action” (not to be mistaken for cooperation or collaboration – but something far more ubiquitous and fundamental) – anyway he credits much of his insight to working with his father who had a stroke after which he could only communicate with three words: “yes”, “no” and “and”. Goodwin’s starting point in his late magnum opus Co-Operative Action: Learning in Doing – Social, Cognitive and Computational Perspectives is his firsthand observation that “despite this he was a very good conversationalist”.
How could this be? What can you do with just three words? What could his father draw upon to be a good conversationalist?
First, as Goodwin goes on to explain (this video of his lecture is worth watching in full), he had a highly developed sense of language (before he lost most of his ability to speak), as well as the ability to tonally inflect these three words; additionally, he could make other non-word sounds (what Goodwin terms “prosody”). The richness of this makes one resituate the experimental work of Kurt Schwitters, Joan La Barbara, the scatting of Ella Fitzgerald, and the extended vocalizing of Tanya Tagaq. He also had his embodied gestural capacities, and, critically, the distributed resources of others and his environment (hence Goodwin’s concept of “co-operative action,” which we will say more about in a moment).
Our individual success at the action of communicating with the tool of speech is not reducible to our vocabulary or our personal speech. Speech is not a discrete activity but part of one's ability to do things as a competent inhabitant of a world.
There are a thousand ways to say yes or no or and – and given the importance of context in which they are said, their range of agency is even wider. There are then any number of non-word sounds one can make that qualitatively inflect the flow of the conversation; we do this all the time (and with more than just sounds—our body position, gestures, and expressions). But most interestingly and importantly, Goodwin makes the point that basic speech by one person is always distributed (co-operative).
What does this mean?
Goodwin’s father, “Chil,” would prompt others to query him through some context-relevant action that would lead to a question – for example, “What would you like for breakfast?” From this, they could build an interwoven action of co-operative meaning making via a highly sophisticated dialog that was astonishingly rich and gave each participant a robust agency to do things and act as a competent inhabitant of a meaningful community. (It is really worth watching the video; if you wish to jump straight to an example of this interaction, skip forward to minute 18.)
What is remarkable about what Goodwin is up to is that he is shifting what we focus on in what Chil is doing (and by extension, the rest of us) from individualized “speaking” and “communicating” to how individuals become and are “competent members with skills and knowledge required to see and act in the world in just the ways that make possible the ongoing accomplishment of the activities of that community.” And this activity is always co-operative action. Nothing is done purely alone or just with the resources that one individual possesses as a body.
This is as good a time as any to define what Goodwin means by “co-operative action.” First, the hyphen is there to distinguish this from the forms of cooperation that a sociologist might study, e.g., how people work together towards some mutually beneficial end; this is not what is meant by co-operative action. Rather, co-operative action is intended to cover all human actions, including violent disagreements. The hyphen additionally allows a focus on the fact that human action is an operation. This operation is one of decomposition and transformative reuse. Goodwin quotes Merleau-Ponty’s observation that “history is neither a perpetual novelty, nor a perpetual repetition, but the unique movement which creates stable forms and breaks them up.” All our actions work with existing things, which form “laminations” of a meaning-rich environmental substrate—or what he terms “sedimented landscapes for knowledge and action.” We take these givens, decompose them into components, and transformatively reuse them.
With speech, we begin with a substrate of existing phrases and a meaning rich environmental context; most often, we begin with the words already being employed in the conversation and operate on these. Breaking them apart and recomposing them in relevant ways. This brings us back to AI and what we have been looking at in the last few newsletters. Here, there is little difference between us and a “stochastic parrot," or AI and its large language models. “New [linguistic] structures for accomplishment of consequential action are progressively created by performing systematic transformative operations on what already exists.” None of us speak in a truly unique manner – we are working with existing material, (which is a big part of why AI can get speech and text so right.) BUT – equally importantly – none of us is making meaning or knowledge alone. Meaning making and the production of new knowledge are inherently distributed actions, requiring existing meaningful landscapes of sedimented practices and others.
“Co-operative action provides an alternative, quite general mechanism, for both accumulation and incremental change, one lodged within the interstices of mundane action itself. Not only does subsequent action include within its own organization materials created by predecessors, but it also transforms those materials in the ways required for adaptation to current circumstances. This is made possible by the ways in which participants not only attend to, but actively participate in, the detailed organization of each other’s action as it unfolds through time.” (Goodwin)
If we consider Chil in isolation, he is not a competent speaker, and has a grossly deficient vocabulary. His stroke has left him seemingly profoundly handicapped, with access to only three words. Because of this, if considered only as a solitary individual, he cannot effectively communicate. As an isolated individual, he appears as someone who just repeats, “no, no, yes, no, no, and no no yes…” (Here is a second interview with Goodwin talking with an advocate for those with aphasia that gets at some of the ramifications for how such a person might be diagnosed and treated if taken out of their effective ecosystem for action—something that could be true of any of us.)
Yet as the video shows and Goodwin demonstrates, Chil was anything but impoverished and handicapped – he was an astonishingly effective and even captivating communicator and storyteller because he built upon the distributed resources for action that were in dialogue with others. All action turns out to be co-operative. Different actors contribute distinct parts to the same action, and “relevant knowledge spans multiple actors”.
The fundamental mistake we make when we see Chil as being profoundly handicapped is that we are not correctly drawing the boundary of his identity (which must include his capacity to act). In this case, we are drawing the boundary of the individual incorrectly at the edge of their skin—their body.
Where should we be drawing the boundary?
Well, with Chil, it will include his family, long-term caregivers, and his environment. When we correctly understand the actual distributed, ecological nature of being a competent, active human being, we can no longer consider the individual as meaningfully distinct from an ecology.
While Chil is a unique case, what is not unique but applies to all of us is that we too need to be considered as ecological beings. The meaningful boundaries of each of us are inherently distributed and interwoven into and through a landscape of tools, practices, environments, and, critically, other humans. We are not beings whose being ends at our skin, intrinsically equipped only with internal resources that are potentially supplemented or augmented by some helpful outside resources. As Chil shows us so clearly, we are inherently embodied, embedded, and extended beings always involved in the co-operative action of creating and maintaining a shared world.
Traditionally, we separate language use, tools and tool use, systems of social organization, built environments, systems of learning, institutions, social ties, others, and habits. Each is studied and supported by very different fields and experts. But this is a mistake. Rather, all of these phenomena are “different manifestations of the distinctive ways that human beings build co-operative, accumulative action in concert with each other.”
The goal of a conversation is not speech, nor is the goal simply communication, nor is speech best understood as a display of “intelligence”; rather, it is part of the co-operative action of ongoing worldmaking, making and inhabiting an inherently meaningful and inherently distributed co-operative world lived in ongoing activity.
How does this have anything to do with AI? We want to introduce how a friend and colleague, Curtis Michelson, uses AI. Last week we asked you how you use AI, and Curtis was one of many of you who responded (thank you!).
This is part of his much longer and more detailed response on how he uses AI on a daily basis:
“Objective Listener Agent:
I have taken GPT on long walks when I am thinking through an issue. I use the voice input version and I tell it upfront before I start "hey gpt, after each thing I say, reply with 'okay' ". This way I can just input my thoughts while I walk and it's just storing them in a context window. I get back to the office, and then have it replay back my thoughts on the topic, and I give it several twists, and it's uncanny how it hears between certain threads and offers perspective
Interlocutor Agent:
A stronger version of the above is asking the AI to play the role of Socrates and really rigorously challenge my thinking. This is usually best done when I give it a 'method' of doing so. If I say, "in the way Socrates would challenge me", it's not bad at the Socratic method. It will push my thinking a bit. But I have learned that to supercharge the AI, I'll prompt it with a more specific mental hacking method, say, "walk me through my topic using de Bono's Six Hats". Then it will make sure I ask questions from the Green, the Red, the Black, the White, the Blue and the Yellow perspectives. You can do the same with so many other tools that it already distilled into its neurons, like SCAMPER, or Enneagram Personality Styles, and so on. Another potential way to interact with your book is, "push me to think like Jason and Iain, ask me questions the way they would".
Personalized Dialog Agent (it knows Curtis):
What I've done with these tools is pre-load them with 'me', as in, at the settings level, I've told them some of the axes I want to grind. They know I'm writing a book with Daniela and what the topics are, that I'm an innovation coach, and it knows my core hypotheses on what inhibits innovation and so on. This must tilt the results quite a bit, though I have not done a side-by-side comparison yet between the personalized and the vanilla.
Also for giggles, I commanded it to refer to me using my secret codename "Sir Serendip" and to sprinkle its responses with the phrase "I might be hallucinating but..." This really makes me smile when it comes back with all its verbosity. I kind of enjoy taking its hubris down a notch by forcing it to admit, it just makes shit up. ;)”
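For those who want to tinker, Curtis's "Objective Listener" pattern can be read as a simple structure: a standing instruction, a context window that only accumulates while you walk, and a replay step once you are back at your desk. Here is a minimal sketch in Python; the class, method names, and message format are our own illustration (loosely modeled on common chat-API message lists), not any particular vendor's API, and no model is actually called.

```python
# A minimal sketch of the "Objective Listener" pattern described above.
# Everything here is illustrative: in real use, capture() and replay()
# would send the context to whatever chat model you use.

class ObjectiveListener:
    """Accumulates spoken thoughts in a context window, then replays them."""

    ACK_INSTRUCTION = (
        "After each thing I say, reply only with 'okay'. "
        "Store everything; do not comment until I ask you to replay."
    )

    def __init__(self):
        # The growing "context window": the standing instruction first,
        # each captured thought appended after it.
        self.context = [{"role": "system", "content": self.ACK_INSTRUCTION}]

    def capture(self, thought: str) -> str:
        """Log one thought; per the instruction, the model only acknowledges."""
        self.context.append({"role": "user", "content": thought})
        return "okay"

    def replay(self) -> str:
        """Return the stored thoughts, e.g. once you're back at the office."""
        thoughts = [m["content"] for m in self.context if m["role"] == "user"]
        return "\n".join(f"- {t}" for t in thoughts)
```

The point of the sketch is the shape of the practice, not the code: the "listening" is just accumulation, and the perspective-offering only happens later, over the whole sedimented context.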
Our hope is that you can immediately draw powerful parallels to the story of Chil.
These uses of AI tools are not examples of some external “intelligent” being that is having an independent dialog with Curtis. And even if they were (like the individuals who help Chil carry on a conversation), they are not some thing/being that is corrupting or undermining the “authentic” Curtis.
Curtis is using these tools embedded within an environment and context as part of his inherently intra-woven and intra-dependent mode of meaning making, in a similar manner to Chil.
From this perspective, the concern regarding AI cannot be “the machines in gaining intelligence are replacing us” or “these non-intelligent machines are dumbing us down,” for both of these responses rest upon a false understanding of what it is to be alive, human, and intelligent. AI is doing many other things, and it is certainly changing how we inhabit each other’s actions. But it is not a question of replacement or denaturing. AI needs to be considered from within our fully intra-twined embodied, embedded, and extended lives that weave into and through each other.
The question is an experimental one – given that everything is dangerous, how can we invent new practices of inherently collective worldmaking that are liberatory?
We are crossing many qualitative thresholds today via our creative co-operative actions, and this question cannot be posed as if the context does not matter; context is, as Goodwin makes clear, a fundamental determining aspect of everything. We have much to be concerned about regarding our ongoing co-constructed environment; we are deep into a profound climate crisis and one of great inequalities. AI is being activated by states and massive corporations to surveil and control in ever more draconian ways (here is an important story about how Amazon's AI surveillance is being used to suppress workers).
But – AI is not an alien force undermining our individual autonomy, dumbing us down, or supplanting “us.” It is a very human part of how we are intra-woven and intra-dependent with others—our fellow beings, our environments, and our tools.
Our goal this week is not to end with a platitude about how AI “can be used for good or evil – it is up to us.” As if anything were neutral. Such a takeaway would be to miss the very logic of “co-operative action” and how we are trying to situate the question(s) of AI.
The propensities of the systems are emerging. We are perturbing and stabilizing these into emergent configurations, and, as Deleuze put it: “it is not a question of worrying or hoping for the best, but finding new weapons.” We need to “stay with the trouble,” as Donna Haraway puts it. And “staying with the trouble” requires a deeply engaged, collective, and creative set of experimental actions.
What might this mean? We will leave you with one suggestion for an enjoyable summer read: A Half-Built Garden by Ruthanna Emrys. This work is a speculative meditation on how co-operative action can entangle with everything from AI to new forms of technology to political systems to find a messy way to make more liberatory worlds in times of collapse.
Well, that’s it from us for the week. Next week, we will shift from AI to one of our next two innovation topics: constraints and worldmaking (which we promise will be extensions and continuations of what we are discussing in these last few newsletters on AI). A big thanks to everyone who reached out via email and conversation this week; it was very helpful and we look forward to further conversations!
(A final note: Today the Varela Symposium begins).
Till next week,
Keep Your Difference Alive!
Jason and Iain
Emergent Futures Lab
+++
📈 P.S.: If this newsletter adds value to your work, would you take a moment to write some kind words?
❤️ P.P.S.: And / or share it with your network!
🏞 P.P.P.S.: This week's drawings in Hi-Resolution
📚 P.P.P.P.S.: Go deeper - Check out our book which is getting great feedback like this: