
The Enactive Approach

In The Embodied Mind, we presented the idea of cognition as enaction as an alternative to the view of cognition as representation. By “representation” we meant a structure inside the cognitive system that has meaning by virtue of its corresponding to objects, properties, or states of affairs in the outside world, independent of the system. By “enaction” we meant the ongoing process of being structurally and dynamically coupled to the environment through sensorimotor activity. Enaction brings forth an agent-dependent world of relevance rather than representing an agent-independent world. We called the investigation of cognition as enaction the “enactive approach.”

In formulating the enactive approach, we drew on multiple sources—the theory of living organisms as autonomous systems that bring forth their own cognitive domains; newly emerging work on embodied cognition (how sensorimotor interactions with the world shape cognition); Merleau-Ponty’s phenomenology of the lived body; and the Buddhist philosophical idea of dependent origination, specifically the Madhyamaka version, according to which cognition and its objects are mutually dependent.

Since The Embodied Mind was first published, the enactive approach has worked to establish itself as a wide-ranging research program, extending from the study of the single cell and the origins of life to perception, emotion, social cognition, and AI. The foundational concept of the enactive approach, and the one that ties together the investigations across all these domains, is the concept of autonomy.

Varela, in his 1979 book, Principles of Biological Autonomy, presented autonomy as a generalization of the concept of autopoiesis (cellular self-production). The concept of autopoiesis describes a peculiar aspect of the organization of living organisms, namely, that their ongoing processes of material and energetic exchanges with the world, and of internal transformation, relate to each other in such a way that the same organization is constantly regenerated by the activities of the processes themselves, despite whatever variations occur from case to case. Varela generalized this idea by defining an autonomous system as a network of processes that recursively depend on each other for their generation and realization as a network, and that constitute the system as a unity. He applied this idea to the nervous system and the immune system, and he hinted at its application to other domains, such as communication networks and conversations.

Here’s how we introduced this idea in The Embodied Mind, just after quoting a passage from Marvin Minsky’s The Society of Mind, in which he writes, “The principal activities of brains are making changes in themselves” (p. 288):

“Minsky does not say that the principal activity of brains is to represent the external world; he says it is to make continuous self-modifications. What has happened to the notion of representation? In fact, an important and pervasive shift is beginning to take place in cognitive science under the influence of its own research. This shift requires that we move away from the idea of the world as independent and extrinsic to the idea of a world as inseparable from the structure of these processes of self-modification. This change in stance… reflects the necessity of understanding cognitive systems not on the basis of their input and output relationships but by their operational closure. A system that has operational closure is one in which the results of its processes are those processes themselves. The notion of operational closure is thus a way of specifying classes of processes that, in their very operation, turn back upon themselves to form autonomous networks… The key point is that such systems do not operate by representation. Instead of representing an independent world, they enact a world as a domain of distinctions that is inseparable from the structure embodied by the cognitive system” (pp. 139-40).

In recent work, Ezequiel Di Paolo and I define an autonomous system as an operationally closed and precarious system. We can depict the basic idea with a figure of circles and arrows (what follows borrows from a book chapter co-written with Ezequiel).

The circles represent processes at some spatiotemporal scale being observed by the scientist. Whenever an enabling relation is established, the scientist draws an arrow going from the process that is perturbed to the process that stops or disappears as a consequence. An arrow going from process A to process B indicates that A is an enabling condition for B to occur. Of course, there may be several enabling conditions. We don’t assume that the scientist is mapping all of them, only those that she finds relevant or can uncover with her methods.

As the mapping of enabling conditions proceeds, the scientist makes an interesting observation. There seems to be a set of processes that relate to each other with a special topological property. These processes are marked in black in the figure. If we look at any process in black, we observe that it has some enabling arrows arriving at it that originate in other processes in black, and moreover, that it has some enabling arrows coming out of it that end up also in other processes in black. When this condition is met, the black processes form a network of enabling relations; this network property is what we mean by operational closure.
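
To make the topological condition concrete, here is a rough sketch in Python (my own illustration for this post, not something from the book chapter). Enabling relations are encoded as a directed graph, an edge from A to B meaning that A is an enabling condition for B, and the operationally_closed helper (an invented name) iteratively prunes any process that lacks an enabling arrow arriving from, or departing to, the remaining set. What survives is the maximal set with the property described above.

    # Sketch only: enabling relations as a directed graph, where an edge
    # A -> B means "A is an enabling condition for B". The operationally
    # closed subnetwork is the largest set of processes in which every
    # member both receives an enabling arrow from, and sends an enabling
    # arrow to, some other member of the set.

    def operationally_closed(enables):
        """Return the maximal operationally closed set of processes."""
        nodes = set(enables) | {b for targets in enables.values() for b in targets}
        closed = set(nodes)
        changed = True
        while changed:
            changed = False
            for p in list(closed):
                enabled_by_net = any(p in enables.get(q, set()) for q in closed - {p})
                enables_net = bool((enables.get(p, set()) & closed) - {p})
                if not (enabled_by_net and enables_net):
                    closed.discard(p)  # p lacks an inward or outward arrow within the set
                    changed = True
        return closed

    # Invented example: A and B enable each other; C enables A but nothing
    # enables C; D is enabled by B but enables nothing in return.
    enabling_map = {"A": {"B"}, "B": {"A", "D"}, "C": {"A"}}
    print(operationally_closed(enabling_map))  # {'A', 'B'}

Only A and B satisfy the condition; C and D fall outside the closed network, which anticipates the points about external processes in the next paragraph.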

Notice that this form of “closure” doesn’t imply the independence of the network from other processes that aren’t part of it. First, there may be enabling dependencies on external processes that aren’t themselves enabled by the network; for example, plants can photosynthesize only in the presence of sunlight (an enabling condition), but the sun’s existence is independent of plant life on earth. Similarly, there may be processes that are made possible only by the activity of the network but that do not themselves “feed back” any enabling relations toward the network itself. Second, the arrows don’t describe every possible form of coupling between processes, but only relations of enabling dependence. Other links may exist too, such as interactions that have only contextual effects. In short, an operationally closed system shouldn’t be conceived as isolated from dependencies or from interactions.

Notice too that although the choice of processes under study is more or less arbitrary and subject to the observer’s history, goals, tools, and methods, the topological property uncovered isn’t arbitrary. The operationally closed network could be larger than originally thought, as new relations of enabling dependence are discovered. But it’s already an operationally closed network by itself, and this fact can’t be changed short of its inner enabling conditions changing, that is, short of some of its inner processes stopping.

A living cell is an example of an operationally closed network. The closed dependencies between constituent chemical and physical processes in the cell are very complex, but it’s relatively easy to see some of them. For example, the spatial enclosure provided by a semi-permeable cell membrane is an enabling condition for certain autocatalytic reactions to occur in the cell’s interior; otherwise the catalysts would diffuse in space and the reactions would occur at a much lower rate or not at all. Hence there is an enabling arrow going from the spatial configuration of the membrane to the metabolic reactions. But the membrane containment isn’t a given; the membrane is also a precarious process that depends, among other things, on the repair components that are generated by the cell’s metabolism. So there is an enabling arrow coming back from the metabolic reactions to the membrane. Hence we have already identified an operationally closed loop between these cellular processes. If the scientist chose not to observe the membrane in relation to the metabolic reactions, she would probably miss the topological relation between them. Operational closure—cellular life in this case—can be missed if we choose to put the focus of observation elsewhere, but it isn’t an arbitrary property if we observe it at the right level.
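
Encoded in the same toy format as the sketch above (with invented process names, and leaving out almost everything a real cell does), this loop looks as follows; the operationally_closed helper picks out exactly the membrane-metabolism pair, while the external supply of nutrients stays outside the closed network.

    # Membrane containment enables the metabolic reactions, and the metabolism
    # regenerates the membrane; nutrient influx enables metabolism but is not
    # enabled in return, so it remains an external enabling condition.
    cell_map = {
        "membrane_containment": {"metabolic_reactions"},
        "metabolic_reactions":  {"membrane_containment"},
        "nutrient_influx":      {"metabolic_reactions"},
    }
    print(operationally_closed(cell_map))
    # {'membrane_containment', 'metabolic_reactions'}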

Given how we’ve defined operational closure, various trivial examples of such closure may exist. For example, in cellular automata, the regeneration of an equilibrium state in each cell depends on the equilibrium states of the others, and vice versa, so the dependencies form a closed network.

We need an additional condition to make operational closure non-trivial, and this condition is that of precariousness. Of course, all material processes are precarious if we wait long enough. In the current context, however, what we mean by “precariousness” is the following condition: In the absence of the enabling relations established by the operationally closed network, a process belonging to the network will stop or run down. A precarious process is such that, whatever the complex configuration of enabling conditions, if the dependencies on the operationally closed network are removed, the process necessarily stops. In other words, it’s not possible for a precarious process in an operationally closed network to exist on its own in the circumstances created by the absence of the network.

A precarious, operationally closed system is literally self-enabling, and thus it sustains itself in time partially due to the activity of its own constituent processes. Moreover, because these processes are precarious, the system is always decaying. The “natural tendency” of each constituent process is to stop, a fate the activity of the other processes prevents. The network is constructed on a double negation: the impermanence of each individual process tends to affect the network negatively if it runs unchecked for a sufficient time, and it’s only the activity of the other processes that curbs these negative tendencies. This dynamic contrasts with the way we typically conceive of organic processes as each contributing positively to sustaining life; on the contrary, if any of these processes were to run unchecked, it would kill the organism. Thus, a precarious, operationally closed system is inherently restless, and in order to counteract its intrinsic tendencies towards internal imbalance, it requires energy, matter, and relations with the outside world. Hence, the system is not only self-enabling, but also shows spontaneity in its interactions due to a constitutive need to constantly “buy time” against the negative tendencies of its own parts.
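
The dynamical side of this picture can be caricatured in a few lines of Python (again my own toy, not a model from the chapter): each process’s activity decays on its own and is regenerated only in proportion to the activity of the process that enables it. With the mutual enabling in place, the decay is offset and the activities are sustained; remove the coupling, and each process runs down, which is the precariousness condition in miniature. The function name and the specific rates are invented for the example.

    # Two mutually enabling, precarious processes, integrated with simple
    # Euler steps: each decays intrinsically and is regenerated in
    # proportion to the other's activity.
    def simulate(coupling, steps=2000, dt=0.01):
        decay = 1.0                 # each process's intrinsic tendency to stop
        a, b = 1.0, 1.0             # starting activities of the two processes
        for _ in range(steps):
            da = -decay * a + coupling * b
            db = -decay * b + coupling * a
            a, b = a + dt * da, b + dt * db
        return a, b

    print(simulate(coupling=1.0))   # mutual enabling offsets the decay: (1.0, 1.0)
    print(simulate(coupling=0.0))   # network removed: both activities run down toward 0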

Operational closure and precariousness, taken together, are the defining properties of autonomy for the enactive approach. It’s this concept that enables us to give a principled account of how the living body is self-individuating—how it generates and maintains itself through constant structural and functional change, and thereby enacts its world. When we claim that cognition depends constitutively on the body, it’s the body understood as an autonomous (self-individuating) system that we mean. It’s this emphasis on autonomy that differentiates the enactive approach from other approaches to embodied cognition. It’s also what differentiates the enactive approach in our sense from the approaches of other philosophers who use the term “enactive” without grounding it in the theory of autonomous systems.

— Featured Image Credit for this post: Tobias Gremmler.


