
Context-Form Considerations for Ecological Developmental Systems

Designing Artificial Organisms

I’m going to discuss context-form relationships of organisms. Why? Well, to glean insight into designing artificial organisms, of course.

Yes, artificial organisms. With synthetic minds.

Is this more AI stuff, you might ask? Sure, it could be AI, but not “AI” as in ML (machine learning), which is how AI has come to be redefined in the past two decades. The stuff I’m talking about here could be very useful for a cognitive architecture approach to AI or AGI. Is this useful for robotics? It could be. More immediately, robots could be useful to this type of research, as could simulations of the “real world.”

The core concepts in this post are context and plasticity, which I think are key to cognition.

I don’t mean “context” in the linguistic way. I mean it in a more abstract way which can describe an environment—and maybe the history leading up to that point. But it could be even more abstract than a space-time environment. And this allows thinking of a particular creature—or its intelligence—as a “form” in that context and/or interfacing with that context. And that context presumably is also full of other creatures and is very dynamic.

Now, for “plasticity,” that could be neuroplasticity, like how the brain can adapt to injury. But instead let’s think of it more generally. And forget about brains. We can start with this definition [1]:

Cognitive plasticity has typically been defined in terms of the individual’s latent cognitive potential under specific contextual conditions. Specifically, plasticity has been defined in terms of the capacity to acquire cognitive skills.

A strange thing that I’ll also talk about is that the perceptions of both the observer and the organisms under observation are probably nonveridical. Veridicality in cognitive science means [2]:

the extent to which a knowledge structure accurately reflects the information environment it represents…The value of a knowledge structure lies in its ability to simplify an environment, yet simplification increases the probability of a false characterisation and hence error…

…Of course, the notion of veridicality depends on the tacit view that knowledge structures exist. In direct perception and its modern successor, radical embodied cognitive science, knowledge structures vanish — and so too does the notion of veridicality!

If that didn’t make any sense, then think of “nonveridical” as meaning you are always hallucinating, but in a highly functional way.

This notion ties into enactivism and affordances. These issues, along with mental content or the lack thereof, are all spaces to explore.

A Thought Experiment

Image by Pete Linforth from Pixabay

Imagine a simulated robot, Bot 1, that spends its life collecting virtual cubes. It loves cubes for reasons that are irrelevant to this thought experiment. It is attracted to them, picks them up and transports them back to its home. Cubes are never too far off and the cubes do not object to being picked up and moved about by the bot. Bot 1 fits reasonably well in its environment.

But then imagine we transport this bot to a totally different environment. In this other environment, let’s call it World 2, cubes explode when picked up, killing the bot whose pick-up reflex triggered the blast. Spheres, on the other hand, are quite harmless and provide an equivalent function for the bots (whatever that function may be). Bot 1 lives for all of five minutes in World 2 before being blown up while performing its natural action of picking up cubes.

Now imagine we have Bot 2 who is like Bot 1 but with an additional infant stage. In this initial stage, it learns to prefer either cubes or spheres by trial and error. However, trial and error with deadly traps would result in a lot of infant fatalities. So, in addition to the mental infant stage, we also have to provide it with a special context. This context, World 2-A, is a less deadly version of the adult World 2. Cubes and spheres are smaller; cubes only cause a temporary shock instead of instant death. After a certain time period or achievement status, Bot 2 is allowed to enter the adult World 2, where it successfully avoids big cubes and picks up big spheres instead.
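
To make this concrete, here is a minimal Python sketch of the thought experiment. Everything in it, including the `World` and `Bot` classes, the shapes, and the reward and penalty values, is a hypothetical illustration made up for this post, not a proposed implementation.

```python
import random

# Hypothetical sketch of the Bot 1 / Bot 2 thought experiment.

class World:
    def __init__(self, name, deadly_shape=None, shock_shape=None):
        self.name = name
        self.deadly_shape = deadly_shape   # picking this up kills the bot
        self.shock_shape = shock_shape     # picking this up only hurts

    def pick_up(self, shape):
        """Return (reward, alive) for picking up a shape in this world."""
        if shape == self.deadly_shape:
            return -100, False
        if shape == self.shock_shape:
            return -5, True
        return +1, True

class Bot:
    def __init__(self, preference="cube"):
        self.preference = preference       # innate, fixed preference
        self.alive = True

    def live_one_day(self, world):
        reward, self.alive = world.pick_up(self.preference)
        return reward

class DevelopingBot(Bot):
    """Bot 2: an infant stage learns a preference by trial and error."""
    def infant_stage(self, nursery, trials=20):
        scores = {"cube": 0.0, "sphere": 0.0}
        for _ in range(trials):
            shape = random.choice(list(scores))
            reward, _ = nursery.pick_up(shape)   # the nursery never kills
            scores[shape] += reward
        self.preference = max(scores, key=scores.get)

world2  = World("World 2",   deadly_shape="cube")
world2a = World("World 2-A", shock_shape="cube")   # infant-safe version

bot1 = Bot(preference="cube")
bot1.live_one_day(world2)
print("Bot 1 alive in World 2?", bot1.alive)       # False: blown up

bot2 = DevelopingBot()
bot2.infant_stage(world2a)     # plasticity plus a special infant context
bot2.live_one_day(world2)
print("Bot 2 preference:", bot2.preference, "alive?", bot2.alive)
```

The point is simply that Bot 1’s innate preference is only a fit for the context it was designed in, whereas Bot 2’s plasticity plus its staged context (World 2-A) lets it arrive at a fit for World 2.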

This little experiment highlights two issues which are at the core of making artificially intelligent entities:

  1. Context
  2. Plasticity

Context defines intelligence. Context is a necessary part of defining any form, including minds. The environment is what is most often thought of as context. This relationship in biology has been called the “mutuality of animal and environment” [3].

Context is not always environment, however, at least from a design stance. There are times when the environment itself could be a form to be modified by a researcher-designer of artificial intelligence. Also, we should consider that adaptations are not just relative to the environment—they can also be relative to different outcomes in identical environments [4]. So the requirements imposed on the form beyond basic survival are as important as the context and cannot be automatically derived from the context.

“Plasticity” as I’m using it here is not limited to the brain or its analogue (if any) in artificial organisms. I say plasticity includes, but is not limited to, developmental processes, learning and morphological changes. Plasticity determines whether a mis-fitting form will self-modify, or comply enough, to fit well in a particular context.

It also suggests that a developmental program would most likely involve both organismal and environmental mechanisms working in harmony.

Tear Down the Walls

Image by Enrique from Pixabay

Our perception of the world is a construction—and it is not necessarily a truthful one [5,6]. Perception is “motocentric” and interactive, driven by evolution to support motor control that operates in space and time [7]. Formed by our ecological niches, perceptions are actually more likely to evolve as nonveridical than veridical [8]. We could think of perceptions as user interfaces to reality [9]; like computer user interfaces, reality interfaces are useful because they hide complexity and make tasks easier to accomplish.

Affordances, originally defined by psychologist James J. Gibson in the 1970s, are another relevant notion from ecological perception [3]. An affordance is basically an interface that occurs between an animal and the world and allows the animal to perform an action. Animals, including humans, have evolved to develop natural perceptions that inform them about such affordances.

An interesting extension of Gibsonian affordances is that of “epistemic affordances,” which are varieties of information available to an organism in a particular circumstance [10]. Furthermore, humans might use action affordances in order to present themselves with previously-hidden epistemic affordances, thus removing uncertainties.
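
Here is a tiny, hand-wavy sketch of that idea: an agent can either commit blindly, or first perform an action (a peek) whose only purpose is to expose an epistemic affordance, namely information about a container, before committing. The payoff numbers and the peeking cost are assumptions for illustration, not anything from reference 10.

```python
import random

# Hypothetical sketch: using an action (peeking) to expose information
# about a container before committing to grab it.

def grab(container_has_reward):
    return 10 if container_has_reward else -10   # assumed payoffs

def act_blindly(p_reward):
    return grab(random.random() < p_reward)

def act_after_peeking(p_reward, peek_cost=1):
    has_reward = random.random() < p_reward       # peeking reveals the truth
    payoff = grab(has_reward) if has_reward else 0  # skip empty containers
    return payoff - peek_cost

def average(policy, p_reward, n=10_000):
    return sum(policy(p_reward) for _ in range(n)) / n

p = 0.3   # assumed prior probability that a container holds a reward
print("blind  :", round(average(act_blindly, p), 2))        # about -4.0
print("peeking:", round(average(act_after_peeking, p), 2))  # about +2.0
```

Under these made-up numbers, spending a little effort on the epistemic action pays for itself by removing the uncertainty before the costly action.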

In a related approach that intertwines intelligence and action, it has been shown that a robot can learn object categories via a series of sensorimotor interactions (grasping, moving, shaking, flipping and dropping a block on/in the object) [11].

An example that ties affordances into enactivism: humans don’t see flatness in isolation from seeing something that affords certain possibilities for sensorimotor interaction like climbing upon it [12]. As long as one does not assume that affordances are for truth instead of utility, then the concept is compatible with perception as an interface.

Perception as an interface may be unsettling, but at least we have a well-defined location for that interface—or do we? Are mental interfaces like this purely in the brain? Embodied cognition [13], including enactivism [14], would have us question these assumptions. Studying perception and all of cognition may require shifting the interface boundaries to other locations in mind-body-environment systems.

This does not make biologically-inspired artificial intelligence efforts easier. But it does point out some key aspects that AI researchers should observe and use for design, namely interfaces and context. I’ve already mentioned enactive cognition, but I’m not committing to that flavor of embodied cognition. Indeed, the approach put forth here is to tear down the walls and take a step back. One can then start exploring and start designing within the full spectrum of potentially mind-related spaces. For instance, by incorporating at least simulated situated embodied agents into a research program, one can study a landscape of embodied cognitive architectures ranging all the way to non-embodied cognition. Of course, an exploration may find that certain areas are not cognitive at all, thus giving support to the relevant philosophy camps. But erecting the walls ahead of time limits what can be explored.

Systems can be redefined depending on what point of view or design method one uses. The form in lifeforms, including the proposed artificial ones, has context just like the form-context relationship in design disciplines. This boundary between form and context is yet another wall that can be torn down; it should then be temporarily restored at different places.

Another dimension along which a sufficient research system could explore is mental content. Radical Enactive (or Embodied) Cognition (REC) rejects content from basic minds [15]. Basic could mean many things—at some point in development human minds become more than basic. But what other animals have content? What about behavioral (or Nouvelle AI) robots [16,17,18]? Are those similarly basic minds, just ready and waiting to be upgraded to more advanced content-bearing status? I think these questions belong to an exploration in yet another landscape, which is what kinds of cognition have what kinds—or no kind—of content. And where—perhaps when is equally important to ask—exactly is that content stored or distributed? And as the approach here requires, we have to ask—what are the interfaces? It might be the case that, to be meaningful, content must be interpreted by some mental processes (be they computational or otherwise) which depend on internal or external interfaces.

I talk of artificial organisms, but acknowledge that they may not be discrete objects aside from my observation and description of them as such. What we’re after are minds, which again could be thought of in terms of objects, but may be more usefully observed with other, more amorphous, layers of interpretation. Interactions and system dynamics views will probably be useful, but those would typically also involve objects, albeit possibly alternative objects, e.g. mind is this system of non-mental objects interacting.

We are on shaky ground now—we don’t know what aspects of cognition depend on what spaces and we don’t know what content is truthful, if there even is content. How can we proceed to make artificial cognition? One part of the answer is to design and explore in the larger multiverse of spaces that are available once the walls have been torn down. Another part of the answer is introspection. One problem with biological cognition is that we don’t have direct psychological introspection abilities into other minds. But with artificial systems we can build in special introspective access. The difficulty there is in the multiplicity of interpretations. Researchers definitely have to beware of the “hermeneutic hall of mirrors” [19].

Once we step back and enlarge the space of possible designs, we can then choose or discover other reasons for leaning towards one end of a spectrum or the other.

One method for the design of complex systems involves diagramming areas of tightly coupled interactions [20]. In that light, it may be that although intelligence and consciousness are partially external, the densely interacting forces within the nervous systems of organisms are the centers that are most complex and mind-like. Whatever the case, I advocate a framework that can support and compare a wide variety of systems in various spaces.

We also may be guided by the key exploration spaces in biology. Anthropologist Melvin Konner proposes the levels of evolution, maturation, socialization and enculturation. Furthermore, the generation of living systems at each of those levels might be describable by a general algorithm of variation, self-organization, challenge and selection [21].

In this post I’m dealing primarily with maturation, namely developmental processes. The essences of ecological developmental processes are [22]:

  • Design and development in both phylogenetic and ontogenetic spaces.
  • Iterative growth, in which information systems are modified without breaking existing functionality.
  • Holistic systems, taking into account embodied and environmental aspects of cognition.

As for self-organization, that may not be a matter of designing an organism itself. Self-organization could be just an observer-applied stance, not an inherent property of anything.

Self-organization is independent of the observer, but it is everywhere—one could describe any dynamical system as self-organizing or self-disorganizing depending on the context. However, it is not necessarily useful to interpret or model all systems as self-organizing; most natural systems fit well into a self-organization view whereas most artificial systems do not [23].

So although self-organization may not be a property of minds in the normal sense, it is still of interest: if artificial organisms can be created that fit the self-organization description as naturally as biological systems do, then we have more confidence that the artificial organisms are a kind of life involving biological-like mental activity.
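
As a crude illustration of the observer-applied stance (inspired by, but not taken from, reference 23), one could describe a run of a system as self-organizing when the Shannon entropy of its observed, discretized states decreases over time. The discretization and the choice of variables are the observer’s, which is exactly the point.

```python
import math
from collections import Counter

# Observer-applied check: does the entropy of the observed states decrease?

def shannon_entropy(states):
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_self_organizing(early_states, late_states):
    return shannon_entropy(late_states) < shannon_entropy(early_states)

# Toy example: a system whose observed states settle into a small attractor.
early = ["a", "b", "c", "d", "a", "c", "b", "d"]   # spread out
late  = ["a", "a", "a", "b", "a", "a", "a", "a"]   # concentrated
print(looks_self_organizing(early, late))          # True, under this description
```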

Modular Developmental Mechanisms

Image by 🌸♡💙♡🌸 Julita 🌸♡💙♡🌸 from Pixabay

There are many developmental concepts at work in nature, and plenty of space for researchers to explore. Here I will only discuss a few general concepts:

  • Stages of growth, both programmed for the organism and for its environment.
  • Programmed physical change.
  • Programmed mental change, which may be tied to physical change.

Here these will be simplified and viewed primarily in terms of interconnected modules. An artificial organism can be composed entirely of modules (each of which is at a much larger scale than individual cells). This composition can happen by accrual, but each addition or subtraction may require integration that changes existing modules. Existing functionalities or competences should generally be maintained.

It is evident that there are a few principal types of integration of new modules during accretion:

  1. The first type consists of modules isolated from each other, but within the same body. I.e., the new module does not directly talk to other modules. However, if it uses shared resources, be they purely internal or external-facing sensors or actuators, there will have to be an arbitrator in the network at some point, which in turn is itself a module. Thus what seems to be type 1 may in fact be type 3.
  2. A variation of type 1 could be a mental module which is completely isolated from other mental modules, but which does in fact communicate with other mental modules via external information. An example would be stigmergy, in which animals communicate with each other by modifying and observing the environment.
  3. The third type consists of modules which are interfaced internally (and optionally via external means as in Type 2). The connection can be direct or indirect.

These kinds of integration apply to the context where there is a form called the body, with the context being the environment that the body is “in.” Other form-context situations could be used and would have potentially different accretion-integration issues—however, the principal concept remains: a newly integrated module either connects directly to other modules or it does not, and it must affect some context.
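
Below is a hypothetical Python sketch of these integration types. The `Module`, `Arbitrator` and `Environment` classes are illustrative stand-ins, not a proposed architecture: the arbitrator shows how supposedly isolated type-1 modules that share an actuator collapse into type 3, and the environment marks show type-2 coupling via stigmergy.

```python
# Hypothetical stand-ins for the three integration types described above.

class Environment:
    def __init__(self):
        self.marks = []          # shared, externally observable state

class Module:
    def __init__(self, name):
        self.name = name
        self.links = []          # type 3: direct internal connections

    def connect(self, other):
        self.links.append(other)

    def leave_mark(self, env, mark):     # type 2: stigmergy
        env.marks.append((self.name, mark))

    def read_marks(self, env):
        return [m for (who, m) in env.marks if who != self.name]

class Arbitrator(Module):
    """Shared-resource access turns 'isolated' type-1 modules into type 3."""
    def __init__(self, name, actuator):
        super().__init__(name)
        self.actuator = actuator
        self.requests = []

    def request(self, module, command):
        self.requests.append((module.name, command))

    def resolve(self):
        if self.requests:                 # trivial policy: first request wins
            _, command = self.requests.pop(0)
            self.actuator(command)
        self.requests.clear()

# Usage: two "isolated" modules end up coupled through the arbitrator.
env = Environment()
arb = Arbitrator("motor-arbitrator", actuator=lambda cmd: print("motor:", cmd))
m1, m2 = Module("cube-seeker"), Module("obstacle-avoider")
arb.request(m1, "forward")
arb.request(m2, "stop")
arb.resolve()                    # only one command reaches the shared actuator
m1.leave_mark(env, "cube at (3, 4)")
print(m2.read_marks(env))        # type-2 coupling via the environment
```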

We should also consider changes that are not explicit additions of new modules. There could be changes analogous to refactoring in software engineering in which modules are mutated and/or connections changed. Also, an iterative change may be the removal of a module or connection.

Some of these modular mechanisms bear resemblance to brain development phenomena, such as growth by accruing cells, pruning (cell death during development) [24] and forming connections. However, the biologically-inspired approach I’m talking about here abstracts away details of cellular mechanisms. Modules at a larger granularity than cells put us in the realm of psychological agents—one could imagine this system of modular development creating a society of mind [25]. Some cellular mechanism concepts, such as differentiation, may nevertheless be useful in that realm in the future.

I do not presently have a good solution for automatically integrating modules such as these. However, the concept of iterative development in itself makes the problem much easier. As long as one has a working, running system and adds—or subtracts—one module at a time, there is a good chance of integrating the module into the system. And at least one knows at what point the system failed horribly and can go back and try in a different way. Incremental development could even be used in the design process of an individual module. Iterative developments can be nested.
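
As a sketch of what that loop might look like, assuming a toy model of the organism-environment system and hypothetical competence tests, one could apply one change at a time, re-run the checks for existing competences, and roll back whenever something that used to work now fails:

```python
import copy

# Hypothetical sketch of iterative (piecemeal) integration with rollback.

def integrate_iteratively(system, changes, competence_tests):
    """system: mutable model of the organism; changes: callables that mutate it;
    competence_tests: callables returning True if a competence still works."""
    for change in changes:
        snapshot = copy.deepcopy(system)
        change(system)
        if all(test(system) for test in competence_tests):
            print("kept:", change.__name__)
        else:
            system.clear()
            system.update(snapshot)       # roll back to the last working state
            print("rolled back:", change.__name__)
    return system

# Toy usage with a dict standing in for the organism-environment system.
def add_grasping(s): s["grasp"] = True
def add_broken_vision(s): s["vision"] = None          # breaks a competence
def can_collect(s): return s.get("grasp", False)
def can_see(s): return s.get("vision", True) is not None

organism = {"vision": "basic"}
integrate_iteratively(organism, [add_grasping, add_broken_vision],
                      [can_collect, can_see])
print(organism)    # grasping kept, the broken vision change rolled back
```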

There is still one glaring problem: How does one obtain an appropriate working, running artificial organism-environment system to begin with? Well, one way is of course to start small. But then we have the problem of whether the small seed is too arbitrary to be useful down the road. And what if the developmental mechanisms are not appropriate? There are no easy answers to these questions. That is why I suggest, for now, approaching these issues as design problems, in addition to making liberal use of iterative (aka piecemeal) development approaches.

Designing Artificial Organisms

Design disciplines such as architecture, industrial design and interaction design routinely tackle these kinds of problems—complex systems with a large number of interactions between elements and an obscure context.

Of course, designers often fail, but design methods do succeed sometimes. It is an old notion that design problems are about making a good fit between a form and its context [20]. An architectural forerunner to design methods described form-context systems as ensembles of “misfits,” where a misfit is an incongruity or other force which prevents good fit between a form and its context. Thus one of the design threads in that approach is a negative process of removing the misfits.

Somehow a designer has to take abstract requirements and synthesize a form. But they can observe the systems and environments that already exist and that the form is intended to fit into. And perhaps examine similar systems that already exist.

One kind of design analysis suggests creating a hierarchy of systems and subsystems based on the best guess of the interconnectedness of misfits. As architectural patterns, they work when they are about systems of forces with dense internal interaction and sparse external interaction [20].

These patterns resolve the inner conflicts of buildings, towns and—why I’m discussing it here—possibly natural systems, resulting in a lifelike “quality without a name” [26]. A pattern is basically a relationship between a certain context, a problem (a system of forces that occurs often in that context) and a solution (a spatial configuration that enables self-organizing resolution of those forces).

From the perspective of form-context fit, there are three major problems for creating synthetic minds:

  1. How to produce an initial ensemble (or system of patterns).
  2. How to maintain equilibrium over time and changing context.
  3. How to develop that ensemble to be polymorphic to a range of future possible contexts.

Number 3 is the crucial problem for making ensembles that are cognitive in an interesting sense, for instance at the level or style of mammalian intelligence.

Polymorphism is a term that comes from biology but was long ago also adopted by computer science [27]:

In programming language theory and type theory, polymorphism is the provision of a single interface to entities of different types or the use of a single symbol to represent multiple different types. The concept is borrowed from a principle in biology where an organism or species can have many different forms or stages.
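
A tiny illustration of that programming-language sense, mapped onto developmental stages (the stage names and behaviors are hypothetical):

```python
# One interface, stage-specific behavior: callers never care which stage it is.

class Stage:
    def act(self, thing):
        raise NotImplementedError

class InfantStage(Stage):
    def act(self, thing):
        return f"poke the small {thing} and note what happens"

class AdultStage(Stage):
    def act(self, thing):
        return f"collect the {thing}" if thing == "sphere" else f"avoid the {thing}"

def live(stage: Stage, thing: str) -> str:
    return stage.act(thing)      # polymorphic call through the single interface

print(live(InfantStage(), "cube"))
print(live(AdultStage(), "cube"))
print(live(AdultStage(), "sphere"))
```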

I talked a bit about polymorphism in mental development in a previous blog post.

Fortunately, when using virtual worlds, a researcher-designer has potentially immense control for experimenting with form and context. Artificial life can be designed so that certain variable systems can be removed from the ensemble quickly and cleanly. For instance, once a line between form and context is chosen, one or the other can be held constant. This allows the researcher to observe the expression of the other over time, or to make experimental changes to it.

The overall approach is to grow and develop entities and their minds. Sure, evolving artificial “life” has been done many times before. But I’m talking about making that much bigger and with considerations for the underpinning philosophy. Give them (the artificial organisms) physical (or virtually physical) interaction tasks, which has also been done before but only in very limited ways. Increase the time frames and evolution depth. Try developing them at an even more basic level of how to deal with objects—what is enabled by this form in this context, and how can it develop and learn affordances?

Affordances

I mentioned Gibsonian affordances earlier. Here’s a newer definition:

An affordance is what a user can do with an object based on the user’s capabilities.

As such, an affordance is not a “property” of an object (like a physical object or a User Interface). Instead, an affordance is defined in the relation between the user and the object: A door affords opening if you can reach the handle. For a toddler, the door does not afford opening if she cannot reach the handle.

An affordance is, in essence, an action possibility in the relation between user and an object. [28]
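
That relational definition is easy to state in code. A minimal sketch, assuming made-up agent and door properties:

```python
from dataclasses import dataclass

# An affordance as a relation, not an object property: the same door affords
# opening to one agent and not to another.

@dataclass
class Agent:
    reach_cm: float

@dataclass
class Door:
    handle_height_cm: float

def affords_opening(agent: Agent, door: Door) -> bool:
    return agent.reach_cm >= door.handle_height_cm

door = Door(handle_height_cm=100)
adult, toddler = Agent(reach_cm=200), Agent(reach_cm=70)
print(affords_opening(adult, door))    # True
print(affords_opening(toddler, door))  # False: no affordance for this agent
```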

Affordances give insight to the ideas here, but what about in the other direction? Could this approach demonstrate evolution of affordances in artificial creatures?

After all, we are starting out the organisms—or what appears to us as organisms—with a physical interaction task. This may be the wrong kind—or too arbitrary—but further experiments will show what happens with a wide variety of biologically-inspired form-context interactions.

And we are starting with simple innate bodily movements in the world. Evolutionary developmental processes could keep incrementally building up the cognitive architecture—or what appears to us to be a cognitive architecture—to turn these basic object interaction phenomena into affordances which in turn may become the building blocks of knowledge.

Likewise, basic motions and architectural aspects of affordances may become conceptual metaphor mechanisms [29] which also may become building blocks of knowledge. And perhaps these architectures will illuminate issues of content in basic minds, or lack thereof.

Conclusion

Image by Brian Penny from Pixabay


The truth may be out there, but probably not in a creature’s perception. This should be taken into account when trying to make artificial organisms and biologically-inspired cognitive architectures. The observer-designer is also interacting via perceptual interfaces. There is a wide range of potentially-cognitive form-context landscapes that can be created and experimented on. The iterative development approach is useful in both the researcher-observer-designer space as well as in developmental system ontogeny space.

It is not expected that any artificial organism will at any time be a blank slate—for that would mean it was unfit for its environment. The point is to always strive for a fit between form and context. And that may mean design and development of one and/or the other.

Weirdly, if a design shifts focus completely to context, then the context and form swap—in the case of cognitive architectures, that would mean, for instance, that an environment was the form and the cognitive architecture was the context. This has in the past been described as the distinction between adapting a system (e.g. a car) versus modifying an environment (e.g. the road surface) [4].

Regardless of what phylogeny space it was instantiated from, an artificial organism should be produced to be intelligent enough to survive in the contextual ranges it was designed for. The context also has requirements, such as providing a series of stages for a developing organism.

The future holds plenty of exploration into the design of artificial organisms, with analysis and synthesis experiments. The framework and modular developmental ideas introduced here are merely humble beginnings. I think perspectives and theories like this are necessary to make advances in Strong AI and/or AGI and/or a resurgence of the overlapping field of ALife. Aside from implying substrate independence, my ideas—which are largely a synthesis of concepts from many disciplines—are not mainstream AI thinking right now.


1    Willis, S. L., & Schaie, K. W. (2009). Cognitive training and plasticity: theoretical perspective and methodological consequences. Restorative neurology and neuroscience, 27(5), 375–389. https://doi.org/10.3233/RNN-2009-0527
2    Dawson, M.R.W & Medler, D.A. Dictionary of Cognitive Science. Feb 2010. http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/V/veridicality.html
3    Gibson, J.J. (1986). The Ecological Approach to Visual Perception. Lawrence Erlbaum Associates, Hillsdale, N.J.
4    Bowlby, J. (1982). Attachment. Basic Books, New York.
5    Hoffman, D.D. (2012). The construction of visual reality. In: Blom, J.D., Sommer, I.E.C. (eds.) Hallucinations Research and Practice, pp. 7–16. Springer, New York, NY.
6    Sollberger, M. (2008). Naive realism and the problem of causation. Disputatio 3(25), 1–19.
7    Churchland, P.S., Ramachandran, V., Sejnowski, T.J. (1994). A critique of pure vision. In: Koch, C., Davis, J.L. (eds.) Large-Scale Neuronal Theories of the Brain, pp. 23–60. MIT Press, Cambridge, MA.
8    Mark, J., Marion, B., Hoffman, D.D. (2010). Natural selection and veridical perceptions. Journal of Theoretical Biology (266), 504–515.
9    Hoffman, D.D. (2009). The interface theory of perception: Natural selection drives true perception to swift extinction. In: Dickinson, S., Leonardis, A., Schiele, B., Tarr, M. (eds.) Object Categorization: Computer and Human Vision Perspectives, pp. 148–166. Cambridge University Press, Cambridge, UK.
10    Sloman, A. (2011). Varieties of meta-cognition in natural and artificial systems. In: Cox, M.T., Raja, A. (eds.) Metareasoning: Thinking about Thinking, pp. 307–322. MIT Press, Cambridge, MA.
11    Griffith, S., Sinapov, J., Sukhoy, V., Stoytchev, A. (2012). A behavior-grounded approach to forming object categories: Separating containers from noncontainers. Transactions on Autonomous Mental Development 4(1), 54–69.
12    Noë, A. (2004). Action in Perception. MIT Press, Cambridge, MA.
13    Shapiro, L. (2007). The embodied cognition research programme. Philosophy Compass 2(2), 338–346.
14    Varela, F.J., Thompson, E.T., Rosch, E. (1993). The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, MA.
15    Hutto, D.D., Myin, E. (2013). Radicalizing Enactivism: Basic Minds without Content. MIT Press, Cambridge, MA.
16    Brooks, R.A. (1999). Cambrian Intelligence: The Early History of the New AI. MIT Press, Cambridge, MA.
17    Connell, J.H. (1990). Minimalist Mobile Robotics: A Colony-Style Architecture for an Artificial Creature. Academic Press, Boston.
18    Jones, J.L., Seiger, B.A., Flynn, A.M. (1999). Mobile Robots: Inspiration to Implementation (2nd ed.). A.K. Peters, Natick, MA.
19    Harnad, S. (1990). Lost in the hermeneutic hall of mirrors. Journal of Experimental and Theoretical Artificial Intelligence 2, 321–327.
20    Alexander, C. (1971). Notes on the Synthesis of Form. Harvard University Press, Cambridge, MA.
21    Konner, M. (2010). The Evolution of Childhood: Relationships, Emotion, Mind. Belknap Press of Harvard University Press, Cambridge, MA.
22    Kenyon, S.H. (2013). An Ecological Development Abstraction for Artificial Intelligence. Papers from the 2013 AAAI Fall Symposium.
23    Gershenson, C. (2007). Design and Control of Self-organizing Systems. Ph.D. thesis, Vrije Universiteit Brussel.
24    Oppenheim, R. (1991). Cell death during development of the nervous system. Annual Review of Neuroscience 14, 453–501.
25    Minsky, M. (1986). Society of Mind. Simon & Schuster, New York.
26    Alexander, C. (1979). The Timeless Way of Building. Oxford University Press, New York.
27    Wikipedia contributors. (2023, July 18). Polymorphism (computer science). In Wikipedia, The Free Encyclopedia. Retrieved 13:18, September 17, 2023, from https://en.wikipedia.org/w/index.php?title=Polymorphism_(computer_science)&oldid=1165926454
28    Interaction Design Foundation. UX Daily: Affordances. Retrieved Sept 2023. https://www.interaction-design.org/literature/topics/affordances
29    Lakoff, G., Johnson, M. (1980). Metaphors We Live By. University of Chicago Press, Chicago.

