
The Mind is Not an Etch-A-Sketch

(It’s a Helicopter)

A helicopter is an assembly of forty thousand loose pieces, flying more or less in formation.

—unknown

The mind is a kluge.1

But it runs in real time in real environments and manages to survive.

Mimicking some aspects of the natural klugification of cognition could be a way to make Strong AI. And this is a way that’s not in the mainstream of research right now.

Boeing CH-47 Chinook helicopter [Tomás Del Coro from Las Vegas, Nevada, USA, CC BY-SA 2.0, via Wikimedia Commons]

Phylo-Onto / Evo-Devo

The crude division of all human attributes into ‘inherited’ and ‘acquired’ is excusable but quite unreasonable. Even in the simple models of behaviour we have described, it is often quite impossible to decide whether what the model is doing is the result of its design or of its experience. Such a categorisation is in fact meaningless when use influences design, and design use.

—W. G. Walter2


After 14 years of continuous conditioning and observation of thousands of animals, it is our reluctant conclusion that the behavior of any species cannot be adequately understood, predicted, or controlled without knowledge of its instinctive patterns, evolutionary history, and ecological niche.

—K. Breland & M. Breland3

A creature can learn new things during its lifetime (ontogeny), and its species can learn new things during evolutionary history (phylogeny).

These dimensions of learning can have some dramatic differences, for instance:

  • Ontogeny—all that is learned is lost when the individual dies, unless the creature has developed the origins of culture, such as written language.
  • Phylogeny—evolution figures out new ways to use old structures, makes more copies of some structures, builds new superstructures on top of them, etc.

Another space to consider is external versus internal behavior: Are we talking about the learning of a biological creature, as revealed by its behavior during its ontogenetic trajectory? Or are we talking only about the abstract mental concepts happening internally, without much care for the environment or the life of the creature in any real context? I say we should always consider both.

Aaron Sloman & Jackie Chappell’s proposal for an altricial-precocial spectrum for robots4 is directly related to phylo-onto / evo-devo considerations. Precocial organisms have lots of phylogenetic learning through evolution but hardly any ontogenetic learning, whereas altricial (aka nidicolous) organisms rely on lots of ontogenetic learning.

Any given organism actually has a mix of precocial and altricial capabilities. Most species that have shifted to ontogenetic learning as a key feature still have primitive, evolutionarily old, phylogenetically-learned information / skills—they may be helpless at birth, but they are not blank slates.

Even species with the most advanced altricial capabilities (such as humans) still need phylogenetically-learned systems of learning; otherwise, how could they start learning so much after being instantiated? As Konrad Lorenz said:5

everything learned must have as its foundation a phylogenetically provided program if, as they actually are, appropriate species-preserving behavior patterns were to be produced

Of course, the foundation may include new learning programs that were learned by the phylogenetic program, and so on.

The realization of precocial vs. altricial abilities leads to questions about innate knowledge structures and bootstrapping. The altricial capability of rapid discontinuous learning is probably not the same thing as the slower, phylogenetically older reinforcement learning.4 Also, these mechanisms should be able to operate not just on representations of external reality but also on purely internal mental structures. These ideas may be complementary to Piaget’s theories of child development.

Which chunks an altricial individual learns will be influenced by physical actions possible for its body, the environment and its affordances, and the individual’s history. These factors could produce different kinds of understanding and representation of space, time, motion, causality and social relations in different species, or in similar individuals in different environments…If all this is correct, after evolution discovered how to make physical bodies that grow themselves, it discovered how to make virtual machines that grow themselves.

—Sloman & Chappell6

A possible candidate for the altricial-associated abstract chunks for learning (and for bootstrapping to higher abilities) is the frame.7 We might be born with a small set of frames, which are abstract stereotypes of the things we most need to know in order to survive and to acquire more advanced knowledge. That more advanced knowledge consists of more-detailed knowledge, more frames, frame-arrays, more sub-frames (possible recursion), frame-based semantic networks, etc.
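To make that concrete, here is a minimal sketch (my own illustration, not anything from Minsky’s text) of a frame as a data structure: a named stereotype with inborn default slots, learned slot fillers, and optional sub-frames. All of the names and slot values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class Frame:
    """A stereotype with named slots; unfilled slots fall back to inborn defaults."""
    name: str
    defaults: Dict[str, Any] = field(default_factory=dict)   # phylogenetically supplied expectations
    fillers: Dict[str, Any] = field(default_factory=dict)    # ontogenetically learned specifics
    subframes: Dict[str, "Frame"] = field(default_factory=dict)

    def get(self, slot: str) -> Optional[Any]:
        # Prefer what the individual has learned; otherwise use the inborn default.
        return self.fillers.get(slot, self.defaults.get(slot))

    def learn(self, slot: str, value: Any) -> None:
        # Ontogenetic learning: overwrite or refine a slot during the individual's lifetime.
        self.fillers[slot] = value

# A tiny "inborn" frame that later experience specializes.
face = Frame("face", defaults={"eyes": 2, "expression": "neutral"})
face.learn("expression", "smiling")
print(face.get("eyes"), face.get("expression"))  # 2 smiling
```

The point is only that a small inborn set of such stereotypes could be elaborated during ontogeny into more frames, frame-arrays, and frame-based semantic networks.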

One Substrate to Rule Them All, and in the Darkness Bind Them

[Etcha, CC BY-SA 3.0, via Wikimedia Commons]

It appears to me that most research and development of learning agents to date neither accounts for nor exploits the model nature provides of multi-timescale, multi-abstraction learning. We have a lot of applications and robots with a “born yesterday” complex. This includes machine learning approaches such as deep learning and Transformers, where after training the model is static and “born yesterday,” or whatever day in the past it was baked.

Sometimes this is appropriate, as part of an online learning system of adaptation. But to constantly start from scratch is somewhat ridiculous if one expects to make human-similar intelligence, or intelligence similar to any animal.

It’s tempting to think we’re like an Etch-A-Sketch—made of a single layer that can host complex patterns. And perhaps, as a secondary part of the metaphor, completely erased by a quick shake.

But in both phylogenetic and ontogenetic spaces we aren’t like that. Intelligent life builds on previous structures. And no matter what single type of composition you might highlight, it seems our minds are better described with multiple levels—with various types of psychological scaffolding, multiple levels of abstraction, etc.—as opposed to a complicated pattern in a homogeneous 2-dimensional substrate—what I’m calling an Etch-A-Sketch theory.

The mind is not an Etch-A-Sketch. One might think I’m arguing against a straw man, but you could probably think of many popular approaches that try to simplify cognition into something close to an Etch-A-Sketch theory.

So there are two intermingled concepts here:

  1. Multi-layer and/or multi-substrate cognitive architectures.
  2. Each piece of a cognitive architecture has a multi-dimensional history as a result of learning, development, and evolution (a rough sketch follows this list).
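As a rough illustration of the second concept (my own hypothetical sketch, not a particular architecture from the literature), each component of a layered architecture can carry a record of which layer it lives in and whether it was shaped by evolution, by a designer, or by lifetime learning:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Provenance:
    """One event in a component's history: which process changed it, on which timescale."""
    source: str   # e.g. "phylogeny", "designer", "ontogeny"
    note: str

@dataclass
class Component:
    name: str
    layer: str                      # e.g. "reactive", "deliberative", "meta"
    history: List[Provenance] = field(default_factory=list)

    def record(self, source: str, note: str) -> None:
        self.history.append(Provenance(source, note))

# A mind as a heterogeneous assembly rather than one homogeneous substrate.
architecture = [
    Component("reflexes", layer="reactive"),
    Component("spatial_map", layer="deliberative"),
    Component("self_model", layer="meta"),
]
architecture[0].record("phylogeny", "inherited startle response")
architecture[1].record("ontogeny", "learned layout of home environment")
```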

Designing Evo-Devo Minds

Organisms or products that are slightly more complex than any before sometimes succeed in the ecology or the marketplace, and thus raise the upper limit. Paleontological and historical records can be scanned for this upper limit, which mostly rises over time. There are reversals in both records: notably mass extinctions and civilization collapses. But, after a recovery period, progress resumes, often faster than before. If complex entities succumb to disaster, many of their component innovations may yet survive somewhere. Classical learning weathered the collapse of Roman civilization in the remote Islamic world. Some inactive DNA sequences seem to be archives of ancestral traits. Extinct large organisms may leave much of their heritage behind in smaller relatives, who can rapidly “re-evolve” size and complex adaptations by simple mutations in regulator genes. The re-expression of old good ideas in odd combinations often initiates an explosion of innovation. Such happened culturally in the Renaissance and biologically in the Paleocene, when birds and mammals ran riot in the post-dinosaur world.

—Hans Moravec8

Regardless of the degree of recapitulation—repeating evolutionary stages in individual growth / development—in certain organisms, an artificial system can certainly be designed to transform phylogenetic knowledge into ontogenetic structures, or to configure generic, historically proven templates for a specific robot when it is born.
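For instance (a purely hypothetical sketch with invented names, not a description of any existing system), phylogenetic knowledge could live in a species-level template that gets instantiated into a concrete configuration each time a robot is born:

```python
from copy import deepcopy
from typing import Optional

# A hypothetical species-level "phylogenetic" template: defaults proven across prior generations.
SPECIES_TEMPLATE = {
    "locomotion": {"gait": "walk", "max_speed": 0.5},
    "learning": {"rate": 0.01, "curiosity_bonus": 0.1},
    "reflexes": ["avoid_collision", "seek_charger"],
}

def birth(robot_id: str, sensors: list, overrides: Optional[dict] = None) -> dict:
    """Instantiate ontogenetic structures for one robot from the species template."""
    config = deepcopy(SPECIES_TEMPLATE)            # inherit the phylogenetic defaults
    config["id"] = robot_id
    config["sensors"] = sensors                    # body-specific details fixed at "birth"
    for key, value in (overrides or {}).items():   # designer tweaks or mutations
        config[key] = value
    return config

robot = birth("r-001", sensors=["lidar", "bump"],
              overrides={"learning": {"rate": 0.05, "curiosity_bonus": 0.1}})
```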

Birth of a robot is another aspect in which artificial systems can behave very unnaturally—a robot can be born many times, copied, paused, etc. Indeed, one could even experiment with brain-swapping.

Likewise, regardless of whether animal behavior can actually contribute to genetic variation through interactions with the environment in which future natural selection takes place (phylo- or geno-copying), a simulation can be arranged for that to happen. Indeed, one could design an agent and environment in which there are direct causal changes to genotypes from changes in behavior: a virtual Lamarckianism or Baldwin Effect.
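A toy version of that causal link might look like the following sketch, under made-up fitness and learning functions. In the Lamarckian variant, whatever an individual learns is written directly into the genome it passes on; with lamarckian=False, selection still acts on learned behavior but the genome itself changes only through selection and mutation, which is roughly the Baldwin Effect flavor.

```python
import random

def fitness(trait: float) -> float:
    # Toy environment: behavior closest to 1.0 is best.
    return -abs(1.0 - trait)

def lifetime_learning(trait: float, steps: int = 10, rate: float = 0.1) -> float:
    # Ontogeny: the individual nudges its behavior toward what works during its own life.
    for _ in range(steps):
        trait += rate * (1.0 - trait)
    return trait

def evolve(generations: int = 50, pop_size: int = 20, lamarckian: bool = True) -> float:
    population = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        learned = [lifetime_learning(g) for g in population]
        # Selection acts on the fitness of behavior *after* learning.
        ranked = sorted(range(pop_size), key=lambda i: fitness(learned[i]), reverse=True)
        parents = ranked[: pop_size // 2]
        # Lamarckian: offspring inherit the learned value; otherwise the untouched genotype.
        source = learned if lamarckian else population
        population = [source[random.choice(parents)] + random.gauss(0.0, 0.05)
                      for _ in range(pop_size)]
    return sum(population) / pop_size

print(evolve(lamarckian=True), evolve(lamarckian=False))
```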

We should also be concerned with a somewhat different kind of phylogenetic tree: that of dev (development) trees and configuration management (e.g. software version control). Within those may be low-level program / language-specific hierarchies that are in their own way similar to phylogenetic trees (perhaps we should call these sub-trees of the development tree taxa). Note that I’m using the word “tree” here very loosely.

Skull of Arctocyon primaevus, a Paleocene mammal [Ghedo, Public domain, via Wikimedia Commons]

There are many ways a human designer (or the human’s automated tools) could contribute to and cause cladogenesis or speciation—in other words, the generation of clades or new species. Clades could be generated within the simulation environment, inclusive of the entire architecture itself, and/or at the level of configuration management. I don’t know exactly how this would be managed, represented, visualized, etc., but at the very least it all must be carefully backed up and documented constantly.

An artificial world could do strange things not possible (that I know of) in the natural world, such as experimenting with an organism that can produce hundreds or thousands of progeny within its own lifetime, and furthermore giving this species a lifespan long enough that an individual can live through the births of n levels of grandchildren, with n in the dozens or hundreds.

Then we would have so many simultaneously living relations that it would perhaps be useful to have a dynamic, online ontogeny tree, which could show various characteristics not visible in the species phylogeny, especially if you allow other extremes or self-modifications not possible in the natural world.
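One way to keep such a tree (again, just an illustrative sketch with invented names) is a lineage record that is updated online as individuals are born and die, and that can be queried while a long-lived ancestor and its many levels of descendants are all still around:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Individual:
    ident: str
    parent: Optional[str] = None
    children: List[str] = field(default_factory=list)
    alive: bool = True

class LineageTree:
    """An online ontogeny/lineage tree: updated as individuals are born and die."""
    def __init__(self) -> None:
        self.members: Dict[str, Individual] = {}

    def birth(self, ident: str, parent: Optional[str] = None) -> None:
        self.members[ident] = Individual(ident, parent)
        if parent is not None:
            self.members[parent].children.append(ident)

    def living_descendants(self, ident: str) -> List[str]:
        # All descendants of one long-lived individual that are still alive right now.
        out, stack = [], list(self.members[ident].children)
        while stack:
            node = stack.pop()
            if self.members[node].alive:
                out.append(node)
            stack.extend(self.members[node].children)
        return out

tree = LineageTree()
tree.birth("a0")
tree.birth("a1", parent="a0")
tree.birth("a2", parent="a1")          # grandchild born while "a0" still lives
print(tree.living_descendants("a0"))   # ['a1', 'a2']
```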

In a simulation, a creature could be allowed to modify its own genome—such a virtual Lamarckian world could show improvements on, or the dangers of, unnatural evolution.

where there is the need for learning there is room for error

—Edwin Hutchins9

How errors are recognized, and how they feed back into the various systems and development / learning dimensions, is probably very important as well. Something to discuss in the future.

Also, I haven’t dived into the ecological aspects of these minds as ontogenetic systems existing in larger dynamic environmental systems. I’ve talked about that in previous posts and will continue to explore it in future posts.


1    G. Marcus, Kluge: The Haphazard Construction of the Human Mind, Houghton Mifflin Harcourt, 2008.
2    W. G. Walter, The Living Brain, 1953, p. 271.
3    K. Breland & M. Breland, “The Misbehavior of Organisms,” American Psychologist, 1961; reprinted in R. W. Hendersen (Ed.), Learning in Animals, Hutchinson Ross, 1982.
4    A. Sloman & J. Chappell, “The Altricial-Precocial Spectrum for Robots,” IJCAI-05, 2005. http://www.cs.bham.ac.uk/research/cogaff/alt-prec-ijcai05.pdf
5    K. Z. Lorenz, King Solomon’s Ring, New York: Time, 1962 (1952).
6    A. Sloman & J. Chappell, “Altricial self-organising information-processing systems,” Int. Workshop on The Grand Challenge in Non-Classical Computation, April 2005. http://www.cs.bham.ac.uk/research/cogaff/summary-gc7.pdf
7    M. Minsky, The Society of Mind, Simon & Schuster, 1986.
8    H. Moravec, “Robots, Re-Evolving Mind,” Cerebrum, vol. 3, no. 2, pp. 34-49, 2001. http://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2000/Cerebrum.html
9    E. Hutchins, Cognition in the Wild, MIT Press, 1995.

