Re: computability, what computers can do

Francis Heylighen (fheyligh@VNET3.VUB.AC.BE)
Wed, 13 Sep 1995 16:07:07 +0100


Don:
>> Chaitin's work is somewhat mind-boggling in its impact on what we can
>> expect EVER to come from computers. Certainly, we cannot expect them to
>> reproduce themselves, given these limits.

Bruce:
>All this theory is *only* true of self-contained programs that do
>not interact with an environment. In this way they correspond to
>the thermodynamically closed systems of classical physics (which are
>equally limited). To illustrate this: it already makes a considerable
>difference if you allow a "random oracle" as part of the system;
>then most of these results no longer hold.

I agree with Bruce's general view that there is no essential difference
between computer systems and organic systems as far as possibilities for
autonomy, self-organization, autopoiesis, replication, etc. are concerned.
I also like his analogy with "open vs. closed" thermodynamic systems,
although I have always argued that you don't need an open system to have
some form of self-organization.

The argument for the existence of self-organization in closed and even
"computational" systems is very simple and was made long ago by the
founding father of cybernetics, W. R. Ashby, who formulated the principle
as "every isolated, determinate dynamic system obeying unchanging laws
will develop organisms that are adapted to their environments" (Ashby W.R.
(1962): "Principles of the Self-Organizing System", in: Principles of
Self-Organization, von Foerster H. & Zopf G.W. (eds.), Pergamon, Oxford,
p. 255-278). If the argument applies to closed, deterministic systems, it
applies all the more to open, stochastic systems, independently of whether
these are "computational" or "organic".

I have been debating this earlier with representatives of the Rosen
school, who do make an essential distinction between "organisms" and
"computers". My point is that, because existing formal systems or computer
simulations have a number of obvious limitations, one should not jump to
the conclusion that computer systems will never be able to transcend these
limitations in any form. The "Goedel"-type arguments against AI or Alife
have always struck me as very artificial constructions, in which a very
specialized set-up is used to prove a very specific point, namely that a
particular formal system is not capable of doing one particular thing,
while implying that non-formal systems (e.g. the human mind) don't have
any such intrinsic limitations. Especially the latter point strikes me as
rather absurd.

This is not to deny that a lot of AI or Alife research overlooks
essential points. But those points should be made without immediately
throwing out the baby with the bathwater by claiming that all computer
models are by definition bad. Good computer models should allow an
"open-ended evolution", where there are no a priori fixed building blocks,
constraints or fitness functions, and where different emerging "creatures"
can co-evolve, driving each other's evolution in a priori unforeseeable
directions and allowing the emergence of new systems at a higher level of
complexity. I quote from an earlier post I sent to PRNCYB:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
It seems to me that most of the arguments that people from the Rosen/
Kampis/Pattee/Cariani (Peter, are you there?) school make against the
possibility of artificial life do not apply to such "open-ended"
environments:

* The "infinite regress" (if I have understood it well) is already there,
since creature feeds on creature feeds on creature feeds on ... There is
no bottom level.

* The argument that any simulation is necessarily a finite, closed system
locked in the restricted space of a computer memory does not seem very
meaningful, as the solar system is also a closed finite system dependent on
the energy dissipated by the sun, yet it is all we need to sustain organic
life.

* Also the idea that computer programs are necessarily deterministic,
formal systems restricted by all kinds of Goedelian principles does not
seem relevant. If you don't believe that the "random generators" used to
produce variation are really random, just use an external source of noise
(e.g. Brownian motion of air molecules, or the values of the stock market,
or whatever) and feed that into the computer to generate variation (see
the sketch after this quoted post). This will not make any observable
difference to the system's evolution.

* "Software" vs. "hardware" is not essential either: just use the "soft"
alife program to steer a "hard" robot.

Some more arguments can be found in my 1991 paper "Modelling Emergence":

ftp://is1.vub.ac.be/pub/projects/Principia_Cybernetica/Papers_Heylighen/Modelling_Emergence.txt
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
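
To make the "external noise" point in that quoted post concrete, here
is a minimal sketch (in Python; os.urandom merely stands in for a
digitised physical noise source such as Brownian motion or stock
quotes, which evidently cannot be tapped in a few lines of code):

# os.urandom stands in for a digitised external noise source; the rest
# of the program cannot tell where the bits come from.
import os

def external_bits(n):
    """Return n bits drawn from the external entropy source."""
    data = os.urandom((n + 7) // 8)
    return [(byte >> i) & 1 for byte in data for i in range(8)][:n]

def mutate(genome, threshold=50):
    """Flip each bit of the genome with probability threshold/1024
    (about 5% by default), using only externally supplied noise to
    make the decisions."""
    noise = external_bits(len(genome) * 10)        # 10 raw bits per gene
    out = []
    for j, gene in enumerate(genome):
        chunk = noise[j * 10:(j + 1) * 10]
        value = sum(b << k for k, b in enumerate(chunk))   # 0 .. 1023
        out.append(gene ^ (1 if value < threshold else 0))
    return out

print(mutate([0] * 64))   # a 64-bit genome after one round of mutation

The rest of the evolutionary loop only ever sees a stream of bits; it
cannot tell, and does not care, whether those bits were produced by a
formal pseudo-random generator or by the physical world.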

>> ......of course our brains and the whole process of evolution are
>> vastly more complicated than formal networks or toy evolutionary systems."
>> I won't take up more space with quotes.
>
>True, so far, but the whole inter-connected system
>(computers+networks+internet) is heading that way!

Again, I agree with Bruce: if you include the whole computer network in
your self-organizing system, you automatically get an "open-ended"
environment of unimaginable complexity, allowing all kinds of emergent
structures that no single person or computer could predict.

Finally, about "organizational closure", as in autopoietic systems
producing themselves via a cycle of processes producing subsequent
ingredients. It has been claimed during this discussion (perhaps by Jeff, I
am not sure), that no computer system or formal system could develop
closure. On the contrary, closure is one of the easiest things to model.
(in fact, Ashby's argument could be reinterpreted just in terms of the
automatic emergence of closure). Suppose you have a deterministic (but not
reversible process) where every next state is unambiguously determined by
the previous state:

A -> B -> C -> D -> E -> F ->...

It suffices that one of the states the system reaches leads to a state
through which the system has already passed, e.g. E -> C. From that moment
on, the system will cycle indefinitely through C, D and E:

C -> D -> E -> C -> D -> E -> C -> ...

The dynamical system has reached a cyclic attractor. Now reinterpret the
state transition X -> Y as the causation or production of Y by X. The
cycle above is now "closed to efficient causation". Once the system is in
that cycle, it is impossible for an outside observer to determine how the
cycle itself was caused, since the information that the system entered the
cycle through A and B has been irreversibly lost. It might as well have
entered the cycle through K -> L -> E, T -> S -> E or P -> Q -> D; the
result would have been the same.
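
For concreteness, here is a minimal sketch of this argument (in Python,
using the same letters as above; the entry paths are of course just
arbitrary examples):

# A deterministic, non-reversible transition rule in which E maps back
# to C. Whatever the entry path, the trajectory ends up cycling through
# C -> D -> E, and the cycle carries no trace of how it was entered.

step = {
    "A": "B", "B": "C",            # one possible entry path
    "K": "L", "L": "E",            # another possible entry path
    "C": "D", "D": "E", "E": "C",  # the closed cycle
}

def trajectory(state, length=10):
    """Follow the deterministic rule for a given number of steps."""
    path = [state]
    for _ in range(length):
        state = step[state]
        path.append(state)
    return path

print(trajectory("A"))   # A B C D E C D E C D E
print(trajectory("K"))   # K L E C D E C D E C D

After the first few steps the two printed trajectories are
indistinguishable; nothing in the cycle C -> D -> E betrays whether it
was entered via A and B or via K and L.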

The origin of autopoietic systems, hypercycles and other "closed"
organizations associated with life can be conceived of in the same way.
Just interpret "X -> Y" as "molecule X produces molecule Y (possibly with
the help of molecules Z, Q, R, ...)". There is no need to postulate
strange, non-causal events to explain the emergence of autonomous,
self-sustaining, cyclic processes.

________________________________________________________________________
Dr. Francis Heylighen, Systems Researcher fheyligh@vnet3.vub.ac.be
PESP, Free University of Brussels, Pleinlaan 2, B-1050 Brussels, Belgium
Tel +32-2-6292525; Fax +32-2-6292489; http://pespmc1.vub.ac.be/HEYL.html