Re: The Spirit in the Machine

Don Mikulecky (mikulecky@GEMS.VCU.EDU)
Fri, 18 Apr 1997 16:09:48 -0400



Don Mikulecky replies:
(Mikulecky@gems.vcu.edu)
http://views.vcu.edu/complex/

Bruce Edmonds wrote:

> > From a layperson's perspective, I don't think machines will ever be
> > more than what their designers have put into them. Learning can be
> > put in, but how can information that we don't understand be extended
> > into a machine's system?
>
> Answer 1: from the environment they are situated in. Say we gave a
> machine a simple learning mechanism and sent it out (e.g. a web
> robot); then it would learn information we did not put in, because it
> has access to different information.

The question is "what kind of information?" I submit it can get all the
syntactic information you wish (algorithmic) but no semantic
information.

>
> Answer 2: Even for self-contained, purely formal tasks this is
> possible. For example, I set up a genetic programming algorithm that
> evolves a population of purely formal programs to solve a difficult
> problem (e.g. a symbolic integration). The solution it finds could be
> completely new to me - I did not put the solution into it; it
> emerged.
>
> The naive (information in >= information out) equation misses the
> point about the possible intricacies of intention. In this case I
> intended that the system come up with a solution I could not
> foresee. Intention works at different levels: one can intend (level
> A) the unintended (level B).
>
> In some abstract sense the information comes from the random-number
> stream plus the initial configuration, but a Shannon-type formulation
> of information is not adequate here, as the output has *meaning*
> (note that the technique works as well with a pseudo-random number
> generator as with a real-world random number stream).

Only for purely formal tasks, I would say. The "getting" of information
and the utilizing and storage of it are different matters. Only
syntactic information could be processed. This is the mistake von
Neumann made with his universal constructor.
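[For concreteness, the kind of genetic-programming run Edmonds describes can be sketched as below. This is an editorial illustration, not his actual setup: a simple polynomial-fitting target stands in for his harder symbolic-integration problem, and all names and parameters are illustrative. The point it demonstrates is his: the winning program emerges from the random stream plus selection, and need not be one the experimenter wrote down in advance.]

```python
# Minimal genetic-programming sketch: evolve expression trees over
# {x, small constants, +, -, *} toward a target function.  Illustrative
# only; the target x**2 + x stands in for a harder formal task.
import random

random.seed(0)  # a pseudo-random stream works as well as a "real" one

def target(x):
    return x * x + x

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def rand_tree(depth=3):
    """Build a random expression tree (nested tuples, leaves 'x' or int)."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree at the point x."""
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Total absolute error against the target on sample points (0 = exact)."""
    return sum(abs(evaluate(tree, x) - target(x)) for x in range(-5, 6))

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left), right)
        return (op, left, mutate(right))
    return rand_tree(2)

# Evolve: keep the best quarter each generation, refill by mutation.
pop = [rand_tree() for _ in range(200)]
for gen in range(40):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break
    survivors = pop[:50]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

Nothing in the loop encodes the answer; whatever `best` turns out to be was selected, not supplied. Mikulecky's objection above is precisely that this remains syntactic: the tree that scores zero error has no meaning to the system that evolved it.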

>
> > If the materials that make up these machines were good processors
> > of this type of knowledge, then why did biological lifeforms pick
> > it up and evolve, rather than resins etc.?
>
> Because some mechanisms are sufficiently delicate and brittle that
> the requisite support mechanisms for them must evolve first, but
> after these are available they are free to evolve (after all, we
> *did* evolve them).

Yes...the ones we evolved are mechanisms. Organisms are much more than
mechanisms, and that's why they are so different. Organisms can deal
with semantic information...they are not mere machines.

>
>
> ---------------------------------------------------
> Bruce Edmonds,
> Centre for Policy Modelling,
> Manchester Metropolitan University, Aytoun Bldg.,
> Aytoun St., Manchester, M1 3GH. UK.
> Tel: +44 161 247 6479 Fax: +44 161 247 6802
> http://bruce.edmonds.name

Best regards,
Don Mikulecky
