[Fwd: Free will]

mikuleck (mikuleck@HSC.VCU.EDU)
Thu, 11 Sep 1997 09:49:39 -0400



Date: Thu, 11 Sep 1997 08:37:11 -0400
From: mikuleck@hsc.vcu.edu (mikuleck)
Organization: VCU Dept of Physiology
To: walt@anice.net.ar
CC: Multiple recipients of list PRNCYB-L <PRNCYB-L@BINGVMB.BITNET>
Subject: Re: Free will


Don Mikulecky replies:
http://views.vcu.edu/~mikuleck/

Walter Fritz wrote:

> Up to now there are eight answers to my original message about free
> will. Thanks to you all. This should give me the chance to make a
> number of comments.
>
> > The comparison of human thought with artificial neural networks
> > seems like the worst form of reductionism! Clearly you don't think
> > this line of speculation has any chance of going anywhere?
>
> First of all I am not talking about artificial neural networks. I am
> talking about artificial intelligent systems. These are computer
> systems having their own main objective, they have senses, they create
> a representation of the present situation, they choose an appropriate
> action and they act. At the start of the run they do not have any
> experience. They learn during their run from the environment and with
> time are able to react reasonably to the present situation. I have
> built several. (You may wish to look up details in
> http://www.anice.net.ar/intsyst/artis.htm)

Yes, I realized that...just not bothering to name all the methods of AI. They
are all equally inadequate. This is what Rosen calls "mimetics". They are all
based on the Turing test and Church's Thesis, which he very effectively
discredited a long time ago.
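
For concreteness, the sense-represent-choose-act cycle Fritz describes can be
sketched as a small loop. This is only an illustration of the idea, not
Fritz's actual IS-software; every name, situation, and reward here is
invented:

```python
import random

class IntelligentSystem:
    """Toy sketch (hypothetical, not Fritz's IS-software) of the
    sense-represent-choose-act cycle: it starts with NO experience and
    accumulates (situation, action, reward) triples during its run."""

    def __init__(self, actions):
        self.actions = actions
        self.memory = []  # experiences gathered during the run

    def choose(self, situation):
        # Recall actions tried before in this situation.
        known = [(a, r) for (s, a, r) in self.memory if s == situation]
        if not known or random.random() < 0.2:  # no experience, or explore
            return random.choice(self.actions)
        return max(known, key=lambda ar: ar[1])[0]  # best remembered action

    def learn(self, situation, action, reward):
        self.memory.append((situation, action, reward))

# One "run": the system starts empty and gains experience as it acts.
agent = IntelligentSystem(actions=["left", "right"])
for _ in range(200):
    a = agent.choose("door")
    agent.learn("door", a, reward=1 if a == "right" else 0)
```

Note that the 20% random exploration is essential; a purely greedy version
freezes on the first rewarded action, which is exactly the trap Fritz
describes later for game-playing systems.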

>
>
> Why do you think that a comparison of the activity of an artificial
> system to that of a natural system (human thought) is not reasonable?
> I am not comparing the physical systems, just the functions. Both are
> systems and both have the same functions. ("function" as used in value
> analysis; i.e. the way they work).
>
> This is not "a line of speculation". It is an observation of the
> functioning of artificial brain processes.

To call it an "artificial brain" is SURELY speculation!

> This is the beautiful
> thing. In a human brain we can hardly observe how it functions. But in
> an artificial brain we can observe how it gathers experiences, how it
> abstracts from two experiences, how it makes concepts and how it
> chooses how to act.

Yes...like the guy looking under the lamppost for his lost keys.

>
>
> > Initial conditions of the system is a computer with a working
> > IS-software plus information on expected outcomes from various
> > options.
>
> This is not quite right. The initial conditions are a computer with a
> working IS-software, but not "information on expected outcomes."
>
> The artificial IS starts with NO information about its environment,
> none at all. It gains information (experience) during its many runs.
> The info is stored in a memory file. But that is not all. The IS makes
> generalizations and abstractions. Finally when reacting it applies the
> information it has. Supposing it had LEARNED the concept triangle and
> the concept two. When asked to draw two triangles the IS takes info
> from two different experiences and acts.
> I have found that it is difficult for a human being to predict which
> combination it will use this time to come up with a reasonable answer.

Yes, and all this misses the point. It is merely a machine and the brain is
not.

>
>
> > the final state will be either pre-determined...
>
> I suppose that for an omniscient being the final state may be
> predetermined, but for a human being the state is not knowable. (See
> chaos theory) In a complex system, in which many parts interact, it
> may not be possible, FOR US, to predict an outcome.
> It looks to me that the outcome is indeterminate if we cannot
> determine it. After all we have to look at things from our point of
> view.
>
> > The problem is to understand the origin of the will
>
> In the IS the "will" is something built in. It always tries to achieve
> its built-in main objective. It may use subobjectives to reach its
> main objective. This "trying" is the function "will" performs in a
> human.
>
> > Roger Penrose seems to think that quantum mechanical uncertainties
> > occurring in the microtubules of neurons where their effects are
> > amplified can give some basis for free will. So on that perspective
> > naturalism is not incompatible with some sort of free will. Dan
> > Dennett's model of consciousness provides another possible
> > naturalistic mechanism. Perhaps Gerald Edelman's ETNGS also does so.
>
> Here I would like to contribute my experience. I and others tried to
> build artificial ISs which always choose the action they had determined
> to be the best (based on their incomplete experience). We have found
> that this is not a good strategy. In LEARNING to understand and play
> games a certain level is reached. Then the system decides it already
> knows all about this game and just does what it considers best. It does
> not continue to learn possible alternative and better actions.
> So the best method seems to be to have it select a list of plausible
> moves, but then not to choose the best move but to choose at random
> from this list of plausible moves, choosing the better moves more
> often. (By the way, it has to learn what a "game" is. It has to deduce
> permitted moves and good moves. All this from interactions with a
> human.)
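
The strategy Fritz describes (choose at random from the plausible moves, but
weight the better moves more heavily) can be stated in a few lines. The move
names and quality scores below are invented for illustration:

```python
import random

def pick_move(plausible_moves):
    """Choose at random from the plausible moves, favouring the better
    ones, so the system keeps exploring instead of freezing on one move.
    `plausible_moves` maps each move to its learned quality score."""
    moves = list(plausible_moves)
    weights = [plausible_moves[m] for m in moves]
    return random.choices(moves, weights=weights, k=1)[0]

# Invented example scores: "e4" is chosen most often, but "d4" and "c4"
# are still played often enough for the system to keep learning them.
scores = {"e4": 5.0, "d4": 3.0, "c4": 2.0}
counts = {m: 0 for m in scores}
for _ in range(1000):
    counts[pick_move(scores)] += 1
```

The weighted draw is what prevents the premature "I already know this game"
plateau: weaker moves still get played occasionally, so their scores can be
revised upward if they turn out better than believed.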
>
> > Physics tries to find a cause for everything
>
> I have quite a bit of trouble with the concept "cause". It seems to me
> we are using the same word for two quite distinct concepts. Below I
> repeat some thoughts from http://www.anice.net.ar/intsyst/correlat.htm

The causality that matters here is Aristotelean...the four "becauses" need to be
furnished. Then the machine-like qualities of what you describe become crystal
clear.

>
>
> Many people say that: "The fact that A precedes B, does not
> necessarily mean that A is the cause of B." For instance, it is not
> true that: "A certain star rises above the horizon -- this means that
> Spring is here. The star caused Spring." To "cause" means something
> other than just precedence in time.
>
> If we think about the above paragraph, we see that there are really
> two distinct cases. If A is an intelligent system who has an objective
> and acts to produce B (a change in the environment), then A truly
> caused B (this is a case of cause1). For instance, a person is thirsty
> and drinks a glass of water. Here the person (A) is the cause1 of the
> glass being empty (B). We use the word "cause1" to differentiate the
> concept related to the word "cause" from a different one, namely
> "cause2." Both concepts are "causes"; they are similar but not
> exactly the same concept.
>
> When we consider happenings in nature where no intelligent system
> intervenes, the case is different. Here we only have a statistical
> correlation. For instance, in 80% of the cases (or 99.99% of the
> cases) when A occurs, then B occurs later. Here we can say: A is the
> cause2 of B. This is even more certain if we have many examples where
> B occurred and was not preceded by A. In other words, we can talk
> about cause2 if there is a strong correlation of A with B, and a weak
> correlation of B with A. Here A and B can both be either structures or
> transformations; that means, they can be either objects or
> occurrences. An example of cause2 is: The wind causes2 movement of the
> leaves of a tree. In 99% or more of the cases whenever a wind rises
> (A), the leaves of the palm tree move (B). However sometimes the
> leaves of the palm tree move (if a horse rubs itself against it) and
> no wind rises. (A weak correlation B -> A.) Therefore the movement of
> the leaves is not the cause2 of the rising of the wind, but rather the
> wind is the cause2 of the movement of the leaves. This is quite
> obvious. Here we talk about cause2 because the wind is not an
> intelligent system; it has no objective and no selection of actions.
> The wind really does not "act", it just occurs.
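
Fritz's cause2 criterion (a strong correlation of A with B but a weak
correlation of B with A) can be made operational over a log of timed events.
The event log, the time window, and the threshold below are all invented for
illustration:

```python
def frac_followed(events, x, y, window=1):
    """Fraction of x-events followed by a y-event within `window` steps:
    a crude estimate of the correlation of x with y."""
    xs = [t for t, e in events if e == x]
    ys = [t for t, e in events if e == y]
    if not xs:
        return 0.0
    return sum(any(t < u <= t + window for u in ys) for t in xs) / len(xs)

def frac_preceded(events, y, x, window=1):
    """Fraction of y-events preceded by an x-event within `window` steps:
    a crude estimate of the correlation of y with x."""
    xs = [t for t, e in events if e == x]
    ys = [t for t, e in events if e == y]
    if not ys:
        return 0.0
    return sum(any(t - window <= u < t for u in xs) for t in ys) / len(ys)

def is_cause2(events, a, b):
    """A is cause2 of B when A is almost always followed by B, while the
    reverse correlation is weaker: B sometimes occurs without a preceding
    A (the 0.8 threshold is an invented cutoff for "strong")."""
    ab = frac_followed(events, a, b)
    ba = frac_preceded(events, b, a)
    return ab >= 0.8 and ba < ab

# Invented log of Fritz's example: every wind event is followed by leaf
# movement, but twice a horse moves the leaves with no wind beforehand.
log = []
for t in range(0, 40, 4):
    log += [(t, "wind"), (t + 1, "leaves")]
log += [(101, "leaves"), (105, "leaves")]  # horse, not wind
```

On this log the test labels the wind as cause2 of the leaf movement but not
the reverse, matching the palm-tree example above.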
>
> There are also other cases of correlation: There is a strong
> correlation of A with B and also a strong correlation of B with A: a
> reciprocal correlation. For instance, a moving electric charge is
> present when there is a changing magnetic field and a changing
> magnetic field is present when there is a moving electrical
> charge. (But it doesn't seem right to say one "causes" the other.) Or:
> The sun dips below the horizon, night is here. Or: Night is starting
> and the sun goes below the horizon.
>
> If more factors are involved, say A, B, C and D, we can even have a
> circular correlation. In all cases, when a correlation nears 100% we
> call it a natural law.
>
> It appears to be useful if we divide the abstract concept for "cause"
> into two concrete concepts: "cause1" if there was an act of volition,
> and "cause2" if it is just a case of statistical correlation,
> occurring in nature without the involvement of an intelligent system.
> Really, in cases of cause2 and to avoid confusion, it would be better
> to talk only of "occurrence" instead of "cause" and "effect."
>
> Up to here I repeated the Internet page. Sorry that it was somewhat
> lengthy, but sometimes making an explanation short is difficult.
>
> I hope the above has helped to clarify some points.
>
> Best regards and clear thinking
>
> Walter Fritz

Respectfully, Don Mikulecky
