Re: Free will

John J. Kineman (jjk@NGDC.NOAA.GOV)
Fri, 16 Jan 1998 16:28:11 -0700


A while back (September) I wrote a number of comments in response to the
discussions about free will. These messages did not appear in Bruce's
archive, so I'm assuming they got caught up in one of the listserver
crashes that occurred about that time.

Anyway, I did not want the comments to be lost, so I have collated and
edited them, referencing the original messages they are in response to as
best I could. It is unfortunate that they are now out of sequence with
other responses, but I hope they contain some useful ideas regardless. Some
recent thoughts (on re-reading the messages) are incorporated in brackets [].

Here's an index of the comments:

I. Response to Walter Fritz on a USEFUL concept of free-will:
II. Comment to Don Mikulecky on his reply to Walter Fritz (on useful
concept of free will):
III. Response to Alexei Sharov on explaining will:
IV. Response to Nathan Lauster on relation to Quantum Mechanics:
V. Response to Don Mikulecky's response to Alexei Sharov on "can a system
with free will decide to not have free will?":
VI. Further comments on "can a system with free will decide to not have
free will?":
VII. Response to Bruce Edmonds on degrees of free will:
VIII. Response to Nathan Lauster on pragmatic use of "free will":
IX. Response to Francis Heylighen on coherent meaning of FW and if animals
and babies have free will:
X. Comment to Alexei Sharov on conscious computers:
XI. Comment to Arno L. Goudsmit on complementarity paradox:

----------------------------------------------------------------------------

I. Response to Walter Fritz on a USEFUL concept of free-will:

Hi Walter,

I am concerned with going beyond this kind of approach if that is possible
(it may not be, in which case yours is the "useful" one).

My concern is the assertion that the "functions" are the same, and that a
living organism can act only based on stimuli and memory. This seems to be
at the heart of the matter of "free will." If we use the words
"innovation" or "creativity" as part of the free-will idea for organisms,
does it not mean something must enter the equation that cannot be
calculated? Admittedly we are loath to do this because it leaves the door
open for all kinds of suppositions about the source of such information.
But my reading of the philosophy of science since the quantum paradigm
suggests that we are inescapably in that kind of universe, and thus all
explanations, especially those concerned with living systems, must
incorporate ontic uncertainty in a formal and "causal" way. I place
"causal" in quotes, because this usage steps somewhat out of the context we
formerly had for defining the word - i.e., cause-effect mechanical
linkages. We are philosophically out of that box, I think. I can ignore (for
the purposes of science) the theological interpretations of this possible
window of opportunity for other-worldly cause by believing that it is
simply unknowable scientifically (i.e., any possible super order existing
in what appears to us to be quantum uncertainty) [but I cannot ignore that
something is going on here that is more than we can capture in the
objective view]. The hidden dimension models for QM are then no more valid
than mythology or theology, and vice versa, I suspect.

Getting back to your point, however, I tend to believe the model you
present, except for the subtle (and secondary) consideration that something
entirely unpredictable, yet involving some kind of order that is not
apparent in the experience/memory set, can enter the choice. If we further
consider that all biological systems are evolutionary, and evolution
magnifies the effect of extremely subtle changes, then "free-will" in the
sense of an innovative/creative component can eventually manifest major
physical results [through its effect on environment and selection]. Present
evolution models are mechanical in that they would treat this kind of
phenomenon as the result of pure randomness - i.e., a small random variation
becoming self-reinforcing. But our own experience seems to tell us that we
can introduce something entirely new with intention. We do not have the
sense that a major life changing choice, that may be completely underivable
from anything reasonable, is in fact purely random. We may in many
instances be setting off to create something that has not existed before,
and doing so intentionally. I guess my question is how do we account for
the intention in such a case?

>Some people take it for granted that we have a free will and some say
>that all is predetermined.
>It would be well if we could clarify this concept of "free will". By
>clarifying I mean buildig up a USEFULL concept of free will and one
>that corresponds with experience.
>My contribution would be the insights gained while working with
>artificial intelligent systems.
----------------------------------------------------------------------------

II. Comment to Don Mikulecky on his reply to Walter Fritz (on useful
concept of free will):

I have similar reservations about applying a mechanical model to life - I
think we are philosophically "out of that box." (see earlier comment on QM).

However, I DO think we are engaged in studying the analogies that Walter
discusses. Is it not precisely PCP's purpose to make the "comparison of
human thought with artificial neural networks"? And thereby to clarify the
similarities AND hopefully the differences? [I think it is reductionism
only if one believes that human thought can be fully captured in a
mechanical view and derived from mechanical "reality." I think we agree
that it cannot, but I am also curious about how much of this we can mimic
using suitable models. By "useful" I interpret Walter's comments to mean
just this sort of modeling or mimicing of reality, hopefully not, as you
say, a reductionistic statement that the model is the reality.]

>Don Mikulecky http://views.vcu.edu/~mikuleck/
>replies:
>
>The comparison of human thought with artificial neural networks seems like
>the worst form of reductionism! Clearly you don't think this line of
>speculation has any chance of going anywhere?
>
>The issues involved in the question of free will are not ones which lend
>themselves to this kind of mechanistic thinking. I recommend stronglly
>that you look at some of Robert Rosen's writings on the mind/brain
>problem. (see the Rosen bibliography at
>http://views.vcu.edu/complex/
>Respectfully,
>Don Mikulecky
>
>Walter Fritz wrote:
>
>> Some people take it for granted that we have a free will and some say
>> that all is predetermined.
>> It would be well if we could clarify this concept of "free will". By
>> clarifying I mean buildig up a USEFULL concept of free will and one
>> that corresponds with experience.
>> My contribution would be the insights gained while working with
>> artificial intelligent systems.
>>
>> Lets review the mechanisms involved in artificial Intelligent Systems
>> (IS's). (In biological IS's the FUNCTIONS are equivalent but the
>> MECHANISMS are different, because neural fields perform the
>> functions.)
>> In IS's there is a choice. But its past experience limits this choice;
>> in other words, the IS can only choose from those responses (those
>> response rules), that it learned and that therefore exist within its
>> memory. These response rules are in ths shape of: "situation to which
>> it is applicable" -> "action to do".
>> But the choice is more limited than that. Because, of all the existing
>> response rules, the IS puts only those into the "short list", that
>> have (in their situation part) some element in common with the present
>> situation. (The "short list" is a first selection or pre selection).
>> For instance it will not include a response rule pertaining to
>> electricity when looking for the right movement for swimming. No
>> elements of the present swimming situation exist in those response
>> rules on electricity.
>>
>> Once the artificial IS has established the short list, it evaluates
>> all included response rules: How much of the situation part of a
>> response rule exists in the present situation? Are there elements in
>> the present situation
>> not covered by the response rule? Does the response rule have elements
>> not existing in the present situation? What was the past success of
>> the response rule? And so on. Then it makes a choice, at RANDOM, but
>> weighted by this evaluation. It chooses the more likely response rule
>> more often and the less applicable response rule seldom. So a choice
>> exists, but it is a weighted, random choice and past experience limits
>> it. (If this is what you mean by "free will", then the artificial IS
>> has "free will". )
>>
>> What is the corresponding function in the biological IS? Again there
>> are response rules (experiences of acting in a given situation),
>> encoded in the
>> neural fields. Also here the choice is by weighted chance; the same
>> person, in the same situation, does not always choose the same
>> response. Sometimes he or she tries something else. Also here the
>> choice is limited by experience. A person not knowledgable of
>> electronics cannot "chose" the correct condenser for a circuit.
>>
>> So there is a choice, but the Intelligent System, artificial or
>> human, makes the choice by weighted chance, based on experience.
>>
>> This seems to me a possible content of the concept "free will".
>>
>> Comments and corrections are welcome.
>>
>> Walter Fritz
>

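Walter's selection mechanism, as quoted above (pre-selection of a "short list" by shared situation elements, evaluation of coverage and past success, then a weighted RANDOM choice), can be sketched in a few lines of Python. This is my own illustrative reconstruction, not Walter's implementation: the rule contents, the particular weighting formula, and all names here are assumptions made only to show the shape of the idea.

```python
import random

# A response rule pairs "situation to which it is applicable" with "action
# to do", plus a record of past success. (Illustrative representation only.)
# Each rule: (situation: frozenset[str], action: str, past_success: float)

def choose_action(rules, present_situation, rng=random):
    # Pre-selection ("short list"): keep only rules sharing at least one
    # element with the present situation.
    short_list = [r for r in rules if r[0] & present_situation]
    if not short_list:
        return None

    weights = []
    for situation, _action, past_success in short_list:
        overlap = len(situation & present_situation)    # matching elements
        uncovered = len(present_situation - situation)  # elements the rule misses
        extraneous = len(situation - present_situation) # rule elements not present
        # Evaluation (an assumed formula): favor overlap and past success,
        # penalize mismatch; keep weights positive.
        weights.append(max(overlap + past_success - 0.5 * (uncovered + extraneous), 0.1))

    # Weighted random choice: the better-evaluated rule is chosen more
    # often, the less applicable rule seldom -- but never deterministically.
    chosen = rng.choices(short_list, weights=weights, k=1)[0]
    return chosen[1]

rules = [
    (frozenset({"water", "body"}), "kick legs", 2.0),
    (frozenset({"water", "body", "arms"}), "stroke arms", 3.0),
    (frozenset({"wire", "current"}), "open switch", 5.0),  # electricity rule
]
situation = frozenset({"water", "body", "arms"})
# As in Walter's swimming example, the electricity rule never enters the
# short list: it has no elements in common with the present situation.
print(choose_action(rules, situation))
```

Note how this makes Walter's point concrete: the choice is real but doubly limited, first by what is in memory at all, and then by the short-list pre-selection, with the residue decided by weighted chance.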
----------------------------------------------------------------------------
III. Response to Alexei Sharov on explaining will:

Yes! I agree with this. The problem is in explaining will (or "intent" as I
called it in an earlier comment).

Nice web page on biosemiotics.

>Although I agree with Don Mikulecky's response on the message
>of Walter Fritz, I don't think that it is possible to
>close discussion by referencing to Rosen. Because Rosen
>did not provide an ultimate solution for the problem.
>
>Walter Fritz suggested a working definition of free will
>as ability to select among several available options
>using given information on expected outcomes. If several
>options do not differ in expected quality of the result,
>then further selection is random. From this
>definition it follows that computer-based Intellegent Systems
>(IS) have free will.
>
>I don't think that this definition of free will is satisfactory
>because of the following.
>
>1. Initial conditions of the system is a computer with a working
>IS-software plus information on expected outcomes from various
>options. Given these initial conditions, the final state will be
>either pre-determined (if information is sufficient) or random
>with a specific probability distribution. In both case, IS behavior
>does not differ from the behavior of all other physical objects.
>Then all physical objects, including tossed coins have a free will.
>
>2. It is more interesting to separate initial conditions into 2
>portions: (a) computer with a working IS-software and (b) information
>on expected outcomes. Then, information will narrow the choice
>which means that decision was not determined by portion (a). This
>looks like a free will. But this approach is vulnerable because there
>is no good criteria for separating portions (a) and (b) in initial
>conditions. For example, we may consider coordinates of a rock as
>portion (a) and the impulse as portion (b). Then, we are in trouble
>again because the rock has a free will. Its coordinates do not tell
>us where it will move until we know the impulse.
>
>3. The problem of the free will I see in the WILL rather than in
>freedom. There is plenty of freedom in the physical world. If
>there were no freedom, we would not be able to change a thing.
>The problem is to understand the origin of the will. For me
>this problem coincides with the origin of subjectivity and
>the origin of life.
>
>4. Free will has a semiotic nature. It is related to the
>arbitrariness of a sign. Only living organisms are able to
>interpret signs. At this point, computers can only manipulate
>with signs, and interpretations is the task of people. But it
>is possible that computers will be able to interpret signs in
>the future. Then, we will probably consider them 'alive'.
>
>Relationship between life and signs is the area of
>BIOSEMIOTICS. If anybody is interested, you are welcome to
>visit my home page on biosemiotics at:
>http://www.gypsymoth.ento.vt.edu/~sharov/biosem/welcome.html
>
>Recently I submitted a paper on biosemiotics, which is the
>review of Hoffmeyer's book. If you are interested in pre-prints
>please send me your requests and I can send it to you in the
>MS-Word 6.0 format.
>
>-Alexei

----------------------------------------------------------------------------
IV. Response to Nathan Lauster on relation to Quantum Mechanics:

I'll take a crack at this, although I'm sure there are others more
qualified in the QM arena.

My perception of the debate on free-will and QM is that there currently
exists NO explanation of "the measurement problem" that does not leave the
dimensions of a classically describable world, i.e., 3 space and 1 time
dimension. If our basic physics cannot be explained within classical
determinism, then surely it is reasonable to suggest that more complex QM
phenomena emergent within correlated matter (macroscopic QM, as Hameroff
and Penrose describe and suggest may exist in microtubules) may also
require extension beyond worldly dimensions regarding "will" or
"intention." [This is my interpretation - I actually see it as direct
"experience," which appears to us as "will" because of our temporal
perspective.] Hameroff and Penrose refer to "Orch OR," which is
Orchestrated Objective Reduction (the quantum "collapse"). My
interpretation of "orchestrated" is that it implies just this - the
possibility of will being external to our four-dimensional world [or more
accurately, the thought of existence - pure experience of existence -
entering our space-time world]. Now, I think it is a philosophical matter
when one thinks of these external dimensions as "mechanical" or "organic"
or even spiritual. They ARE outside the world which is available to us for
measurement. They ARE untestable [from an objective frame of reference].
They are conceptual models to explain what we CAN see. We could call them
gods and I'm not sure there would be much difference. We each will
construct these external dimensions in the image of our own theories - to
explain what we see and experience.

But back to the point -- I think it is fairly inescapable at this point
that history and randomness are not the only requirements [i.e., sources]
for cause - they are the only parts we can measure.

>I'm uncertain what the distinction of "free will" gains us.
>
>The ability to "select among several available options
>using given information on expected outcomes" seems to me to create a
>useful distinction between model-utilizing entities and simple reactive
>entities, but I do not see any form of escape from determinism on a larger
>scale. Whether filtered through models (decision processes) or not, all
>actions are historically based upon causal principles. Randomness in the
>system typically seems attributable to causal principles on lower
>hierarchical scales (i.e., randomly made decisions may be caused by
>processes which take place at neurological level).
>
>I am uncertain whether or not random activity will truly be found at the
>quantum level, but surely subjecting decision processes to the random
>level indicates no greater "free will" then subjecting the decision
>process to historical (i.e. causal and deterministic) circumstances.
>
>Why concentrate on free will when the only two forces I can think of
>contributing to any phenomena are historical or random?
>
>In curiosity (did I spell that right?),
>-Nathan Lauster
>

----------------------------------------------------------------------------
V. Response to Don Mikulecky's response to Alexei Sharov on "can a system
with free will decide to not have free will?":

A very interesting question here. I think that is precisely what many
determinists have done [i.e., "decided not to have free will"], and I think
we have also seen examples of this phenomenon at the societal level. My
answer is, a system with free will can decide not to exercise free will,
but will probably always have the ability to reverse that decision (another
paradox).

>On the contrary, the reference to Rosen should open discussion. for
>example:Rosen tells us about the "impredicativity" of the mind, the
>existence of self-referential loops. Thus we have the question:
>"Can a system with free will decide to not have free will?"
>Respectfully,
>Don Mikulecky

----------------------------------------------------------------------------

VI. Further comments on "can a system with free will decide to not have
free will?":

> >"Can a system with free will decide to not have free will?"
> Of course! This is the freedom of committing cuicide either
> temporarily (e.g., going to sleep, to vacation...) or
> permanently (using guns, ropes, poisons, etc.).

Sorry, but that's too easy! What about going on existing without free will?
The self-referential tangle here is not escapable.

In the response I gave [above], I implied that the abandonment of free will
IS possible to approach with continued existence. Granted the subsequent
paradoxes remain, but this is inconsequential if one may approach the state
of non free-will infinitely closely. Also, there is no reason to believe
that it cannot be achieved completely, as some religions believe. It is
certainly untestable, but would equate with the "spiritually dead" which
some believe can exist for eternity, or is otherwise described as oblivion.

In any case, the point of this (I hope) is not to debate such theologies,
but to see how well we can approximate the reality we experience and
observe with [using] models.

----------------------------------------------------------------------------

VII. Response to Bruce Edmonds on degrees of free will:

Yes, this is what I was suggesting in response to Don M.'s question about
deciding not to have free-will. I think it is a matter of degree, both in
the extent to which it can be shut down intentionally (as in human
examples) and the degree to which it has evolved in different organisms.
This is consistent with the idea that it has evolved from primitive origins
(perhaps describable in QM terms), which is the subject of a thought piece
I revised for the web recently, under the title "autevolution."

http://www.bayside.net/NPO/EDI/autevol.htm

Comment referenced from: Bruce Edmonds
>5. A pragmatic approach to free-will gets around many of the
>philosophical problems associated with it. The key question becomes
>when is it useful to attribute it to systems and when not. The
>differing degrees of usefulness means that FW is not an all-or-nothing
>concept.

----------------------------------------------------------------------------

VIII. Response to Nathan Lauster on pragmatic use of "free will":

Agreed that the term carries a tremendous amount of baggage, but is it really
most evocative of a defense of humanism? It would seem that the process and
value of human spirit and reason ( humanism?) are not much under attack
here, but supernaturalism or spiritualism, as one attempts to explain the
origins of humanistic phenomena. The idea of free will to many involves
spirit which is rather difficult to define or quantify (your point). I
don't think changing the terms will help this much if we are really trying
to understand and model the whole phenomenon.

>I appreciate the pragmatics of using "free will" as a term to explain
>differences in internalized processes (processes which result in more
>random and subjectively unpredictable behavior). However, might I suggest
>that the term "free will", originating, I believe, in decidedly
>philosophical-religious contexts, carries alot of baggage with it. It
>will probably ALWAYS be associated with some vague defense of Humanism.
>Perhaps another term might make the distinction more meaningful in a
>behaviorist fashion?
>
>Best Luck,
>-Nathan Lauster
>

----------------------------------------------------------------------------

IX. Response to Francis Heylighen on coherent meaning of FW and if animals
and babies have free will:

Here is a selection of comments I wish to respond to, with my comments
inserted (JK). Let me preface this by saying that I understand that a
distinction between FW as a fundamental quantity and FW as an elaboration
of complex systems is what underlies Francis' comments - but I don't think
the distinction will ultimately be supportable, and I think it definitely
leads to some questionable conclusions.

In this issue of a pragmatic system-level definition for FW, I find myself
agreeing in principle with Don M.'s earlier reply: He writes: "Yes, by
reducing the system to one which traditional positivist methods can handle
you will succeed in stripping it of everything meaningful and interesting.
This is what the science wars are all about!" --- Although I think modeling
can be meaningful and interesting (at least for job security) even though
it does seem to require this sort of epistemological "stripping" Don refers
to. Nevertheless, this is certainly the point, and I think the following
may help demonstrate the problem:

>Bruce Edmonds:
>>4. There are grave problems with identifying a coherent meaning of FW
>>in an absolute sense, since there seems to be no way, even in principle,
>>to check for the existence of FW (other than metaphysics). The events of
>>the universe seem equally explicable with and without FW.

Here is where the problem starts. While I too agree with Bruce's pragmatic
approach, I would caution against excluding metaphysics as an attempt to
understand experiential reality. When one includes psychological experience
and studies it metaphysically (i.e., looking for the underlying reality),
the universe is definitely not equally explicable with and without FW (of
course, it is not explicable at all, but that's part of the problem).
Again, as an overworked example, but nevertheless sufficient I believe to
disprove Bruce's assertion of equivalency of explanation, QM became
necessarily metaphysical and, for some at least, required the definition of
"observer" (I argue elsewhere that alternative explanations don't work).
Implicit (and the crux of the problem) with "observer" was the notion of
free will (if nothing else, the decision to observe). So I do not think we
can claim equivalent explanation with and without FW.

I think the biggest problem in assuming such equivalency of explanation is
demonstrated in the construction described by Dr. Heylighen, following from
that assumption.

Francis Heylighen (citing Bruce):
>
> ... determinism is a red herring .. epistemological limitations have
>made it clear that we will never be able to build complete predictive
>models of the world

JK: Completely agree here - that is my point above too.

>.... Practically, the world is indeterministic. Yet,
>this is different from stating that it has "free will".

JK: It is a different statement, but I don't think anyone has yet clarified
the difference, and in the end it may not be different if we are speaking
of the origins of free will.

>2. "Free will " is typically associated with conscious human beings..

JK: OK, but then be extremely careful when we generalize from the typical
to the general. As I pointed out in an earlier comment, since free will is
essentially experiential, it is not observable as such, and we thus only
know it from the human perspective and from observation of its results
(which MAY be indistinguishable from other phenomena). This means that we
cannot "know" of it in other forms, but must make assumptions.

>... rational cognition is based on verbal language, where the rules of
>syntax allow the generation of an infinite number of combinations of
>conceptual units or words.

JK: I think this is definitely over-specific to the human case. That much
at least is demonstrable with non-verbal chimps and gorillas. But beyond
that, is it really provable that cognition requires language and
symbology as a general postulate? If so, does it follow that free will
necessarily requires symbolization? Or is this meant to refer to the human
case, and thus is useful for comparative purposes only?

> These different combinations represent different states of affairs, from
>which the "will" can select one as being the preferred one.

JK: This raises a further level of questioning, I think. I have noticed
this as an apparent basic assumption among most of the PCP material and I
assume it is accepted by most participants. But are we saying, by
definition, that this is the only thing "will" can do? Why, if creative
processes exist at the most fundamental level fo QM, bringing matter into
manifestation from "undefined potential" cannot "will" bring into being a
new thought or idea that is not a strict derivative of pre-existing
options? Is this not otherwise a return to deterministic thinking? (i.e., a
red Herring?)

>"Will" itself does not need to be anything more than a learned or inherited
>selection criterion or selector. The essence is the generation of
>alternatives or potentialities.

JK: OK, here I think the idea of "generating" new alternatives is included,
but somehow distinguished from "will" which does the selecting. I don't see
the necessity of the distinction, but it may be useful for modeling purposes.
I think that in "reality" (i.e., the interpretation of my own experience), a
decision occurs in the present moment and can include both the choice and
the new possibility - i.e., I can decide to do something with completely
unknown results and conditions, but it would set in motion a learning
process which is then involved in bringing the "new thing" into
manifestation. Others, of course, will say this is an illusion, that the
"new thing" existed before my decision and I only decided to discover it.
But there is no evidence that it is an illusion and there is experiential
evidence that it is not, as well as strong analogies at the QM level. The
two are equivalent ONLY if we limit knowledge to what is observable, and
don't include the process of observation itself.

>This is where human cognition differs from
>animal cognition. Animals do have a "will": they can choose among
>alternatives, but the alternatives must be given to them by the
>environment. They cannot imagine situations they have not encountered
>before. In that sense, animals have no (or Bruce might say "less") free
>will.

JK: So then, this is the endpoint of what I have tried to show is an overly
limited line of reasoning. We conclude that we are fundamentally different
from OTHER animals (we are animals): we found science on observation to the
exclusion of experience (because experience is humanly biased), limit the
phenomena of experience and originality to the human case (because that is
the only place we cannot deny them), and then conclude that, in qualities we
cannot observe in other animals, we must be special. This is a "true by
definition" paradigm, and we should be careful not to believe in it other
than as a modeling exercise (if that).
>
>>2. Therefore (if it exists at all) anything corresponding to a
>>meaningful conception of FW emerges during our development from an egg
>>to our adult form.
>
>Although the present simplified theory of Free Will seems to be of the "all
>or none" type, I agree with Bruce that in practice there is some continuity
>involved.

JK: This is more to the point, I think. It is more than just "some
continuity" I suspect. It must surely extend throughout the full range from
fundamental matter to humans.

> Since free will in my view depends on the learning of language,
>that is, conceptual units and syntactical rules to combine them, the more
>units and the more rules an individual knows, the larger the number of
>potentialities that individual can consider and thus the larger the variety
>from which (s)he can choose.

JK: Here I think it is clarified that you are discussing not free will, as
a fundamental thing, but human reasoning and thinking. Defining it this way
makes the earlier remarks reasonable (except for the generalizations).
However, let's not repeat the errors of previous generations who constantly
tried to find the distinction between humans and everything else for
religious purposes. If our religion has now become explanation, we may be
doing the same thing (as Don M. suggests).

>In that sense, the free will of a baby is virtually zero, the one of a
>three year old relatively small,
>and the one of an adult rather larger, depending on the general level of
>education and
>intelligence. Historically, it seems likely that people in primitive
>societies (say prehistory or farmers in the Middle Ages) would have had
>less "free will" than we have.

JK: Here we reach completely opposite conclusions (thus there IS a
difference in explanation & prediction "with or without free will."). Those
who seek spiritual experience in meditation, for example, or perhaps even
drugs, feel freedom and free will - the experience (or illusion, as some
claim) is heightened without question. Similarly, many believe babies and
primitive societies are much freer than we are with all our inhibitions,
fears, and obsessions - many learned. Slowing down, thinking less
obsessively and being attentive to the moment is generally associated with
freedom in these traditions. This leads me to believe there really are
multiple and fundamentally different definitions being used, but I think
what Heylighen is referring to is human reasoning, not free will.

----------------------------------------------------------------------------

X. Comment to Alexei Sharov on conscious computers:

I have the same objection to applying this definition to non-humans (or
babies) and I think the problem goes quite deep - see other comments [above].

Just thought I'd add a purely speculative and fanciful idea about the
future of computers -- I suspect that computers will need to be sensitive
to orchestrated quantum phenomena (to use Hameroff and Penrose's terms), the
rudiments of which may have been recently demonstrated in lab experiments
that succeeded in producing a Bose-Einstein condensate (macroscopic quantum
phenomena). One of the predictions of this new technology is the
possibility of creating coherent matter beams (like lasers). I wouldn't be
surprised if it also leads to conscious computers, though not directly, I'm
sure.

>Francis Heylighen wrote:
>>2. "Free will " is typically associated with conscious human beings, who
>>can consider different alternatives and choose the one they prefer. This
>>assumes the presence of a "rational" or "conceptual" mode of cognition,
>>where the mind can conceive different possible states of affairs which do
>>not necessarily exist (or have existed) in reality. Such rational cognition
>>is based on verbal language, where the rules of syntax allow the generation
>>of an infinite number of combinations of conceptual units or words. These
>>different combinations represent different states of affairs, from which
>>the "will" can select one as being the preferred one.
>
>I agree with most of this, but I think that "free will" should not
>be applied exclusively to humans. Language is really important,
>but all living organisms have "genetic language" which also can
>produce new combinations of signs. Animals can learn new habits
>during their life, some times very complicated. All this is the
>area of biosemiotics
>(http://www.gypsymoth.ento.vt.edu/~sharov/biosem/welcome.html)
>I am not sure that the term "free will" is good because it is
>too heavy loaded with anthropomorphism and humanism. Jesper
>Hoffmeyer suggested the term "semiotic freedom" which I like better.
>
>The question is whether comuters can have semiotic freedom?
>Walter Fritz pointed out that IS can develop simple notions
>and combine them which looks like elements of semiotic freedom.
>Computers can find a suitable solution for a given problem, but
>they are unable to generate new problems (they have no will).
>This is because computers have no internal problems in contrast
>to living organisms which are involved in self-production cycles.
>
>But in the future the situation may change. If computers get
>more freedom in communicating with each other they may become
>directly involved in human economy. When a computer earns
>money, it may use a portion of it to hire humans for reapairs
>or upgrades. May be owning a computer will become as unethical
>as owning a slave.
>
>-Alexei

----------------------------------------------------------------------------
XI. Comment to Arno L. Goudsmit on complementarity paradox:

So it always comes down to this basic complementarity paradox (I too cited
Bohr extensively in referring to this problem because he seemed to be the
most focused on the central meaning of this question with respect to QM).
Yet, how then are we to build a model and/or system--if I am correct in
assuming that is what PCP is primarily aimed at-- that behaves like living
systems - or is alive - if we do not speculate beyond observation? The
heart of the matter to me is experience. We have it, certainly (no sense at
all in denying that), I must assume it evolved along with form [substance
and physical structure - usage of the term has changed since Plato's time]
somehow, and thus exists in all other life to some degree. And at the most
basic level, folks like Bohr, Schroedinger, Wheeler, Penrose, etc. seem to
raise the question whether there is really any reason to believe that quantum
observership is different from rudimentary experience. Thus (in this view)
observership IS experience, and it is a property of the universe. But since
we can't observe it (it IS observation, and so can't observe itself) we can
only experience it. Now, modern science has been based on observation (or
at least formally so). Hence, we don't have definitions for presumed "real"
quantities that are primarily experiential. Is there any way to avoid the
need to define purely experiential (and non-objective) elements in the
model? Of course I don't know how to do this, so the argument IS
philosophical, as Tom Abel said, but nonetheless, I think, important.

>Date: Thu, 11 Sep 1997 15:14:58 +0000 (WE)
>From: "Arno L. Goudsmit" <Goudsmit@MAILBOX.UNIMAAS.NL>
>Subject: free won'
>To: prncyb-l%bingvmb.bitnet@HEARN.nic.SURFnet.nl
>
>Arno Goudsmit:
>I think the beautiful thing with artificial things is that you
>can give them names as you like it. E.g. you may call them
>intelligent, attribute them experiences, learning processes, etc.
>This is all a matter of free will of the student, who is free
>to define his terms (with the only constraint of consistency).
>However, to call a living being free, attribute to it experiences,
>learning processes, call it intelligent, etc., is something
>entirely different: for a living being has its own perspective,
>point of view or what you want.
>
>If there is some free will etc. in a living being, then this is to
>say that it does NOT fit into an observer's definitions, but instead
>imposes upon the observer some facts of life (and of living) that
>may not be consistent with the formalism the observer is cherishing.
>This is not a limitation of the observer, it is due
>to a complementarity between observation and definition, about which
>Niels Bohr has been quite explicit already.
>