Re: [pcp-discuss:] Goal-directedness, control and freedom

From: Alexei Sharov (sharov@VT.EDU)
Date: Fri Mar 02 2001 - 16:53:50 GMT


    Francis,

    >For me "control" or "goal-directed behavior" CAN be aimed at continuing
    >improvement, without any fixed end-point. I have tried to explain this in
    >the paper by noting that a goal can be defined in such a way that it
    >encompasses continuing progress or change. Originally, the passage
    >explaining this was a little longer, but because of space constraints
    >Cliff shortened it. My point is that a goal is indeed defined as a
    >preferred state (or more generally set or fuzzy set of states), but that a
    >system's state is fundamentally a distinction, and a distinction is a
    >relation, not an independent "state of the world". The fundamental
    >distinction is between "better" and "worse". Whenever a system can make a
    >distinction between "better" (a situation it prefers) and "worse" (a
    >situation it would rather avoid), you can say it is goal-directed in the
    >most general sense.

    It seems that we mean the same thing but call it by different names.
    You prefer "goals" and I prefer "values". In fact, I also use the term
    "goal", but as a special case of values with a fixed end-point
    (according to a subset of characteristics). Tell me, why don't you like
    the term "value"? When a system evaluates its states in terms of
    "better" and "worse", this should be called a "value". Apparently you
    have some negative feelings about this term, and I would like to know
    what they are.

    >The heat-seeking missile is an elementary example, where the missile will
    >prefer any move that brings it closer to its target, while the target
    >itself may move in the most dynamic and irregular fashion. The missile
    >still has an "end-state" in that once it has reached the target, its
    >activity stops.
    >
    >Wouldn't you call such activity "goal-directed"? Although the goal here is
    >purely dynamic, and most people think spontaneously about static goals
    >when they hear the word "goal", I don't really see a better term. Or does
    >anybody know of a term that describes behavior with an in-built preference
    >for certain outcomes over others, but no in-built end-state?

    To me, goal-seeking is a higher-level behavior than in-built
    preference. An in-built preference does not involve anticipation
    (modeling) of future states; it simply invokes an acceptance/rejection
    reaction. Goal-seeking should involve modeling of future states. If the
    heat-seeking missile uses modeling (I expect it does), then its
    behavior is goal-seeking. But if no modeling is involved, then I would
    not call it a goal.
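
    Just to make the distinction concrete, here is a rough Python sketch
    (the code and names are mine, purely illustrative): the first agent
    only reacts to whichever immediate option looks "better", while the
    second predicts a future state of the target and chooses its move with
    respect to that prediction.

    # Purely illustrative; heat_at and predict_target_position are
    # hypothetical functions supplied by the caller.

    # In-built preference: react to whatever looks "better" right now,
    # with no anticipation of future states.
    def reactive_step(position, heat_at):
        left, right = heat_at(position - 1), heat_at(position + 1)
        return position - 1 if left > right else position + 1

    # Goal-seeking: model (predict) a future state of the target and
    # choose the move that brings the agent closer to that prediction.
    def model_based_step(position, predict_target_position, steps_ahead=3):
        predicted = predict_target_position(steps_ahead)
        candidates = (position - 1, position + 1)
        return min(candidates, key=lambda p: abs(p - predicted))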

    >Shann also raises the issue of control without feedback. As we have tried
    >to make clear in the paper, thinking that you can control something
    >without feedback is a delusion. In the short term you can establish some
    >kind of a command or dominance merely by buffering and feedforward, but
    >neither mechanism is perfect, and the errors they let slip through will
    >accumulate until you end up with an "error catastrophe", i.e. the system
    >has deviated so far from its ideal state that it gets destroyed.

    I agree. As a biologist, I prefer to use the term "perception" instead
    of feedback. By perception I mean not just detecting the state of the
    environment but also detecting (and classifying) the agent's own state.
    Primitive agents do not distinguish between internal and external
    perception.
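
    The "error catastrophe" point can be illustrated with a toy simulation
    (a sketch under my own assumptions, not a model of any real
    controller): a feedforward correction that is only approximately right
    lets errors slip through, and without perception of the resulting
    deviation those errors accumulate without bound.

    import random

    def final_deviation(use_feedback, steps=1000):
        state, target = 0.0, 0.0
        for _ in range(steps):
            disturbance = random.gauss(0.1, 0.5)  # what the environment does
            state += disturbance - 0.1            # feedforward cancels only the average
            if use_feedback:
                state -= 0.5 * (state - target)   # perceive the deviation and correct it
        return abs(state - target)

    # Without feedback the residual errors random-walk away from the
    # target; with feedback the deviation stays bounded.
    print(final_deviation(False), final_deviation(True))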

    >Alexei:
    >> I view control as an ability of an agent to change its behavior. Neither
    >>deterministic nor stochastic systems are agents because they have no
    >>control of their behavior. Watt's regulator is not an agent, and it has no
    >>control of its behavior. The pressure in the tank is regulated but there is
    >>no control here. It is an engineer who has control of Watt's regulator, and
    >>he has a goal of maintaining the pressure.
    >
    >Implicit in your definition of control I do see some form of goal. If you
    >say that an agent can change its behavior, you implicitly assume that the
    >agent has some intention to change, since you a priori exclude
    >deterministic or stochastic systems, of which I can find many examples
    >that do change their behavior, although they may not "want" to do it.

    I agree.

    > Watt's regulator does not have control over its behavior, I agree. But I
    > would say that it has control over the behavior of the steam engine that
    > it is regulating, because it can change that behavior guided by its
    > in-built goal. It doesn't have control over this goal, though, and
    > therefore its behavior from the outside can be seen as deterministic. But
    > if you would perfectly know all the goals that steer the engineer's
    > behavior, you might claim that the engineer too is behaving
    > deterministically, and does not have any control.

    There is a quantitative difference between Watt's regulator and the
    engineer (in terms of complexity). From a pragmatic point of view, it
    is easier to understand and predict the behavior of Watt's regulator by
    knowing the exact processes that make it work. But knowing the
    processes in the head of the engineer (imagine we knew all of them)
    would not make it easier to understand and predict his behavior.
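
    Schematically (a hypothetical sketch, not real governor code), the
    asymmetry looks like this: the regulator's goal is a single built-in
    number, and it is the engineer who controls that number; nothing of
    comparable simplicity could be written down for the engineer's own
    goals.

    class WattRegulator:
        def __init__(self, setpoint):
            self.setpoint = setpoint   # in-built goal; the regulator cannot change it
        def correct(self, measured_speed):
            # proportional correction of the engine toward the in-built goal
            return 0.2 * (self.setpoint - measured_speed)

    class Engineer:
        def retune(self, regulator, new_setpoint):
            regulator.setpoint = new_setpoint   # control over the regulator's goal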

    >I am starting to suspect that the whole discussion about "free will" or
    >"freedom" is so confused because these concepts only make sense RELATIVE
    >TO A GOAL, while this aspect is completely ignored in the traditional
    >discussions that merely oppose determinism and indeterminism (stochasticity).

    Here I agree with you entirely! The same is true of discussions about
    what life is. Depending on our goals, the same system can be considered
    living or non-living.

    >I have always thought that (in)determinism is a red herring, since the
    >world in practice is always partly predictable, partly unpredictable. "In
    >principle" predictability, like in Laplace's view of the universe, has no
    >meaning whatsoever in concrete situations.

    Sometimes it is better to use deterministic models, sometimes
    stochastic models, and in some cases we consider a system to be an
    agent (this is simply the next step in our models). We can go further
    and select an appropriate level of agent model. Some agents are simple
    (e.g., they use perception and no modeling), while others are very
    complex (involving language as a modeling medium).

    >When we speak about "freedom" in practice, we mean "control", and as I
    >have argued "control" means the ability to do what you WANT to do, i.e.
    >act upon things according to your own goals or intentions rather than
    >according to the constraints imposed by the environment. Without goals,
    >you wouldn't have any preferences, and therefore you will merely drift
    >along (stochastically or deterministically), following the push and pull
    >of your environment, without any directed intervention.

    I see! Yes, the word "control" may mean two different things:
    (1) the ability to change something, i.e., to have control over it;
    (2) being constrained by another system.
    Is there a way to avoid confusing these two meanings?

    -Alexei

    -------------------------------------------------
    Dr. Alexei Sharov Research Scientist
    Dept. of Entomology, Virginia Tech, Blacksburg, VA 24061
    Tel. (540) 231-7316; FAX (540) 231-9131; e-mail sharov@vt.edu
    Home page: http://www.ento.vt.edu/~sharov/alexei.html

    ========================================
    Posting to pcp-discuss@lanl.gov from Alexei Sharov <sharov@vt.edu>


