[pcp-discuss:] Goal-directedness, control and freedom

From: Francis Heylighen (fheyligh@vub.ac.be)
Date: Thu Mar 01 2001 - 18:14:57 GMT

    Shann:

    >Thank you and Cliff for your paper which I found most informative.
    >On the first page you define the word "control" as "maintenance of a
    >goal by active compensation of perturbations".
    >
    >This definition means that control is dependent upon a goal being
    >known and the need for feedback communication. In social
    >organisations a goal may not be known or agreed upon and feedback
    >can be problematical or non-existent. It is possible to direct,
    >command or order action on personal whims without an operational
    >goal or without the need to obtain immediate feedback on the
    >outcome. Regulation on the other hand requires feedback to allow
    >"active compensation of perturbations".
    >
    >From my reading of Ashby's introduction to cybernetics your
    >definition of control is how he uses the word "regulate".
    >
    >From your text your definition of control would also seem to mean regulate?
    >Do you identify any difference between these two words?
    >
    >Might it be useful, at least in the social sciences, to make a
    >distinction between the two words? That is between when a goal is
    >not known and/or feedback is not required and when feedback is
    >required?

    You are pointing out some very important issues that are easily
    confused because the terms used are ambiguous. You note correctly
    that in our paper we use "regulation" and "control" more or less
    interchangeably. We tend to prefer the term "control", while Ashby
    tends to call the same phenomenon "regulation". Neither term is
    really ideal, since they both have connotations different from the
    phenomenon we try to describe. The problem with "regulation" for me
    is that it appears very conservative or static, merely suppressing
    variations or fluctuations from whatever the "normal situation" is,
    without positive aim or objective. The problem with "control" is that
    most people understand it as domineering, subjugating, having power
    over, again not necessarily with any positive aim or goal. The
    advantage of "control" is that it is more dynamic than "regulation"
    since you can also exert control by making somebody do something very
    "irregular". Moreover, "control" as a term tends to be more common in
    the cybernetics-related literature.

    To make clear that you also need a progressive force, aiming to
    achieve as yet non-existent situations, I have liberally used the
    term "goal-directedness". This has another disadvantage, though,
    namely that people tend to see a goal as a specific end state:

    Alexei:
    >A goal is a preferred state of the system. But societies and biological
    >populations have no preferred state. Nevertheless they can improve their
    >performance (adaptation and adaptability) without setting goals.

    For me "control" or "goal-directed behavior" CAN be aimed at
    continuing improvement, without any fixed end-point. I have tried to
    explain this in the paper by noting that a goal can be defined in
    such a way that it encompasses continuing progress or change.
    Originally, the passage explaining this was a little longer, but
    because of space constraints Cliff shortened it. My point is that a
    goal is indeed defined as a preferred state (or more generally a set or
    fuzzy set of states), but that a system's state is fundamentally a
    distinction, and a distinction is a relation, not an independent
    "state of the world". The fundamental distinction is between "better"
    and "worse". Whenever a system can make a distinction between
    "better" (a situation it prefers) and "worse" (a situation it would
    rather avoid), you can say it is goal-directed in the most general
    sense.
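
    To make this concrete, here is a toy Python sketch (my own
    illustration, with invented names and numbers; nothing like it
    appears in the paper). The "goal" is nothing but a comparison
    between situations; no list of target states appears anywhere:

        def score(situation):
            # Any evaluation will do; the agent need not be able to
            # describe the states it is aiming for.
            return situation["gain"] - situation["risk"]

        def prefers(new, old):
            # The whole goal is this relational distinction:
            # "better" versus "worse".
            return score(new) > score(old)

        def act(current, options):
            # Goal-directed behavior in the most general sense: move
            # whenever some option is preferred to the present
            # situation; there is no end-state where activity must stop.
            for option in options:
                if prefers(option, current):
                    return option
            return current

        state = act({"gain": 0, "risk": 0},
                    [{"gain": 2, "risk": 1}, {"gain": 1, "risk": 3}])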

    The heat-seeking missile is an elementary example, where the missile
    will prefer any move that brings it closer to its target, while the
    target itself may move in the most dynamic and irregular fashion. The
    missile still has an "end-state" in that once it has reached the
    target, its activity stops.
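
    In toy form (again an invented illustration, with arbitrary
    numbers), the whole mechanism is one feedback loop plus a stopping
    condition:

        import random

        target, missile = 50.0, 0.0
        while abs(target - missile) > 1.0:  # activity stops at the target
            target += random.uniform(-3.0, 3.0)  # irregularly moving target
            # feedback: whichever move reduces the distance is "better"
            missile += 2.0 if target > missile else -2.0
        print("target reached")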

    A perhaps better example is when you have the (relational) goal of
    making ever more money on the stock exchange. This is a goal where
    you will never reach an end-state where you can stop. Yet, activity
    directed at this goal can be described by the basic cybernetic
    control mechanisms of buffering (keeping safe investments as a
    reserve in case things go wrong), feedforward (speculating on stocks
    that you think will go up) and feedback (selling stocks that have
    performed below your expectations).
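
    A toy sketch of one such round of trading (the rules and
    percentages are invented purely for illustration) shows the three
    mechanisms side by side:

        def rebalance(cash, holdings, forecast, actual):
            # One round of trading toward the open-ended goal "more money".
            reserve = 0.2 * cash  # buffering: a safe reserve in case
                                  # things go wrong
            budget = cash - reserve
            # feedforward: buy what you predict will go up, before any
            # outcome is known
            picks = [s for s, f in forecast.items() if f > 0]
            for s in picks:
                holdings[s] = holdings.get(s, 0.0) + budget / len(picks)
            if not picks:
                reserve += budget  # nothing looks promising: stay buffered
            # feedback: sell whatever performed below your expectations
            for s in list(holdings):
                if actual.get(s, 0.0) < forecast.get(s, 0.0):
                    reserve += holdings.pop(s)
            return reserve, holdings

        cash, holdings = rebalance(100.0, {}, {"A": 0.1, "B": -0.2},
                                   {"A": 0.05})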

    Wouldn't you call such activity "goal-directed"? Although the goal
    here is purely dynamic, and most people think spontaneously about
    static goals when they hear the word "goal", I don't really see a
    better term. Or does anybody know of a term that describes behavior
    with an in-built preference for certain outcomes over others, but no
    in-built end-state?

    Note that such a preference does not even assume that you "know" what
    you prefer, the way you do in the money-making example. Imagine that
    you are
    browsing through a collection of pictures (e.g. on the web, in an
    artbook or in a museum), and that there are some you like more,
    others you like less. You will act so as to spend a lot of time
    looking at the pictures you like, while you will quickly turn the
    page on the pictures you don't like. This is goal-directed behavior
    in my definition, although you would not in any way be able to
    formulate what kind of pictures you would rather see. Your goal is
    not so much "to see pictures A, B and C", but "to derive esthetic
    pleasure from the material". Since you cannot say a priori which
    pictures you will enjoy most (if you could, you would look them up in
    the index and go straight to the corresponding page: feedforward),
    your only control strategy is feedback: you go to the next page, and
    if you don't like the picture, you turn the page again (negative
    feedback), otherwise you stay and enjoy.
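
    Written out as a toy loop (with a random number standing in for the
    taste you cannot articulate), this is about as simple as a control
    strategy can get:

        import random

        def enjoyment(page):
            # A stand-in for a preference you cannot formulate in
            # advance: you only discover it by looking.
            return random.random()

        for page in range(1, 21):  # browse the collection page by page
            if enjoyment(page) < 0.5:  # negative feedback: you don't
                continue               # like it, so you turn the page
            print("lingering on page", page)  # otherwise stay and enjoy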

    Shann also raises the issue of control without feedback. As we have
    tried to make clear in the paper, thinking that you can control
    something without feedback is a delusion. In the short term you can
    establish some kind of a command or dominance merely by buffering and
    feedforward, but neither mechanism is perfect, and the errors they
    let slip through will accumulate until you end up with an "error
    catastrophe", i.e. the system has deviated so far from its ideal
    state that it gets destroyed.
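
    A crude toy model (with invented tolerances) makes the accumulation
    visible: let buffering and feedforward absorb only part of each
    perturbation, and watch what the residue does with and without
    feedback:

        import random

        state, tolerance = 0.0, 10.0
        for step in range(10000):
            disturbance = random.gauss(0.0, 1.0)
            state += 0.2 * disturbance  # buffering and feedforward are
                                        # imperfect: 20% slips through
            # Remove the next two lines (the feedback) and the residual
            # errors random-walk until the tolerance is crossed.
            if abs(state) > 1.0:
                state -= 0.5 * state  # feedback: compensate the
                                      # observed deviation
            if abs(state) > tolerance:
                print("error catastrophe at step", step)
                break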

    As Val Turchin explains in his book "The Inertia of Fear", this is
    the real reason why totalitarian systems such as Soviet communism
    eventually fail: because they lack error-correcting feedback,
    mistakes accumulate until the system is no longer viable. Of course
    this is a simplification: the Soviet system did have some feedback
    from the society back to the authorities (even the staunchest
    bureaucrats would notice that the five-year plans did not achieve
    their goals and therefore would attempt some corrections), otherwise
    it wouldn't have survived as long as it did. But this feedback was
    severely deficient compared to the systematic feedback mechanism that
    forms the heart of democracy (ineffectual politicians are voted out)
    and of the market (products that are in demand increase in price,
    thus stimulating producers to supply more of them).

    In conclusion, the different cases that Shann and Alexei distinguish
    (with/without feedback, with/without goal) are for me not really
    distinct, but all part of the general phenomenon that I call control
    or goal-directedness. Sometimes goals are more explicit or dynamic,
    sometimes less, but in the phenomena we discussed there is always
    some "preference function" that would rather achieve one situation
    than another one. Feedback too can be more or less prominent, but any
    system that wishes to survive in the long term needs some form of
    feedback.

    Shann:
    >For example, Tannenbaum (1962: 5) defined 'control' as "any process
    >in which a person or group of persons or organisation of persons
    >determines, i.e. intentionally affects, what another person or group
    >or organisation will do". This definition provides a word/concept
    >to describe a situation where no standard of performance is required.

    This doesn't look like a good definition to me. You can affect people
    in all kinds of ways (e.g. I can kick somebody in the butt), but that
    doesn't mean you control them (e.g. my victim can kick back twice as
    hard). Control to me implies some kind of on-going state of
    "affecting", relative to an enduring goal. Note that the goal is
    anyway implicit in the word "intentionally" (what is an intention
    other than a goal?). The advantage of this "short-term" view of
    affecting is that you might conceivably do it without feedback, but
    I would anyway not call this "control".

    >Can you or anybody else on this list provide references in the
    >cybernetic literature which makes these distinctions, and defines
    >the concepts and language, that I find useful to analyse the
    >information and control systems (cybernetic architecture) of social
    >organisations?

    I would like to see such references myself. In my experience, the
    literature is as confused in its terminology as our discussion here.
    In the end I might be tempted to invent a new Latin or Greek term
    (like "cybernetics" itself was invented, or more precisely
    re-invented, by Wiener).

    Alexei:
    >I agree with Shann that control can exist without a goal (unless we
    >stretch the meaning of a goal beyond its limits). I view control as
    >an ability of an agent to change its behavior. Neither
    >deterministic nor stochastic systems are agents because they have no
    >control of their behavior. Watt's regulator is not an agent, and it has no
    >control of its behavior. The pressure in the tank is regulated but there is
    >no control here. It is an engineer who has control of Watt's regulator, and
    >he has a goal of maintaining the pressure.

    Implicit in your definition of control I do see some form of goal. If
    you say that an agent can change its behavior, you implicitly assume
    that the agent has some intention to change, since you a priori
    exclude deterministic or stochastic systems, of which I can find many
    examples that do change their behavior, although they may not "want"
    to do it. Watt's regulator does not have control over its behavior, I
    agree. But I would say that it has control over the behavior of the
    steam engine that it is regulating, because it can change that
    behavior guided by its in-built goal. It doesn't have control over
    this goal, though, and therefore its behavior from the outside can be
    seen as deterministic. But if you knew perfectly all the goals
    that steer the engineer's behavior, you might claim that the engineer
    too is behaving deterministically, and does not have any control.
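
    In toy form (with invented constants): the set-point is fixed by
    the engineer, yet it is the governor that controls the engine's
    behavior, steering it by that in-built goal:

        SET_POINT = 100.0  # the in-built goal: chosen by the engineer,
                           # not by the governor

        def governor(speed):
            # Roughly what the flyball mechanism implements: the further
            # the speed falls below the goal, the wider the valve opens.
            return 0.5 + 0.02 * (SET_POINT - speed)

        speed = 80.0
        for _ in range(200):
            valve = governor(speed)     # the governor controls the engine,
            speed += 2.0 * valve - 1.0  # whose speed responds to the valve
        print(round(speed, 2))  # settles at a goal the governor cannot change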

    I am starting to suspect that the whole discussion about "free will"
    or "freedom" is so confused because these concepts only make sense
    RELATIVE TO A GOAL, while this aspect is completely ignored in the
    traditional discussions that merely oppose determinism and
    indeterminism (stochasticity). I have always thought that
    (in)determinism is a red herring, since the world in practice is
    always partly predictable, partly unpredictable. "In principle"
    predictability, as in Laplace's view of the universe, has no
    meaning whatsoever in concrete situations.

    When we speak about "freedom" in practice, we mean "control", and as
    I have argued "control" means the ability to do what you WANT to do,
    i.e. act upon things according to your own goals or intentions rather
    than according to the constraints imposed by the environment. Without
    goals, you wouldn't have any preferences, and therefore you would
    merely drift along (stochastically or deterministically), following
    the push and pull of your environment, without any directed
    intervention.

    _________________________________________________________________________
    Francis Heylighen <fheyligh@vub.ac.be> -- Center "Leo Apostel"
    Free University of Brussels, Krijgskundestr. 33, 1160 Brussels, Belgium
    tel +32-2-6442677; fax +32-2-6440744; http://pespmc1.vub.ac.be/HEYL.html
    ========================================
    Posting to pcp-discuss@lanl.gov from Francis Heylighen <fheyligh@vub.ac.be>


