Thank you for such a considered and full response, which is most
valuable. Both your response and that of Alexei Sharov reveal that there
appears to be little agreement on the meaning of the most basic
words/concepts used in the contemporary cybernetic literature.
I had previously assumed that it was the social scientists who lacked
agreement on the meaning of the word "control". As words are the tools of
thinking, rigorous analysis is not possible without unambiguous language,
and so it becomes vital to identify how words can be misunderstood if we are
either to develop the discipline of cybernetics or to spread it into related fields.
I have been trying to introduce cybernetic principles to the very faddish
and rapidly growing topic of corporate governance by including its ideas in
my surveys of this recently developed discipline. My first survey was
published in the first academic journal dedicated to the topic,
'Corporate Governance: An International Review', 5:4, pp. 180-205, 1997, to
celebrate its fifth year of publication. The survey was republished in the
'Corporate Governance' volume of the 'History of Management Thought' series, R.I.
Tricker, ed., Ashgate Publishing, London, 2000. A revised and updated
version, which was translated into French to launch the first Francophone
journal on the topic, can be downloaded from
http://papers.ssrn.com/paper.taf?abstract_id=221350. In both
these surveys I assumed that the cybernetic literature used the word
control in the sense defined by Tannenbaum (1962: 5).
Stafford Beer asks "What is control all about?" on page 25 of his chapter
on 'Concepts and Terms' in 'Brain of the Firm', 2nd edition, 1995. He
answers the question by stating that it is a "stimulus" which acts as "an
interference which affects the system's operations in some way" (p.26). No
goal-directedness is required and so it is consistent with the definition
of Tannenbaum. Beer explicitly discusses the self-consciousness of systems
and your concern about "freedom". Beer states (p.27) that "Typically our
thinking about control becomes muddled because we ourselves are very
advanced systems, and we introspect too much".
When I met with Stafford in 1996 and he reviewed another article of mine
that was later published in 'Corporate Governance', he suggested that I
should publish in journals dedicated to "Systems thinking". To do this I
will need to cite your contribution to the Encyclopedia of Physical Science &
Technology (3rd ed.), (Academic Press) and the work of Ashby and Beer to
indicate how the word control can have different meanings. Can you please
provide me with a full citation, including page numbers and year of
publication, of your contribution?
Francis states that "control without feedback is a delusion", and it is
exactly this point that I am trying to get across to corporate governance
scholars, company directors, government regulators, and public policy
advisors who have not heard of Ashby's law of requisite variety or the
impossibility of regulating complexity without supplementation. Many firms
fail because they exercise control in the Tannenbaum/Beer sense of the word
without relating commands to either constructive goals or
feedback. Feedback in centralized unitary command hierarchies is subject
to substantial informational problems, as analysed by Downs (1967). This can
make government or private sector hierarchies, in firms and elsewhere,
subject to the problems Francis mentions that were noted by Val Turchin in
his book "The Inertia of Fear".
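Ashby's law of requisite variety mentioned above can be illustrated with a
toy program. This is my own sketch, not from the cybernetic literature
itself; the outcome rule and the numbers are invented purely to show that a
regulator whose repertoire of responses is smaller than the set of
disturbances cannot hold the essential variable in its goal set.

```python
# Toy illustration of requisite variety (my own invented outcome rule):
# a response compensates a disturbance exactly when it matches it,
# and the goal is to hold the outcome at 0.

def regulate(disturbance, response):
    return disturbance - response  # goal outcome is 0

disturbances = [0, 1, 2, 3]  # four possible perturbations

# Regulator A has requisite variety: one distinct response per disturbance.
full_repertoire = {d: d for d in disturbances}
outcomes_a = {regulate(d, full_repertoire[d]) for d in disturbances}

# Regulator B can only respond with 0 or 1: its variety is insufficient.
limited = {d: min(d, 1) for d in disturbances}
outcomes_b = {regulate(d, limited[d]) for d in disturbances}

print(outcomes_a)  # {0}        -> every disturbance is compensated
print(outcomes_b)  # {0, 1, 2}  -> outcomes escape the goal set
```

However the limited regulator assigns its two responses, at least two of
the four disturbances must produce outcomes away from the goal; only
variety can absorb variety.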
Deficient information systems can make firms uncompetitive and/or
unprofitable because control is exercised incorrectly in decisions to make
or buy, as noted by Alchian & Demsetz in 'Production, Information Costs, and
Economic Organization', American Economic Review, 1972, 62(5), pp.
777-95. Management may command action on a goal-directed basis to survive
but not have access to immediate feedback information as to whether their
commands/control actions are successful. There are a host of other
administrative commands that simply represent a "stimulus" to the
system, as when a child or adult plays with a new device to see
what happens. In a firm, commanded activity may not provide immediate, or
even any, feedback information. This can make it impossible to make
corrections to allow variables to be regulated or, in the words of Beer, to
reduce variety. The problem is exacerbated by the unpredictability of the
responses of people to stimuli.
So please do invent a new word, as you proposed, for actions that are
initiated without goal-directedness and/or without feedback information.
My research interest is in how to design firms, organisations and social
institutions in general that can achieve self-governance. To achieve this
objective, social systems need to accept the contrary, changeable and
unpredictable responses of human behaviour to a given stimulus. So I very
much look forward to your suggestions as to a suitable word to
describe the act of giving a stimulus to a system.
At 05:14 AM 2/3/2001, Francis Heylighen wrote:
>>Thank you and Cliff for your paper which I found most informative. On the
>>first page you define the word "control" as "maintenance of a goal by
>>active compensation of perturbations".
>>This definition means that control is dependent upon a goal being known
>>and the need for feedback communication. In social organisations a goal
>>may not be known or agreed upon and feedback can be problematical or
>>non-existent. It is possible to direct, command or order action on personal
>>whims without an operational goal or without the need to obtain immediate
>>feedback on the outcome. Regulation on the other hand requires feedback
>>to allow "active compensation of perturbations".
>>From my reading of Ashby's introduction to cybernetics your
>>definition of control is how he uses the word "regulate".
>>From your text your definition of control would also seem to mean regulate?
>>Do you identify any difference between these two words?
>>Might it be useful, at least in the social sciences, to make a
>>distinction between the two words? That is between when a goal is not
>>known and/or feedback is not required and when feedback is required?
>You are pointing out some very important issues, that are easily confused
>because the terms used are ambiguous. You note correctly that in our paper
>we use "regulation" and "control" more or less interchangeably. We tend to
>prefer the term "control", while Ashby tends to call the same phenomenon
>"regulation". Neither term is really ideal, since they both have
>connotations different from the phenomenon we try to describe. The
>problem with "regulation" for me is that it appears very conservative or
>static, merely suppressing variations or fluctuations from whatever the
>"normal situation" is, without positive aim or objective. The problem with
>"control" is that most people understand it as domineering, subjecting,
>having power over, again not necessarily with any positive aim or goal.
>The advantage of "control" is that it is more dynamic than "regulation"
>since you can also exert control by making somebody do something very
>"irregular". Moreover, "control" as a term tends to be more common in the
>To make clear that you also need a progressive force, aiming to achieve as
>yet non-existent situations, I have liberally used the term
>"goal-directedness". This has another disadvantage, though, namely that
>people tend to see a goal as a specific end state:
>>A goal is a preferred state of the system. But societies and biological
>>populations have no preferred state. Nevertheless they can improve their
>>performance (adaptation and adaptability) without setting goals.
>For me "control" or "goal-directed behavior" CAN be aimed at continuing
>improvement, without any fixed end-point. I have tried to explain this in
>the paper by noting that a goal can be defined in such a way that it
>encompasses continuing progress or change. Originally, the passage
>explaining this was a little longer, but because of space constraints
>Cliff shortened it. My point is that a goal is indeed defined as a
>preferred state (or more generally set or fuzzy set of states), but that a
>system's state is fundamentally a distinction, and a distinction is a
>relation, not an independent "state of the world". The fundamental
>distinction is between "better" and "worse". Whenever a system can make a
>distinction between "better" (a situation it prefers) and "worse" (a
>situation it would rather avoid), you can say it is goal-directed in the
>most general sense.
>The heat-seeking missile is an elementary example, where the missile will
>prefer any move that brings it closer to its target, while the target
>itself may move in the most dynamic and irregular fashion. The missile
>still has an "end-state" in that once it has reached the target, its
>activity stops.
>A perhaps better example is when you have the (relational) goal of making
>ever more money on the stock exchange. This is a goal where you will never
>reach an end-state where you can stop. Yet, activity directed at this goal
>can be described by the basic cybernetic control mechanisms of buffering
>(keeping safe investments as a reserve in case things go wrong),
>feedforward (speculating on stocks that you think will go up) and feedback
>(selling stocks that have performed below your expectations).
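[The three mechanisms Francis names here (buffering, feedforward and
feedback) can be caricatured in a toy program of my own; the assets, prices
and trading rules are all invented purely for illustration of the
open-ended "ever more money" goal:]

```python
# Toy portfolio pursuing an open-ended goal (my own invented example).
# "SAFE" is a reserve asset (buffering), "UP" was bought on a prediction
# that it would rise (feedforward), and the sell rule reacts to observed
# underperformance (feedback).

prices = {"SAFE": [100, 100, 100],
          "UP":   [10, 12, 15],
          "DOWN": [10, 9, 7]}

portfolio = {"SAFE": 100, "UP": 10, "DOWN": 10}  # stock -> purchase price
cash = 0.0
history = []

for t in range(3):
    # feedback: after observing performance, sell anything now trading
    # below its purchase price, cutting losses before errors accumulate
    for stock in [s for s in portfolio if s != "SAFE"]:
        if prices[stock][t] < portfolio[stock]:
            cash += prices[stock][t]
            del portfolio[stock]
    history.append(cash + sum(prices[s][t] for s in portfolio))

print(history)  # wealth keeps rising: [120.0, 121.0, 124.0]
```

[There is no end-state here: the control loop simply prefers each new
wealth level to be higher than the last.]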
>Wouldn't you call such activity "goal-directed"? Although the goal here is
>purely dynamic, and most people think spontaneously about static goals
>when they hear the word "goal", I don't really see a better term. Or does
>anybody know of a term that describes behavior with an in-built preference
>for certain outcomes over others, but no in-built end-state?
>Note that such preference does not even assume that you "know" what you
>prefer, as in the money-making example. Imagine that you are browsing
>through a collection of pictures (e.g. on the web, in an artbook or in a
>museum), and that there are some you like more, others you like less. You
>will act so as to spend a lot of time looking at the pictures you like,
>while you will quickly turn the page on the pictures you don't like. This
>is goal-directed behavior in my definition, although you would not in any
>way be able to formulate which kind of pictures you would rather see. Your
>goal is not so much "to see pictures A, B and C", but "to derive esthetic
>pleasure from the material". Since you cannot a priori say which are the
>pictures you will enjoy most, so that you could look them up in the index
>and go straight to the corresponding page (feedforward), your only control
>strategy is feedback: you go to the next page, and if you don't like the
>picture, you turn the page again (negative feedback); otherwise you stay
>on the page.
>Shann also raises the issue of control without feedback. As we have tried
>to make clear in the paper, thinking that you can control something
>without feedback is a delusion. In the short term you can establish some
>kind of a command or dominance merely by buffering and feedforward, but
>neither mechanism is perfect, and the errors they let slip through will
>accumulate until you end up with an "error catastrophe", i.e. the system
>has deviated so far from its ideal state that it gets destroyed.
>As Val Turchin explains in his book "The Inertia of Fear", this is the
>real reason why totalitarian systems such as Soviet communism eventually
>fail: because they lack error-correcting feedback, mistakes accumulate
>until the system is no longer viable. Of course this is a simplification:
>the Soviet system did have some feedback from the society back to the
>authorities (even the staunchest bureaucrats would notice that the 5
>year-plans did not achieve their goals and therefore would attempt some
>corrections), otherwise it wouldn't have survived as long as it did. But
>this feedback was severely deficient compared to the systematic feedback
>mechanism that forms the heart of democracy (ineffectual politicians are
>voted out) and of the market (products that are in demand increase in
>price, thus stimulating producers to supply more of them).
>In conclusion, the different cases that Shann and Alexei distinguish
>(with/without feedback, with/without goal) are for me not really distinct,
>but all part of the general phenomenon that I call control or
>goal-directedness. Sometimes goals are more explicit or dynamic, sometimes
>less, but in the phenomena we discussed there is always some "preference
>function" that would rather achieve one situation than another one.
>Feedback too can be more or less prominent, but any system that wishes to
>survive in the long term needs some form of feedback.
>>For example, Tannenbaum (1962: 5) defined 'control' as "any process in
>>which a person or group of persons or organisation of persons determines,
>>i.e. intentionally affects, what another person or group or organisation
>>will do". This definition provides a word/concept to describe a
>>situation where no standard of performance is required.
>This doesn't look like a good definition to me. You can affect people in
>all kinds of ways (e.g. I can kick somebody in the butt), but that doesn't
>mean you control them (e.g. my victim can kick back twice as hard).
>Control to me implies some kind of on-going state of "affecting", relative
>to an enduring goal. Note that the goal is anyway implicit in the word
>"intentionally" (what is an intention but a goal?). The advantage of
>this "short-term" view of affecting, is that you might conceivably do it
>without feedback, but I would anyway not call this "control".
>>Can you or anybody else on this list provide references in the cybernetic
>>literature which makes these distinctions, and defines the concepts and
>>language, that I find useful to analyse the information and control
>>systems (cybernetic architecture) of social organisations?
>I would like to see such references myself. In my experience, the
>literature is as confused in its terminology as our discussion here. In
>the end I might get tempted to invent a new Latin or Greek term (like
>"cybernetics" itself was invented, or more precisely re-invented, by Wiener).
>>I agree with Shann that control can exist without a goal (unless we
>>stretch the meaning of a goal beyond its limits). I view control as an
>>ability of an agent to change its behavior. Neither
>>deterministic nor stochastic systems are agents because they have no
>>control of their behavior. Watt's regulator is not an agent, and it has no
>>control of its behavior. The pressure in the tank is regulated but there is
>>no control here. It is an engineer who has control of Watt's regulator, and
>>he has a goal of maintaining the pressure.
>Implicit in your definition of control I do see some form of goal. If you
>say that an agent can change its behavior, you implicitly assume that the
>agent has some intention to change, since you a priori exclude
>deterministic or stochastic systems, of which I can find many examples
>that do change their behavior, although they may not "want" to do it.
>Watt's regulator does not have control over its behavior, I agree. But I
>would say that it has control over the behavior of the steam engine that
>it is regulating, because it can change that behavior guided by its
>in-built goal. It doesn't have control over this goal, though, and
>therefore its behavior from the outside can be seen as deterministic. But
>if you would perfectly know all the goals that steer the engineer's
>behavior, you might claim that the engineer too is behaving
>deterministically, and does not have any control.
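[The negative feedback by which a Watt-style governor holds the engine near
its in-built goal, while having no control over that goal itself, can be
sketched as follows. This is my own minimal construction, not Francis's or
Ashby's; the speeds and the gain are invented:]

```python
# Minimal negative-feedback sketch of a governor (invented numbers).
# The governor steers the engine toward GOAL_SPEED but has no control
# over the goal: that was set by the engineer.

GOAL_SPEED = 100.0  # set by the engineer; the governor cannot change it
GAIN = 0.5          # how strongly the valve responds per unit of error

speed = 140.0       # engine starts over-speed (a perturbation)
for step in range(20):
    error = speed - GOAL_SPEED
    speed -= GAIN * error  # closing the valve reduces the speed

print(round(speed, 3))  # converges toward the goal: 100.0
```

[Seen from outside, the loop is perfectly deterministic, yet it is
goal-directed in Francis's sense: it actively compensates perturbations
relative to a preferred state.]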
>I am starting to suspect that the whole discussion about "free will" or
>"freedom" is so confused because these concepts only make sense RELATIVE
>TO A GOAL, while this aspect is completely ignored in the traditional
>discussions that merely oppose determinism and indeterminism
>(stochasticity). I have always thought that (in)determinism is a red
>herring, since the world in practice is always partly predictable, partly
>unpredictable. "In principle" predictability, like in Laplace's view of
>the universe, has no meaning whatsoever in concrete situations.
>When we speak about "freedom" in practice, we mean "control", and as I
>have argued "control" means the ability to do what you WANT to do, i.e.
>act upon things according to your own goals or intentions rather than
>according to the constraints imposed by the environment. Without goals,
>you wouldn't have any preferences, and therefore you will merely drift
>along (stochastically or deterministically), following the push and pull
>of your environment, without any directed intervention.
>Francis Heylighen <email@example.com> -- Center "Leo Apostel"
>Free University of Brussels, Krijgskundestr. 33, 1160 Brussels, Belgium
>tel +32-2-6442677; fax +32-2-6440744; http://pespmc1.vub.ac.be/HEYL.html
>Posting to firstname.lastname@example.org from Francis Heylighen <email@example.com>
P.O. Box 266 Woollahra, Sydney, Australia, 1350
Ph: +612 9328 7466 office; +612 9327 8487 home; Fax: +612 9327 1497;
Life long E-mail:
Papers at: http://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=26239
with other papers & book at http://cog.kent.edu/library.html
Posting to firstname.lastname@example.org from Shann Turnbull <email@example.com>