Comments on my Humanity 3000 statement

Francis Heylighen (fheyligh@VUB.AC.BE)
Mon, 11 Jan 1999 17:04:44 +0100


There have been a lot of reactions, from people on different mailing lists,
to my Humanity 3000 statement. I will now reply to the most important
reactions one by one.

Let me first note that the statement was restricted to two pages, and
therefore it was impossible to address all the implications and
clarifications that some of these quite bold statements would otherwise
have required. I hope that the following comments will add some of the
necessary context and details.

Tom Abeles <tabeles@tmn.com>:
>Heiner Benking posted Francis Heylighen's draft for the Humanity 3000
>conference and asked for responses. I have a few questions:
>
>1) What happens if Ray Kurzweil is right and before the next millennium
>the earth sees silicon based intelligence which can pass the Turing test
>and which will have the access and speed which will be far superior to
>carbon based processors?

In the "global brain" view to which I subscribe (see
http://pespmc1.vub.ac.be/SUPORGLI.html), people (carbon) and computers
(silicon) will live in ever closer symbiosis, each supporting the other
in the domains where it is stronger. For example,
though computers have a better memory for data, and faster processing
power, humans are better at recognizing patterns, using experience and
interacting with the world. In the end, the borders between carbon and
silicon will become ever less relevant, and the one will merge naturally
into the other. Therefore, there is no real competition between humans and
machines, and no danger of the one "taking over" from the other.

>2) What if, as in Gibson's short story, The Swarm, a collective
>intelligence, much like Star Trek's "Borg", emerges or appears on earth?
>And, true to form, sees the individual as an inefficient and unnecessary
>vehicle for its activities?

According to the theory of metasystem transitions
(http://pespmc1.vub.ac.be/MSTT.html), individuals being "integrated" into a
higher order system, such as the global brain, will not lose their
individuality, but rather be stimulated to develop even further their
unique capabilities which differentiate them from others (a metasystem
transition means integration + differentiation). Although "integration"
implies that you have to give up some freedoms, basically those to harm the
other individuals that together with you form a superorganism, you will
gain many more new freedoms, becoming able to do things that you could not
even imagine before.

The "swarm" or "Borg" model of collective intelligence is a very poor one
compared to the "global brain", since it is inspired by insect societies,
such as bee hives or ant nests, where the members have no individuality to
start with. The fact that termites are "integrated" into a termite colony
does not make them less individual or less free than their "non-integrated"
cousins, the cockroaches. Although the termite colony has a higher
collective intelligence than a bunch of cockroaches, neither intelligence
is in the least comparable to that of a single human individual, much less
to a group of individuals working effectively together.

>3) What if our "humanness", however defined, is not restricted to a
>carbon based body or confined to a small blue planet and its finite
>physical resources? How close will we be to Douglas Dixon's world of "Man
>After Man"?

I don't know Douglas Dixon. I agree that our humanness is neither
restricted to a carbon-based body (see earlier comments about
carbon-silicon symbiosis), nor to the planet Earth. In the global brain
view, the essence is the pattern of organization, of thinking, and that can
be realized as well on other planets or in other materials.

>4) What happens if the paradigm of complexity, when fully developed,
>shows that Newtonian models are only very special cases of some larger
>way to perceive the universe?

I think that has already happened. Newtonian models are only applicable in
those specific cases where the system is linear, deterministic, perfectly
knowable, reversible, etc. Very few systems are like that. However, to
model non-Newtonian systems, you need much more complex modelling
techniques and more cognitive powers. That is what a global brain may
potentially offer.

Heiner Benking:
>>2. The development of a universal "world view" that ties all our
>>knowledge together and shows us how we fit into the larger whole of
>>evolution,

> Why universal? This can be very dangerous; who can create, see, handle
>such a universal thing? I know you have published a nice brochure on WORLD
>VIEWS but we have to go beyond only asking questions and/or writing
>something beyond the stars and beyond human or any capacity. Why not
>go for bouquets or concerts, world-view compositions, as we have proposed
>at presentations at the KONRAD LORENZ INSTITUTE in Altenberg, Vienna? Why
>not go for switching systems, where models and systems, very flexible
>designs, prototypes and production systems are already in place, sometimes
>for more than 20 years (like the ICC from I. DAHLBERG), and add other
>open spaces or grids like (Tony Judge's FC)?

A "world view" in my definition (which is maybe not the most common one)
encompasses a multiplicity of perspectives, views, theories and models. By
definition, you cannot capture the "world" in a single model. A "world
view" therefore must be a *metamodel*, a model of all the possible models
that you could make, and how they are related. In that way, the world view
would allow you to choose the best model for the particular circumstances
and purpose, and a different model for different circumstances and
purposes. But these models would not be independent, but related to each
other by a clear, overarching framework. As a metaphor, you might compare
the models to different 2-dimensional projections ("perspectives" or
photos) of an object, and the "world view" to a 3D, virtual reality
representation of that object. Of course, a true world view would be much
more complex, but I hope you get the point.

>Next question: shouldn't we create the "meaning" ourselves? Isn't telling
>right from wrong one of these old dualistic models, and should your or my
>"knowbot" tell you!? The dangers are wide open and have long been discussed.

We will create the meaning since we will be creating the world view
ourselves. The world view will also not solve every specific problem or
guide you in any specific situation: there is plenty of room to take
decisions and find more specific meaning yourself. However, when making
decisions that affect the whole of society (e.g. Should we allow human
cloning? Should we develop nuclear energy? Should we stimulate abstinence
rather than condoms to combat the spread of AIDS?) we need a consensual
framework to reach decisions with a minimum of conflict.

With such profound, long-term issues, you cannot rely on polls, political
decisions or court cases to set long-term policy, or to tell
"right" from "wrong". I don't mean to imply that "right" and "wrong" exist
as absolute categories, only that every decision implies that you
distinguish better from worse alternatives, and this assumes a value
system. The only alternatives are conflict and inaction.

>>The single most important opportunity is the emergence of a world-wide,
>>intelligent, computer network, a "global brain",

> I think this is very dangerous - the founder of your center LEO APOSTEL
>wanted to work with CHILDREN on these issues and not "delegate" to an
>intransparent global "godfather". We like the model of GAIA, as Sir John
>Trevellyan supported Lovelock and ladies like Margulis and Satorius in
>bringing such ideas like GAIA home.

The global brain is not a "godfather", it is the collective intelligence
emerging from us, individuals, using our own ideas and insights and
interacting with each other, as supported through a flexible and
transparent computer network. There is no contradiction with the Gaia
concept. On the contrary, the global brain can be seen a nervous system for
Gaia, the living system formed by the Earth and all its human, anima and
vegetal inhabitants. On the other hand, the global brain concept goes
farther than Gaia if you assume, as I noted earlier, that life and
intelligence could spread to other planets.

>It is naive to believe that such a system as pointed out by Francis would
>help us to overcome the overload, because it is not the volume outside but
>the mission visions, ideas, order, structures, inside which get us into
>problems, into confusion, when we think we know but do not.
>Do you want a top down approach, where an impersonal and intransparent
>"global whatever" develops world views for us, and we just need to believe
>and adopt them!? I believe we need a bottom up approach, just as Robert Jungk
>(Bob) told me to continue this work with children. Designing a world view
>is one thing, but to inhabit it, embody it, is the challenge.

As I said, the global brain is an emergent system, that is, not a system
imposed from above, but a system self-organizing out of its constituents
(bottom-up). The emergence of a world view, to me, seems like one of the
first, fundamental stages in this emergence of a global brain. The computer
network is there to help us cope with the incredible amount of ideas, data
and knowledge that is available and that needs somehow to be integrated
into this encompassing conceptual framework. We might be able to achieve
that without a computer network, but it would just take much more time.
Given the risks for conflicts and ecological destruction due to the lack of
an agreed-upon system of values, I think we had better not lose time in
building such an integrating world view.

>>Borderless exchange and discussion of ideas would make it easier to reach
>>a supranational consensus on values and global policy,
> Again this dream and nightmare of borderless. We have to have borders and
>boundaries, even walls in order to reach to the other place. We should
>create walls and borders in order to take them away, always change the
>position and so go beyond this trap of thinking flat, another way of the
>old paradigm of dualism.

I only speak here of borders in terms of national borders. All knowledge
is based on distinctions, that is, boundaries that separate one phenomenon
from another, so you do need boundaries. On the other hand, different
perspectives on the same phenomenon will draw different boundaries. Only a
world view in my meaning of the term will be able to both accept all these
different ways of cutting up the world, and provide an integrating framework
that tells us how they fit together.

> The danger is to create another "religion" - a super-religion, if you
>design something abstract and "superhuman". Please watch your metaphors
>and models. Who and where, in your model of the body and the cell, are the
>mind and the heart? Are you going to build an ants' world only? We
>might need to have a lot of alternative ideas to try in parallel; this
>looks very dogmatic or fatalistic, even when on the other hand very
>scientific and sophisticated.

It depends on what you call a "religion". In the original Latin meaning of
the word, religion means that which binds things together, and that is what
a world view/global brain should do. In its more conventional meaning, a
religion is a body of belief that should be accepted without questioning.
In that sense, the world view should definitely *not* be a religion:
constant questioning and looking for alternatives or improvements should be
one of its defining characteristics.

You might compare it to the scientific method, which does not tell us how
the world is, but simply how we can build ever better models of that world,
without any of them ever being absolute or definitive. In that sense, the
presence of many alternative ideas ("perspectives") in parallel is a basic
requirement for the world view I envisage. However, these perspectives
should not just exist independently: it should be possible to compare them,
and decide which one is best (better) in which circumstances, like the
scientific method gives you a criterion to (sometimes) choose between rival
theories.

>My vision is that of an "inhabited extension virtual reality" (a
>cyberspace which allows broader and real navigation, navigation where you
>have shared maps and models humans can understand and live with), where
>you share and move and feel as you do in natural space. Where you learn to
>translate and transcend and are not fixed to either one model or a
>superhuman ghost, who was designed by someone outside the "transparency"
>corridor.

This vision is rather similar to the one I propound, except that you don't
speak about the need for shared values. It is good to support different
models, but if *you* always choose one and *I* choose another, and the two
models make opposite recommendations for action, then either we get into
conflict, or the whole thing freezes into inaction. The value system should
help us to resolve our differences, without a priori imposing one
perspective as the only correct one.

John McLaughlin:
> I don't know; if
>people understand clearly that their interests are opposed to one another,
>won't that just lead to intensified, "smart" warfare to overcome their
>rationally-understood opponents? Or is there perhaps some means by which
>intelligence by itself will vaporize those opposing interests, so that
>people will come to understand that they really don't oppose one another,
>rightly understood? I don't know how that would work; so far in human
>history, I've seen little evidence that it has. Could you show me?

Josep Ll. Ortega:
>I think the problem is not only how to access and process too much
>information, but how to act on it in a purposeful and goal-oriented way.
>We ourselves as organisms have millions of cells, but act single-mindedly.
>How can society or a global brain do the same while respecting individual
>freedom and fulfilment? I think this is not only a theoretical problem;
>the future of humanity depends probably on finding some kind of practical
>answer.

This is exactly the point I was making: increased intelligence, or support
for building different models, may help reduce conflicts, but it may also
exacerbate them. That is why we *moreover* need a shared value system.
But, as Josep notes, it is as yet anything but obvious how to achieve
that. My only point is that whatever process we use to build such a value
system, an intelligent, global network is likely to make that process
easier.

>>3. Supranational integration and global management meet with huge
>>resistance, because of the intrinsic selfishness of nations and groups, who
>>are unwilling to give up their privileges for the common good.
>
>This is a political statement which could be a little simplistic. There are
>some very cogent criticisms of uncritical globalization, especially in the
>sense of some groups or corporations taking all profit of the global economy
>while remaining outside democratic control.

When people speak about globalization nowadays, they are basically speaking
of globalization of markets and the capitalistic system, and this obviously
has many negative effects. The globalization I speak of should also
encompass transnational integration on the level of social and ecological
policy, thus giving us a tool to counteract these negative effects.

Yehezkel Dror:
>But
>I wonder whether considering the future of humanity in the year 3000 is
>really worth serious effort (I express myself as delicately as I can).
>What will be in the year 3000 is not a matter of uncertainty but
>inconceivability surely beyond present human thinking. Enough to consider
>the ideas around in the year 1000 and compare them to present realities.
>Or, to move from empiric to theoretic arguments, (a) only those who do not
>really think in nonlinear terms can presume to discuss the year 3000, all
>the more so as (b) probably the very foundations of our thinking, such as
>their cognitive-neural basis, will be changed, with pre-change thinking on
>post-change situations being logically nonsense.
>
>Not that such thinking on 3000 will do any harm, but the same brain
>resources devoted to the year 2100 may do some good.

I tend to agree. Given the speed and radicality of change, we can hardly
hope to look further than 2100. But it was not me who chose the "Humanity
3000" theme.

_________________________________________________________________________
Francis Heylighen <fheyligh@vub.ac.be> -- Center "Leo Apostel"
Free University of Brussels, Krijgskundestr. 33, 1160 Brussels, Belgium
tel +32-2-6442677; fax +32-2-6440744; http://pespmc1.vub.ac.be/HEYL.html