Re: From WWW to Super-Brain

Bruce Buchanan (buchanan@HOOKUP.NET)
Thu, 5 Jan 1995 17:28:46 -0500


Francis writes (Thu, 5 Jan 1995) -

>I have been doing some thinking during the holidays, and came up with the
>following short essay, sketching a possible scenario for the next
>metasystem transition leading to a super-being via the present World-Wide
>Web. . . .

>As always, comments and criticisms are very welcome.

I have read this essay with considerable interest. It seems to me notable
for its lack of consideration of logical contradictions (existential
aspects would be harder still to deal with), which may need to be addressed
at higher meta-levels in terms of valuation criteria.

Am I mistaken in thinking that values - i.e. higher level control and
valuation criteria - may be required to structure and inform the
meta-levels?

I would not, of course, suggest the imposition of particular values. But I
would think it important to recognize the likelihood that incompatible
functions at lower levels will cause conflicts if any synthesis or activity
is to occur at meta-levels. Most of us are already quite familiar with
software problems. A recent article in Scientific American has suggested
that larger-scale projects are likely to become intractable (cf. the Denver
Airport baggage-handling problems). These are possibilities which seem
inherent in multilevel structures and functions, and which might best be
recognized at the planning stage.

In your paper you do mention values -

> . . . [The] controlled development of knowledge
> requires a metamodel: a model of how new models are created and evolve. Such
> a metamodel can be based on an analysis of the building blocks of knowledge,
> of the mechanisms that combine and recombine building blocks to generate new
> knowledge systems, and of a list of values or selection criteria, which
> distinguish "good" or "fit" knowledge from "unfit" knowledge. . . .

I guess my question might be: how, in their turn, are these selection
criteria themselves selected, and how are they implemented? It's the old
question: who guards the guardians?

Is it envisaged that such a super-brain would serve all purposes
indiscriminately? Would it have no purposes of its own, other than
housekeeping? How would it be distinguished from the technologies that
compose it? In short, in what sense could it be called a "brain" rather
than a dynamic repository or transit station? Would it be a technology
which sets values without due consideration?

I would be very interested in the views of others, and perhaps discussion.

Bruce Buchanan