I cannot speak for Don nor do I understand Rosen well enough to discuss his
ideas, so I do not know if my ideas are consistent with theirs. I expect
that ideas will resemble one another more closely as they approach a valid
representation of reality, and reflect valid methods in relation to that
reality. The task is one of devising models which are as close to
experience as possible - as well as logically coherent and consistent. But,
like physicians who are puzzled over a case, I would see the proper
approach to be one not just of comparing existing diagnostic theories but
of examining the patient more carefully, as well as looking for
interdisciplinary clues and insights.
According to Rosen:
>Suppose we want to determine...whether a particular pattern x
>manifests a certain feature [P]...The conventional answer is to produce a
>meter, or feature-detector; another different system (y which does not
>equal x) which recognizes the predicate P, and hence in particular
>determines the truth or falsity of P(x). . . .
I would begin with the primary feature-detector as direct human perception,
the phenomenology of events - of signs and symptoms, as it were - with
which we are all individually familiar. While we try to standardize this
direct experience, and to reach consensual validation about it, such a
metric or standardization is a concept of a more abstract order, not the
immediately given experience or "data".
>Suppose we want to determine...whether a particular pattern x
>manifests a certain feature [P]...
Such a determination would appear to be an empirical question, not simply a
matter of the manipulation of abstract objects, e.g. logic and mathematics;
in the latter case it would be a matter of definition. Suppose we want to
determine if a patient's blood pressure is elevated beyond the normal. We
will want to take several readings under conditions known to be optimal.
After a number of observations we will reach a level of consistency on the
basis of which we will make a decision to accept the result.
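The procedure just described - take repeated readings under known optimal conditions, and accept the result once the observations reach consistency - can be sketched as a simple convergence test. This is only an illustrative sketch, not clinical software; the function name and the thresholds are my own assumptions for the example.

```python
# Illustrative sketch of "repeat until consistent, then accept".
# min_n and max_spread are assumed thresholds, not clinical standards.
from statistics import mean, stdev

def accept_reading(readings, min_n=3, max_spread=5.0):
    """Accept the mean of repeated readings once we have at least
    min_n observations whose sample standard deviation falls below
    max_spread (e.g. mmHg for blood pressure)."""
    if len(readings) < min_n:
        return None  # not enough observations yet
    if stdev(readings) <= max_spread:
        return mean(readings)  # readings converge: accept the result
    return None  # readings diverge: a different strategy is required

# Example: systolic readings that converge, then a divergent series
print(accept_reading([142, 138, 140]))  # consistent, so the mean is accepted
print(accept_reading([120, 160, 95]))   # divergent, so no result is accepted
```

The divergent case is exactly the "other kind of problem" noted below: when repeated findings spread rather than converge, no threshold choice rescues the procedure, and the problem itself must be re-identified.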
(A problem may arise where repeated findings diverge rather than converge,
but that is another kind of problem, more intractable, which requires
identification and a very different strategy. Many complex social and
political problems have this characteristic, and do not converge on any
clear definition or solution.)
> We see an unpleasant incipient
>infinite regress in the process of information. . . .
This would be an indicator of faulty method, a sort of software problem.
>There are several [two] ways out of this situation. The first is somehow
>to have "independent knowledge" of what the initial feature is; some kind
>of extraneous list or template which characterizes what is being recognized
>at the first stage . . .
Yes. Insofar as the primary observations can only be those of independent
observers this would appear to be the case. (The word "template" suggests
that some preconceived idea is already being brought to bear, which is not
part of the primary process of perception. Probably this is also a matter
of degree.)
>But where does such "independent knowledge" come from? At least, it
>comes from outside the system itself, and thus would violate the basic
>tenet of formalization, that there is no such "outside".
It is important to keep certain distinctions clearly in mind.
Some information initially comes via perception from outside the system or
organism. Only after this information has been processed and elaborated
within some formal/conceptual system of thought do we usually speak of
"knowledge".
So, in a sense, knowledge is already bound by the formal requirements of
its conceptualization. Since this is consensual it is to that extent no
longer completely independent. (Thus "independent knowledge" may be a bit
of an oxymoron - a contradiction in terms.)
>Would it be true that if a person did not absolutely insist on a formalism,
>or pure objectivity, then this "from outside" would not be a problem? Is
>this basically your point? . . .
Yes. Formalisms are relative (to the data), not absolute. Objectivity is
not pure, but depends, among other things, on an intersubjective consensus
related to data and experience.
>Don has defined complexity as occurring whenever you have two or
>more formalisms that can not be mapped into each other...
I also understood this to mean or perhaps imply, inter alia, that formal
models are only capable of rendering limited, if well defined, aspects of
any real world situation. The world itself is more complex than we can
represent in any single map, so we use special-purpose maps (as geographers
use demographic, economic etc. maps). My own view is that complexity is
inherent in the world, and now and again we may identify complexity in
relationships that we have reason to think exist and may connect but also
lie beyond our current formalisms. In particular, complexity inheres where
convergent solutions cannot be found, where complications are found to
multiply in time, as they may do in models which reflect real world
situations.
>because infinity finitely fragmented still yields infinities,
>finite fragmentation can not resolve the infinite regress.
While valid enough, I would suspect that this is a truism which indicates
no more than an inappropriate method.
Again, this is all off the top of my head. I take this as an opportunity to clarify my
own thinking on matters of fundamental importance, and will follow any
further discussion with great interest.
Cheers and best wishes!
Bruce B.
Bruce Buchanan, M.D.
buchanan@hookup.net
"We are all in this together"