A very good question! (of course the answer may depend on our
contexts)
> Consider a certain card game that involves the concept of trump
> suits and ..... <<stuff deleted>> ..... There are no
> self-references or indeterminants (in any particular context).
> In fact, the game was designed so as to always be computable
> within some larger meta context.
I think this is the key point. It matters whether (and when) a
meta-context (1) exists, (2) exists and is determined, (3) exists and
can be formalised, or (4) exists, can be formalised, and can be
computed from the original context set.
> An interesting project might be to try to design a game that
> cannot be formalized or put on a computer. I would venture
> to guess, that such a "game" would have to involve multiple
> simultaneous contexts (or simply some kind of indeterminant
> within a particular context). Imagine the case where both
> clubs and diamonds were both trump...and a 6 of clubs and a
> 6 of diamonds are laid down (and there is no other rule in
> the game to resolve the trick). Who wins? A computer
> wouldn't be able to decide. In real life, the two people
> would probably start to argue...thus changing the context
> of the game...and settle the dispute by some other means...
> something the computer couldn't do because it was not pre-
> programmed from the start to anticipate such an event.
I think we should try to keep the concepts of indeterminacy and
formalisability (or computability) separate. In the above situation
there is no consistent meta-context, so at this level the rules do
not determine the outcome (for either a computer or a human). Both a
computer and a human could, however, compute the total possibilities
for this situation. Formalisms for coping with indeterminacy have
been around for many years (modal logic, Markov chains, Post
production systems, etc.).
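
To make the point concrete, here is a small sketch (in Python; the
card representation and function names are my own invention, not part
of any actual game) of how a program could compute the total
possibilities for the 6-of-clubs / 6-of-diamonds trick without the
rules singling out a winner:

  from dataclasses import dataclass

  TRUMP_SUITS = {"clubs", "diamonds"}   # both suits declared trump

  @dataclass(frozen=True)
  class Card:
      rank: int
      suit: str

  def possible_winners(cards_played):
      # Return every card the rules allow to win the trick.  A single
      # result means the rules determine the outcome; more than one
      # means they leave it indeterminate.
      trumps = [c for c in cards_played if c.suit in TRUMP_SUITS]
      candidates = trumps if trumps else list(cards_played)
      best_rank = max(c.rank for c in candidates)
      return [c for c in candidates if c.rank == best_rank]

  print(possible_winners([Card(6, "clubs"), Card(6, "diamonds")]))
  # -> both cards: the computation terminates perfectly well, it just
  #    returns two candidates, which is exactly the indeterminacy the
  #    rules leave open.
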
> If human intelligence involves the ability to jump out of a
> system, change the context, and resolve something that wasn't
> previously resolvable, then it seems to me that an artificial
> device would also have to be able to do this if it was to be
> considered intelligent.
I agree. I do think there are distinct limits to our ability to do
this.
> I'm not aware of any existing
> computational devices that can do this.
There are now several. They have a language with reflective
operators, so they can induce (and deduce) theorems about their own
deductive (and inductive) inference machinery (a toy sketch of the
flavour follows the references). Below is a (fairly arbitrary)
selection of some references:
Costantini, S.; Dell'Acqua, P.; Lanzarone, G.A. (1992): Reflective
Agents in Metalogic Programming. Lect. Notes Comp. Sci. 649, 135-147.

Konolige, K. (1988): Reasoning by Introspection. In: Meta-Level
Architectures and Reflection. (Eds: Maes, P.; Nardi, D.) Elsevier
Science, Amsterdam, 61-74.

Perlis, D. (1985): Languages with Self-Reference I: Foundations.
Artif. Intell. 25, 301-322.

Perlis, D. (1988): Languages with Self-Reference II: Knowledge,
Belief and Modality. Artif. Intell. 34, 179-212.
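
Purely to give the flavour of this (my own toy Python code, not the
machinery of any of the systems cited above): if the rule base is
ordinary data, the program can run its inference procedure on a
description of itself and feed the resulting meta-level facts back
into further reasoning.

  object_rules = [
      ({"rain"}, "wet_ground"),
      ({"wet_ground"}, "slippery"),
  ]
  facts = {"rain"}

  def forward_chain(rules, facts):
      # Exhaustive forward chaining over simple propositional rules.
      derived = set(facts)
      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= derived and conclusion not in derived:
                  derived.add(conclusion)
                  changed = True
      return derived

  # Reflective step: the program inspects its own rule base and
  # asserts facts *about* it (here, what it could ever conclude).
  meta_facts = {"concludable(%s)" % c for _, c in object_rules}

  # Such meta-level facts can then drive further (meta-)rules.
  meta_rules = [({"concludable(slippery)"}, "warn_about_rule_base")]

  print(forward_chain(object_rules, facts))    # object-level conclusions
  print(forward_chain(meta_rules, meta_facts)) # conclusions about the rules
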
----------------------------------------------------------
Bruce Edmonds
Centre for Policy Modelling,
Manchester Metropolitan University, Aytoun Building,
Aytoun Street, Manchester, M1 3GH. UK.
Tel: +44 161 247 6479 Fax: +44 161 247 6802
http://bruce.edmonds.name/bme_home.html