
The Constructability of Artificial Intelligence - Bruce Edmonds

2. The Constructability of TMs


Many others have argued against the validity of the CTT when it is interpreted as applying to physical processes. I will not do this - my position is that there are reasons to suppose that any attempt to disprove the physical CTT is doomed (Edmonds, 1996). What I will do instead is argue against the inevitability of being able to construct arbitrary TMs in a deliberate manner. To be precise, I claim that, whatever our procedure of TM construction is, there will be some TMs that we cannot construct - or, equivalently, that any effective procedure for TM construction will be incomplete.

The argument to show this is quite simple: it derives from the fact that the definition of a TM is not constructive - it is enough that a TM could exist; there is no requirement that it be constructable.

This can be demonstrated by considering a version of Turing's `halting problem' (Turing, 1936). In this new version the general problem is parameterised by a number, n, to give the limited halting problem: the problem of deciding whether a TM of length less than n, with an input of length less than n, will terminate (call a machine that decides this TM(n)). The definition of the limited halting problem ensures that for any particular n it is fully decidable, since it is a finite function which could be implemented as a simple look-up table.
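
For illustration only, the following Python sketch shows what TM(n) amounts to for a fixed n: a finite look-up table over all machine/input pairs shorter than n. The table's contents are simply assumed here; the argument below is precisely that no general effective method exists for producing it.

    # A toy sketch (not from the paper): TM(n) as a finite look-up table.
    # `halting_table` maps (machine_encoding, input_string) pairs, both
    # binary strings of length < n, to True (halts) or False (does not).
    # Only finitely many such pairs exist, so the table is finite; its
    # contents are assumed given, not effectively constructed.

    def make_limited_halting_decider(n, halting_table):
        def tm_n(machine_encoding, input_string):
            assert len(machine_encoding) < n and len(input_string) < n
            return halting_table[(machine_encoding, input_string)]
        return tm_n

    # Hypothetical table for n = 2 (binary strings of length 0 or 1 only):
    table = {("", ""): True, ("", "0"): True, ("", "1"): True,
             ("0", ""): True, ("0", "0"): True, ("0", "1"): False,
             ("1", ""): False, ("1", "0"): True, ("1", "1"): False}
    tm_2 = make_limited_halting_decider(2, table)
    print(tm_2("1", "0"))   # True, by simple look-up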

However, there is no general and effective method of finding the TM(n) that corresponds to a given n. Thus, whatever method we have for constructing TMs from specifications (even with clever recursion, meta-level processing, thousands of special cases, combinations of different techniques, etc.), there will be an n for which we cannot construct TM(n), even though TM(n) is itself computable. If this were not the case we would be able to use this method to solve the full halting problem: take an n greater than the lengths of both the TM and its input, construct the corresponding TM(n), and then run it for the answer. A more complete formal proof may be found in the appendix.
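
The reduction in the previous paragraph can be made concrete with a short sketch. Here `construct_tm' stands for the hypothetical effective procedure that, given n, builds TM(n); if it worked for every n, the full halting problem would become solvable, contradicting Turing's result.

    # Sketch of the reduction: assume `construct_tm(n)` effectively returns
    # the decider TM(n).  Then the full halting problem would be solvable.

    def solve_full_halting(machine_encoding, input_string, construct_tm):
        # Choose n strictly larger than both lengths so that the pair
        # falls within the scope of the limited halting problem.
        n = max(len(machine_encoding), len(input_string)) + 1
        tm_n = construct_tm(n)      # the assumed, and impossible, step
        return tm_n(machine_encoding, input_string)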

What this shows is that any deterministic method of program construction will have some limitations. What it does not rule out is that some method, in combination with input from a random `oracle', might succeed where the deterministic method failed. The above argument then no longer holds: one can easily construct a program which randomly chooses a TM out of all the possibilities, with a probability that decreases exponentially with its length (using some suitable encoding into, say, binary), and this program could pick any TM. What one has lost in this transition is, of course, the assurance that the resulting TM is according to one's desire (WYGIWYS - what you get is what you specified). When one introduces random elements into the construction process, one (almost always) has to check that the results conform to one's specification.
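
A minimal sketch of such a random construction, assuming TMs are encoded as binary strings: first choose a length with geometrically decreasing probability, then choose an encoding of that length uniformly, so every TM has a non-zero chance of being picked; the result must then be checked against the specification afterwards. The names here are illustrative placeholders.

    import random

    def random_tm_encoding():
        # Geometric choice of length: each extra bit halves the probability,
        # so longer encodings are exponentially less likely but never excluded.
        length = 1
        while random.random() < 0.5:
            length += 1
        # Uniform choice among the 2**length encodings of that length.
        return "".join(random.choice("01") for _ in range(length))

    def generate_until_accepted(meets_spec):
        # Post-hoc check: `meets_spec` is a placeholder for whatever test
        # of conformance to the specification one has.
        while True:
            candidate = random_tm_encoding()
            if meets_spec(candidate):
                return candidate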

However, the TT (even the LTTT) is well suited to this purpose, because it is a post-hoc test: it specifies nothing about the construction process. One can therefore imagine fixing some of the structure of an entity by design, but developing the rest in situ as the result of learning or evolutionary processes, with feedback in terms of the level of success at the test. Such a methodology points more towards the constructivist approaches of Drescher (1991), Riegler (1992) and Vaario (1994) than towards more traditional `foundationalist' approaches in AI.
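
One way such a methodology might look, sketched here with entirely illustrative names rather than any procedure from the paper: part of the entity is fixed by design, and the remainder is adapted in situ, with the only feedback being the score obtained on a post-hoc behavioural test such as the TT.

    import random

    def adapt_in_situ(fixed_part, initial_free_part, test_score, mutate, steps=1000):
        # Simple hill climbing over the undesigned part of the entity;
        # `test_score` plays the role of the post-hoc test (e.g. the TT)
        # and is the only feedback the process receives.
        best = initial_free_part
        best_score = test_score(fixed_part, best)
        for _ in range(steps):
            candidate = mutate(best)                   # random variation
            score = test_score(fixed_part, candidate)  # post-hoc evaluation
            if score > best_score:                     # keep improvements only
                best, best_score = candidate, score
        return best, best_score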

