
The Constructability of Artificial Intelligence - Bruce Edmonds

3. Artificiality and the Grounding of Knowledge


At the end of the previous section, I raised the possibility that a mixture of designed elements and learning in situ (using a source of randomness) might be employed to produce an entity which could pass the LTTT. One can imagine such a device undergoing training in the ways of humans by the immersion method, i.e. being left to learn and interact in the culture it has to master.

However, such a strategy brings into question the artificiality of the entity that results. Although we can say we constructed the entity before it was put into training, this may be far less true of the entity after training. To make this clearer, imagine that we constructed a human embryo `molecule-by-molecule' and implanted it into a woman's womb so that it developed, was born and grew up in a fashion normal to humans. The result of this process (the adult human) would certainly pass the LTTT, and we would call it intelligent, but to what extent would it be artificial? We know that a significant proportion of human intelligence can be attributed to the environment anyway (Neisser et al., 1996), and we also know that a human who is not exposed to language at a suitable age would almost certainly not pass the LTTT (Lane, 1976). The developmental process is therefore at the very least critical to the resulting manifestation of human intelligence. In this case, we could not say that we had succeeded in creating a purely artificial intelligence (and we would be on even weaker ground if we had not determined the construction of the original foetus but merely copied it from other cells).

The fact is that if we evolved an entity to fit a niche (including the niche defined by the TT or LTTT), then in a real sense that entity's intelligence would be grounded in that niche and would not be a result of our design. It is not only trivial aspects that would need to be acquired in situ. Many crucial aspects of the entity's intelligence would have to be derived from its situation if it was to have a chance of passing the LTTT. For example, the meaning of its symbols (Harnad, 1990), its social reality (Berger, 1966) and maybe even its `self' (Burns and Engdahl, 1998) would need to have resulted from such a social and environmental grounding. Given the flexibility of the process and its necessary ability to alter its own learning abilities, it is not clear that any of the original structure would survive. After all, we do not call our artifacts natural just because they were initiated by a natural process (i.e. our brains), so why should the reverse hold?

