1 Introduction
2 The inadequacy of the design stance for implementing a deeper sociality
3 A model of self construction
4 General consequences of this model of self construction
5 Towards implementing self-constructing agents
6 Consequences for agent production and use
7 Conclusion
References
The basic idea is to put the human into the developmental loop of the agent so that the agent co-develops an identity that is intimately bound up with ours. This will give it a sound basis for its dealings with us, enabling its perspective to be in harmony with our own in a way that would be impossible if one attempted to design such an empathetic sociality into it directly. The development of such an agent could be achieved by mimicking early human development in important respects – i.e. by socially situating it within a human culture.
The implementation details that follow derive from a speculative theory of the development of the human self, which will be described below. This theory may well be wrong, but it seems clear that something of this ilk does occur in the development of young humans (Werner 1999; Edmonds & Dautenhahn 1998). So the following can be seen simply as one method to enable agents to develop the required abilities – other methods and processes may have the same effect.
Thus, rather than directly specifying the requisite social facilities and mechanisms, I take the approach of specifying the social "hooks" needed and then attempting to evolve the social skills within the target society. In this way key aspects of the agent develop already embedded in the society it is intended to deal with, so that the agent can truly partake of the culture around it. This directly mirrors the way our own intelligence is thought to have evolved (Kummer et al. 1997).
In particular, I think that this process of embedding has to occur at an early stage of agent development for it to be most effective. In this paper I suggest that it needs to occur at an extremely basic stage: during the construction of the self. In this way the agent's own self will have been co-developed with its model of others, allowing a deep empathy between the agent and its society (in this case, us).
This model is as follows (a minimal code sketch follows the list):
1. The agent has a basic process that makes its decisions and determines its actions.
2. The agent does not have direct access to the workings of this basic process, but only to its perceptions and actions, past and present.
3. This basic process seeks to model its environment and to control it via its actions, including the other agents it can interact with. In particular, it attempts to model the consequences of its actions (including speech acts).
4. This process naturally picks up and tries out selections of the communications it receives from other agents, and uses these (along with observed actions) as a basis for modelling those agents' decisions.
5. As a result it becomes adept at using communicative acts to fulfil its own needs via others' actions, using its models of their decision-making processes.
6. Using the language it produces itself, it learns to model itself (i.e. to predict the decisions it will make) by applying its models of other agents to itself, comparing its own acts (including communicative acts) with theirs. The richness of the language allows a relatively fine-grained transference of others' decision-making processes onto itself.
7. It refines its models of other agents using its self-model, and its self-model using its observations of others' actions. Thus its models of others' cognition and of its own cognition co-evolve.
8. Since its model of its own decisions is made through language, it uses language to implement a sort of high-level decision-making process – this appears as a language of thought.
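To make the flow of this model concrete, here is a minimal sketch of how steps 1–8 might fit together, written in Python. It is an illustration only: the class and method names are my own, a "model" of an agent is reduced to a lookup table from (percept, utterance) pairs to observed acts, and a random choice stands in for the basic process.

```python
import random
from collections import defaultdict

class SelfConstructingAgent:
    def __init__(self, actions):
        self.actions = actions
        self.other_models = defaultdict(dict)  # step 4: agent -> {(percept, utterance): action}
        self.self_model = {}                   # step 6: built by reusing the other-model form
        self.history = []                      # step 2: only percepts and acts are accessible

    def basic_process(self, percept):
        # Step 1: an opaque basic process produces the decisions; a random
        # choice stands in here for whatever mechanism actually drives it.
        return random.choice(self.actions)

    def observe(self, agent_id, percept, utterance, action):
        # Steps 3-4: model other agents' decisions from their communications
        # and their observed actions.
        self.other_models[agent_id][(percept, utterance)] = action

    def predict_other(self, agent_id, percept, utterance):
        # Step 5: anticipate another agent's decision, e.g. before using a
        # speech act to get that agent to act on one's behalf.
        return self.other_models[agent_id].get((percept, utterance))

    def act(self, percept, utterance):
        action = self.basic_process(percept)
        self.history.append((percept, utterance, action))
        # Steps 6-7: transfer the other-model structure onto oneself by
        # treating one's own (percept, utterance, action) record exactly
        # as if it had been observed in another agent.
        self.self_model[(percept, utterance)] = action
        return action

    def predict_self(self, percept, utterance):
        # Step 8: predicting one's own decisions through the same
        # language-indexed structure acts as a high-level decision process.
        return self.self_model.get((percept, utterance))
```

The point of the sketch is structural: the same language-indexed form of model that is learnt for others (steps 4–5) is reused for the self (steps 6–8), so the self-model is literally constructed out of models of others.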
The requirements for this development include the following (a sketch of the corresponding internal hooks is given after the list):
1. A suitable social environment (including humans)
2. A sufficiently rich communicative ability – i.e. a communicative language that allows the fine-grained modelling of others' states leading to action in that language
3. A general anticipatory modelling capability
4. An ability to distinguish experiences of different types, including the observation of the actions of others, one's own actions, and other sensations
5. A need to predict others' decisions
6. A need to predict one's own decisions
7. An ability to reuse model structures learnt for one purpose for another
Some of these are requirements upon the internal architecture of an agent, and some upon the society it develops in. I will briefly outline a possibility for each.
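Requirements 2–7 can be read as an interface that the agent's internal architecture must expose (requirement 1 is a property of the environment rather than of the agent). The following sketch makes that reading concrete; the method names are mine, invented for illustration:

```python
from abc import ABC, abstractmethod

class SocialHooks(ABC):
    """The internal 'hooks' implied by requirements 2-7 (illustrative names)."""

    @abstractmethod
    def communicate(self, recipient, utterance):
        """Req. 2: emit a communicative act rich enough to convey fine-grained state."""

    @abstractmethod
    def anticipate(self, situation, action):
        """Req. 3: predict the consequences of taking an action in a situation."""

    @abstractmethod
    def classify_experience(self, event):
        """Req. 4: tag an event as another's act, one's own act, or a sensation."""

    @abstractmethod
    def predict_decision(self, agent_id, situation):
        """Reqs. 5-6: predict an agent's decision, where agent_id may denote the self."""

    @abstractmethod
    def transfer_model(self, model, new_purpose):
        """Req. 7: reuse a model structure learnt for one purpose for another."""
```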
The agent will need to develop two sets of models.
(I) A set of models that anticipate the results of action, including communicative actions (this roughly corresponds to a model of the world). Each model would be composed of several parts:
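The cited pilot systems (Drescher, 1991; Stolzmann et al., 2000) suggest what these parts might be: their anticipatory models pair a condition and an action with an anticipated effect, plus a measure of how reliable that anticipation has proved. A minimal sketch under that assumption (the field names are mine):

```python
from dataclasses import dataclass

@dataclass
class AnticipatoryModel:
    condition: dict           # aspects of the situation that must hold
    action: str               # the act taken, possibly a communicative one
    anticipated_effect: dict  # the predicted change in the situation
    reliability: float = 0.5  # how often the anticipation has been borne out

    def matches(self, situation: dict) -> bool:
        # The model applies only when its condition holds in the situation.
        return all(situation.get(k) == v for k, v in self.condition.items())

    def update(self, observed_effect: dict, rate: float = 0.1) -> None:
        # Strengthen or weaken the model according to whether the observed
        # effect matched the anticipated one.
        hit = 1.0 if observed_effect == self.anticipated_effect else 0.0
        self.reliability += rate * (hit - self.reliability)
```

Because actions here include communicative acts, the same form covers both models of the physical world and models of how others respond to what is said.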
The social situation of the agent needs to combine complex cooperative and competitive pressures. The cooperation is necessary if communication is to develop at all, and the competitive element is necessary to make predicting others' actions worthwhile (Kummer et al., 1997). The complexity of these cooperative and competitive pressures encourages the prediction of one's own decisions. A suitable environment is one where cooperation is necessary in order to gain substantial reward, but where inter-group competition occurs, as well as competition over the division of the rewards gained by a cooperative group.
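As a toy illustration of this combination of pressures, consider the following pay-off rules; the particular numbers and functional forms are arbitrary assumptions:

```python
def group_reward(group_effort, rival_effort):
    # Inter-group competition: a cooperating group only gains substantial
    # reward if it out-performs its rival, and the reward scales with the
    # group's joint effort, so cooperation pays.
    return 10.0 * group_effort if group_effort > rival_effort else 0.0

def member_share(reward, others_claims, my_claim):
    # Intra-group competition: the gained reward is divided in proportion
    # to the claims that members press.
    total = sum(others_claims) + my_claim
    return reward * my_claim / total if total > 0 else 0.0
```

Under such rules an agent benefits from cooperating (to raise the group's effort), from predicting both rivals and partners (to win the reward and to bargain over its division), and, since its own claims interact with everyone else's, from predicting its own decisions as well.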
Many of the elements of this model have already been implemented in pilot systems (e.g. Drescher, 1991; Edmonds, 1999; Stolzmann et al., 2000), but there is still much to be done.
To achieve this goal we will have to at least partially abandon the design stance, move towards an enabling stance, and accept the necessity of considerable enculturation of our agents within our society, much as we do with our children.
Aydede, M. and Güzeldere, G. (forthcoming). Consciousness, Intentionality, and Intelligence: Some Foundational Issues for Artificial Intelligence. Journal of Experimental & Theoretical Artificial Intelligence. <http://humanities.uchicago.edu/faculty/aydede/JETAI.MA&GG.pdf>
Barlow, H. (1992). The Social Role of Consciousness – Commentary on Bridgeman on Consciousness. Psycoloquy, 3(19), Consciousness (4). <http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?article=3.19>
Bridgeman, B. (1992a). On the Evolution of Consciousness and Language. Psycoloquy, 3(15), Consciousness (1). <http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?3.15>
Bridgeman, B. (1992b). The Social Bootstrapping of Human Consciousness – Reply to Barlow on Bridgeman on Consciousness. Psycoloquy, 3(20), Consciousness (5). <http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?article=3.20>
Burns, T. R. and Engdahl, E. (1998). The Social Construction of Consciousness, Part 2: Individual Selves, Self-Awareness, and Reflectivity. Journal of Consciousness Studies, 5(2): 166-184. <http://www.imprint.co.uk/jcs52.html> (abstract)
Dennett, D. C. (1989). The Origin of Selves. Cogito, 3: 163-173. <http://ase.tufts.edu/cogstud/papers/originss.htm>
Drescher, G. L. (1991). Made-up Minds, a constructivist approach to artificial intelligence. Cambridge, MA: MIT Press.
Edmonds, B. (1998). Social Embeddedness and Agent Development. UKMAS'98, Manchester, December 1998. <http://cfpm.org/cpmrep46.html>.
Edmonds, B. (1999). Capturing Social Embeddedness: a Constructivist Approach. Adaptive Behavior, 7(3/4), in press. <http://cfpm.org/cpmrep.html>
Edmonds, B. (2000). Towards Implementing Free-Will. AISB2000 Symposium on How to Design a Functioning Mind, Birmingham, April 2000. <http://cfpm.org/cpmrep57.html>
Edmonds, B. (2001). The Constructability of Artificial Intelligence, Journal of Logic, Language and Information, in press. <http://cfpm.org/cpmrep53.html>
Edmonds, B. and Dautenhahn, K. (1998). The Contribution of Society to the Construction of Individual Intelligence. Socially Situated Intelligence: a workshop held at SAB'98, August 1998, Zürich. <http://cfpm.org/cpmrep42.html>
Gopnik, A. (1993). How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16: 1-14.
Hoffmann, J. (1993). Vorhersage und Erkenntnis [Anticipation and Cognition]. Göttingen, Germany: Hogrefe.
Koza, J. R. (1992). Genetic Programming: the programming of computers by means of natural selection. Cambridge, MA: MIT press.
Kummer, H., Daston, L., Gigerenzer, G. and Silk, J. (1997). The social intelligence hypothesis. In Weingart et al. (eds.), Human by Nature: between biology and the social sciences. Hillsdale, NJ: Lawrence Erlbaum Associates, 157-179.
Perlis, D. (1997). Consciousness as Self-Function. Journal of Consciousness Studies, 4: 509-525. <http://www.imprint.co.uk/jcs45-6.html> (abstract)
Stolzmann, W., Butz, M. V., Hoffmann, J. and Goldberg, D. E. (2000). First Cognitive Capabilities in the Anticipatory Classifier System. IlliGAL Report No. 2000008, Illinois Genetic Algorithms Laboratory, University of Illinois, Urbana, IL. <ftp://ftp-illigal.ge.uiuc.edu/pub/papers/IlliGALs/2000007.ps.Z>
Turkle, S. (1984). The Second Self, computers and the human spirit. London: Granada.
Werner, E. (1999). The Ontogeny of the Social Self: Towards a Formal Computational Theory. In Dautenhahn, K. (ed.), Human Cognition and Social Agent Technology. John Benjamins Publishing Company, 263-300.