
Modelling Socially Intelligent Agents - Bruce Edmonds

Modelling Agents


At the Centre for Policy Modelling (CPM) we are interested in modelling real agents, which can be people or other institutional units (such as firms or departments). We do this by representing them as intelligent software agents. This perspective means our concerns differ from those of people designing agents or robots to meet particular goals - what might be called an engineering perspective. In particular, we seek veracity over efficiency. We do not claim that the agent architectures and techniques described here result in agents, or groups of agents, that are particularly good at any specific task. We do claim that these techniques result in communities of agents whose behaviours characterise boundedly rational socially intelligent agents, i.e. that they are a step towards modelling some key aspects of humans in social and organisational settings.

One corollary of this is that we do not model using reactive agents, since a principal concern of ours is the nature and development of the agents' internal models as they interact with other agents and their environment (see also the reasons in the section entitled Social Intelligence). The purpose of modelling these agents is to discover their emergent behaviour. If we closely specified the agents' behaviour (for example, by compiling it down into a reactive architecture) we would be needlessly delimiting the behaviour that might result, and would thus learn less about the possibilities inherent in a multi-agent situation.

This leaves open the question of the validation and verification of these models - if you constrain them as little as possible, how do you know if (and how) they correspond to reality in any practical way? The answer we have developed is twofold. Firstly, we validate the mechanics of the model by clearly separating the implementation details from any implicit theory of cognition [5]. We do this by specifying the implementation in a language with clearly known semantics (in our case a declarative language), and by basing the agents' cognition on a known process or cognitive theory. Secondly, we verify the output of the model against real-world qualities or data. This issue is discussed in greater detail in [16].

We take the strategy of explicitly representing the agents' internal models in a specified language - usually of a quasi-logical or functional variety. This explicit representation makes it possible to limit, examine and analyse the agents' models as they develop. We are not claiming that humans use such a representation (this is a hotly debated issue) but merely that having such inspectable and comprehensible models lets us easily find out the state of an agent at any time. We find that by providing agents with a suitably expressive internal modelling language and allowing them to develop their own models, we do not introduce any obviously inappropriate behaviour into our agents. The final test of the appropriateness of an agent's behaviour is domain dependent and only verifiable with respect to known properties of what is being modelled.
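
As an illustration only, the following Python sketch shows one way such an explicit, inspectable representation might be held. The Node class, the environment mapping and the example operator are assumptions made for exposition, not the CPM implementation (which uses a declarative language):

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Node:
        """One node of an agent's internal model, held as an expression tree."""
        op: str                                    # operator, variable or constant
        children: List["Node"] = field(default_factory=list)

        def evaluate(self, env: Dict[str, float],
                     ops: Dict[str, Callable[..., float]]) -> float:
            """Evaluate the model against an environment of observed values."""
            if not self.children:
                # Terminal: a variable looked up in the environment, else a constant.
                return env[self.op] if self.op in env else float(self.op)
            return ops[self.op](*(c.evaluate(env, ops) for c in self.children))

        def __str__(self) -> str:
            """Readable form, so the agent's state can be inspected at any time."""
            if not self.children:
                return self.op
            return "(" + self.op + " " + " ".join(map(str, self.children)) + ")"

Because each model is an ordinary data structure with a readable printed form, it can be limited, examined and analysed at any point in a run, for example:

    import operator

    model = Node("add", [Node("price"), Node("0.5")])
    print(model)                                              # (add price 0.5)
    print(model.evaluate({"price": 2.0}, {"add": operator.add}))   # 2.5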

The agents we model have distinct limitations of resources - they are boundedly rational in several respects. They have limited memory, a limit on their search for improved models, and a limit on their ability to make inferences from their models. Following what is known about real agents, we ensure that their search for new models is incremental rather than global in nature. The limitations on the current memory cache, especially on the agent's stock of candidate models, encode a sharp path-dependency. The nature and framework of this is described in [7].
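
The sketch below illustrates how these constraints might be combined in a single learning step, reusing the Node class sketched above. The function name, the memory_limit and search_budget parameters and the vary and fitness arguments are illustrative assumptions, not the framework of [7]:

    import random
    from typing import Callable, List

    def update_candidates(candidates: List[Node],
                          fitness: Callable[[Node], float],
                          vary: Callable[[Node], Node],
                          memory_limit: int = 10,
                          search_budget: int = 5) -> List[Node]:
        """One incremental learning step for a boundedly rational agent."""
        pool = list(candidates)
        for _ in range(search_budget):            # bounded search effort
            parent = random.choice(pool)          # new models are small variations
            pool.append(vary(parent))             # of old ones: path-dependency
        pool.sort(key=fitness, reverse=True)      # rank by past success
        return pool[:memory_limit]                # limited memory: forget the rest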

One particular technique we use is an adaptation of the genetic programming (GP) paradigm [10]. Here the internal models belonging to the agents are held as a set of tree-based expressions. Selection among these is based upon their past predictive success or some endorsement-based mechanism. However, we do not always use the crossover operator, as it implements a fairly global search process, which is unrealistic for our purposes. Instead we prefer other mixes of operators with a bias away from heavy use of crossover, including operators that are not used in standard GP at all, such as generalisation and specialisation. For a more detailed discussion of this see [7].
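
As an assumed rendering only, the following sketch shows what generalisation and specialisation operators over such tree-based expressions might look like, together with an operator mix biased away from crossover. The helper names and the particular weights are illustrative, not taken from [7] or [10]:

    import copy
    import random
    from typing import Callable, List

    def all_nodes(root: Node) -> List[Node]:
        """Collect every node of an expression tree (Node as sketched above)."""
        out, stack = [], [root]
        while stack:
            node = stack.pop()
            out.append(node)
            stack.extend(node.children)
        return out

    def generalise(model: Node, variables: List[str]) -> Node:
        """Abstract a model by replacing a random subtree with a variable."""
        new = copy.deepcopy(model)
        target = random.choice(all_nodes(new))
        target.op, target.children = random.choice(variables), []
        return new

    def specialise(model: Node, grow: Callable[[], Node]) -> Node:
        """Refine a model by replacing a random terminal with a new subtree."""
        new = copy.deepcopy(model)
        target = random.choice([n for n in all_nodes(new) if not n.children])
        sub = grow()
        target.op, target.children = sub.op, sub.children
        return new

    # An illustrative operator mix: crossover is used sparingly because it
    # implements too global a search for plausible boundedly rational agents.
    OPERATOR_WEIGHTS = {"mutate": 0.4, "generalise": 0.25,
                        "specialise": 0.25, "crossover": 0.1}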

