
Social Embeddedness and Agent Development - Bruce Edmonds

1 Introduction - Engineering and Social Simulation Goals

Entities that are meaningfully described as artificial `agents' may be used by humans for many different purposes. Some of these purposes can be grouped by the abstract goals they are designed to fulfil. Two such goals are: to construct systems that meet certain performance criteria in a reliable way, which I will call the `engineering' perspective; and to act as models of social agents so as to increase our understanding of them, which I will call the `social simulation' perspective. Both of these goals are valid, but they entail some differences in method.

The `engineering' perspective generally means that the performance criteria come first (in any particular loop of the design cycle), and systems are then constructed to meet those criteria. This is usually done with an eye to a range of such systems, in which case more general methods are developed so that particular systems can be reliably produced as and when they are needed. Using agents as the essential components of such a process was first suggested by Shoham [17]; this follows a general trend in software engineering towards the use of increased abstraction [19]. A critical requirement of engineered systems is that the results be reliable, because people want to use them as component tools in the execution of their plans. Two of the chief ways in which such systems are made reliable are predictability and transparency. That is, the results of one's design decisions must be predictable, so that one can work out the consequences of one's actions before actually using the system; and the nature of one's design decisions must be fairly clear, in that there must be a way to anticipate the effect of a design decision without a full computational prediction of the results at every step.

In contrast to the above, from the `social simulation' perspective one may start with a specification of the agents' mechanisms and structure and then observe the resulting emergent behaviour and end-result. The researcher often uses the simulation to explore the possible behavioural outcomes. The interest in such simulations frequently lies precisely in the fact that the resultant behaviour is surprising - in other words, that it is not transparent. Frequently the results of such simulations are not even predictable. For this reason the results and methods of researchers working from these two perspectives are sometimes mirror-images of each other (in the sense of being opposite) - social simulators are often aiming to create exactly the type of situation that software engineers are trying to prevent.
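The kind of surprising emergence described above can be made concrete with a standard illustration from the social simulation literature (it is not an example from this paper): Schelling's segregation model, in which agents with only a mild preference about their local neighbourhood produce strong global clustering that is not evident from the individual rule. The sketch below is a minimal Python rendering of that idea; the grid size, vacancy rate, tolerance threshold and movement rule are all choices made for this illustration.

```python
import random

def run_schelling(size=20, vacancy=0.2, threshold=0.3, steps=50, seed=1):
    """Minimal Schelling-style model: agents of two types ('A'/'B') sit on a
    toroidal grid; an agent moves to a random vacant cell when fewer than
    `threshold` of its occupied neighbours share its type."""
    rng = random.Random(seed)
    cells = [None if rng.random() < vacancy else rng.choice("AB")
             for _ in range(size * size)]

    def neighbours(i):
        # the eight surrounding cells, wrapping at the grid edges
        r, c = divmod(i, size)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    yield ((r + dr) % size) * size + (c + dc) % size

    def like_fraction(i):
        occ = [cells[j] for j in neighbours(i) if cells[j] is not None]
        return sum(t == cells[i] for t in occ) / len(occ) if occ else 1.0

    def similarity():
        # mean like-neighbour fraction over occupied cells: a crude
        # global segregation index
        scores = [like_fraction(i) for i, t in enumerate(cells) if t is not None]
        return sum(scores) / len(scores)

    start = similarity()
    for _ in range(steps):
        movers = [i for i, t in enumerate(cells)
                  if t is not None and like_fraction(i) < threshold]
        vacancies = [i for i, t in enumerate(cells) if t is None]
        rng.shuffle(movers)
        for i in movers:
            if not vacancies:
                break
            j = vacancies.pop(rng.randrange(len(vacancies)))
            cells[i], cells[j] = None, cells[i]
            vacancies.append(i)
    return start, similarity()

before, after = run_schelling()
print(f"mean like-neighbour fraction: {before:.2f} -> {after:.2f}")
```

The point of the example is exactly the mirror-image noted above: the individual rule is simple and fully specified, yet the aggregate outcome (pronounced segregation from mild preferences) is discovered by running the simulation rather than deduced from the design.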

This paper aims to characterise a feature of such systems of interacting agents that distinguishes between the two approaches, namely social embeddedness. It is argued that this is an essential feature of societies as we know them and that it has practical consequences for the agents that inhabit them. For this reason it is suggested that such embeddedness will need to be a feature of many social simulation models. A consequence of social embeddedness is that it may not be practically possible for the component agents to be designed from the `engineering' perspective (as usually conceived of at present). Of course there are many areas of overlap between these two perspectives in terms of methodologies, tools and ideas, and in real life different perspectives may be taken at different times and for different aspects of a project, but I do not have room to consider these here. Neither am I attacking in any way the legitimate development of techniques and methodologies for engineering systems out of agents; I am merely pointing out that fully social agents might not sit well in such a project.

Social Embeddedness and Agent Development - Bruce Edmonds - 30 OCT 98
