
Example 2 - Extending the `El Farol Bar' Model

Issues and Interpretations


Some very interesting problems arise when we try to interpret what occurred. Even though we can look inside the agents' `heads', we come across some of the same problems that philosophers, psychologists and social scientists encounter when trying to account for human communication. The web of cause and effect can be so complex as to impede a straightforward analysis - just as seems to occur in the human case.

One issue in particular is the question of the "meaning" of the agents' utterances to each other. Their utterances do have a meaning to each other - otherwise the agents would quickly select out action genes that included "saidBy" clauses. However, these meanings are not obvious. They are not completely determined by the agents' own model structures, but can involve a number of language games whose ultimate grounding is the practice of such communication in relation to actual decisions. It may be that a Wittgensteinian approach [21], describing the state of affairs in terms of `speech acts' [19], would make for a simpler and more appropriate model of the situation than a traditional AI belief and inference model.
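
To make the "selecting out" argument concrete, the following is a minimal sketch in Python - not the model used in the paper - of how genetic selection would treat a hypothetical "saidBy" term. The primitive names, weights and fitness measure are all illustrative assumptions; the only point carried over from the text is that a term referring to other agents' utterances survives selection only if those utterances carry some predictive value.

    import random

    # Illustrative primitives an evolved action gene might combine; the names
    # and the fitness measure are assumptions, not taken from the model itself.
    PRIMITIVES = ["wentLastWeek", "saidBy(agent-3, 'going')", "trendOverLastWeeks"]

    def make_gene():
        # An action gene as a weighted combination of primitive terms.
        return {p: random.uniform(-1.0, 1.0) for p in PRIMITIVES}

    def fitness(gene, utterances_informative):
        # Reward weight on the saidBy term only if utterances actually predict
        # something; otherwise treat reliance on them as costly noise.
        base = abs(gene["wentLastWeek"])
        said_by_weight = abs(gene["saidBy(agent-3, 'going')"])
        return base + (said_by_weight if utterances_informative
                       else -0.5 * said_by_weight)

    def evolve(utterances_informative, generations=50, pop_size=30):
        pop = [make_gene() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: fitness(g, utterances_informative),
                     reverse=True)
            survivors = pop[: pop_size // 2]
            # Survivors reproduce with small mutations on each weight.
            children = [{k: v + random.gauss(0.0, 0.1) for k, v in g.items()}
                        for g in survivors]
            pop = survivors + children
        return sum(abs(g["saidBy(agent-3, 'going')"]) for g in pop) / len(pop)

    if __name__ == "__main__":
        random.seed(0)
        print("mean |saidBy| weight when utterances are informative: %.2f"
              % evolve(True))
        print("mean |saidBy| weight when utterances are meaningless: %.2f"
              % evolve(False))

Under these assumptions the average weight on the saidBy term decays towards zero when the messages carry no information, which is the sense in which agents would select out such clauses; the fact that they do not is what suggests the utterances mean something to them.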

In this particular example it seems that the pragmatics of the situation are most important for determining meaning, followed by a semantics grounded in the effects of the agents' actions, leaving the syntax merely to distinguish between the two possible messages. This case seems to illustrate Peter Gärdenfors' observation about human language:

"Action is primary, pragmatics consists of the rules for linguistic actions, semantics is conventionalised pragmatics and syntax adds markers to help disambiguation (when context does not suffice)."

