5.3 Example 2 - Communication, Learning and the El Farol Bar Problem
Brian Arthur modelled this by randomly giving each agent a fixed menu of potentially suitable models for predicting the number who will go, given past data (e.g. the same as two weeks ago, the average of the last 3 weeks, or 90 minus the number who went last time). Each week every agent evaluates these models against the past data, chooses the one that was the best predictor on this data, and uses it to predict the number who will go this time. The agent goes if this prediction is less than 60 and stays away if it is more than 60.
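This decision rule can be sketched in a few lines of code. The sketch below is only illustrative and is not Arthur's original implementation: the particular predictors, the menu size, the memory length and the scoring (here each rule is judged only on how well it predicted last week's attendance) are all simplifying assumptions.

import random

CAPACITY = 60      # the bar is enjoyable if fewer than 60 (of 100) attend
N_AGENTS = 100
MEMORY = 12        # weeks of attendance history available at the start

# A few illustrative predictors; Arthur's agents are each given a fixed,
# randomly chosen menu drawn from a larger pool of such rules.
PREDICTORS = [
    lambda h: h[-1],                # same as last week
    lambda h: h[-2],                # same as two weeks ago
    lambda h: sum(h[-3:]) / 3,      # average of the last 3 weeks
    lambda h: 100 - h[-1],          # mirror image of last week
    lambda h: sum(h[-8:]) / 8,      # longer-run average
]

def best_predictor(menu, history):
    """Pick the rule that would have predicted last week's attendance best."""
    return min(menu, key=lambda p: abs(p(history[:-1]) - history[-1]))

def run(weeks=500):
    history = [random.randint(0, 100) for _ in range(MEMORY)]
    menus = [random.sample(PREDICTORS, 3) for _ in range(N_AGENTS)]
    for _ in range(weeks):
        attendance = 0
        for menu in menus:
            rule = best_predictor(menu, history)
            if rule(history) < CAPACITY:   # go only if the bar is predicted uncrowded
                attendance += 1
        history.append(attendance)
    return history[MEMORY:]

if __name__ == "__main__":
    print(run()[-20:])   # inspect the last 20 weeks of attendance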
As a result the number who go to the bar oscillates in an apparently random manner around the critical 60% mark (like figure 13), but this is not due to any single pattern of behaviour - different groups of agents swap their preferred model of the process all the time. Although each agent is, at any one time, applying a different model chosen from its own menu of models, with varying degrees of success, when viewed globally the agents appear largely indistinguishable, in that they all regularly swap their preferred model and join with different sets of other agents in going or not going. None takes up any particular strategy for any length of time or adopts any identifiably characteristic role.
Viewed globally, the agents in this model appear to be acting stochastically and homogeneously, despite the fact that the whole system is completely deterministic*1 and each agent is initialised with a different repertoire of models. Zambrano [31] has interpreted this by saying that the agents in this simulation are acting, en masse, as if they were using the mixed strategy predicted by game theory as the Nash equilibrium (namely, choose a random number between 0 and 100 and go if it is 60 or below). That this is not the case can be established by looking at the variation in the agents' behaviour as the simulation size increases - if they were acting collectively as if they were using such a mixed strategy, the standard deviation of their attendance would decrease markedly as a proportion of the total size as that size increased (the SD would be √(0.6 × 0.4 × n) ≈ 0.49√n, i.e. only about 0.49/√n as a proportion of the population, where n is the simulation size). This is not the case, as the results gained from runs of Arthur's model show. I re-ran the model 24 times over 500 dates for each of the following population sizes: 10, 18, 31, 56, 100, 180, 310, 560, 1000, 1800, 3100, 5600, 10000, 18000, 31000 and 100000 (with different initial histories and model selections for the agents each time). In figure 10 we clearly see that the spread of attendance levels is retained at large population sizes, suggesting that some sort of globally coupled chaos is occurring and not (predominantly) a stochastic process. See [16] for the identification and exploration of such systems.
Figure 10: Scaled Spread of Attendances against Population Size*2
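To make the scaling argument concrete, the following sketch (my own illustration, not part of Arthur's or Zambrano's analysis) tabulates the benchmark that the Nash mixed strategy would imply - attendance distributed as Binomial(n, 0.6), whose SD as a fraction of n falls off as √(0.6 × 0.4 / n) ≈ 0.49/√n - for the population sizes listed above, together with a Monte-Carlo check for the smaller sizes.

import math
import random

P_GO = 0.6   # the Nash mixed strategy: each agent goes with probability 0.6
SIZES = [10, 18, 31, 56, 100, 180, 310, 560, 1000, 1800,
         3100, 5600, 10000, 18000, 31000, 100000]

def mixed_strategy_sd_fraction(n, p=P_GO):
    """SD of Binomial(n, p) attendance, expressed as a fraction of the population n."""
    return math.sqrt(p * (1 - p) / n)

def simulated_sd_fraction(n, weeks=500, p=P_GO):
    """Monte-Carlo check of the same quantity (slow for very large n)."""
    counts = [sum(random.random() < p for _ in range(n)) for _ in range(weeks)]
    mean = sum(counts) / weeks
    var = sum((c - mean) ** 2 for c in counts) / weeks
    return math.sqrt(var) / n

for n in SIZES:
    sim = simulated_sd_fraction(n) if n <= 1000 else float('nan')
    print(f"n={n:6d}  theory={mixed_strategy_sd_fraction(n):.4f}  sim={sim:.4f}")

The tabulated fraction shrinks steadily with n, in contrast to the retained spread of attendance in figure 10.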
It is also noticeable that in this model, although the agent population is not significantly biased in its predictions when averaged, the individual agents' predictions were not converging to the `truth', because the spread of their prediction errors did not reduce (as a proportion of the population size) with larger populations (figure 11). Thus, in an important respect, these agents were not acting in aggregate as if they each had the essentially correct model of their economy. In this, Arthur's model goes beyond mainline economic models; we shall see that if we further extend it with evolutionary learning and communication, other such traits will emerge, such as heterogeneity.
Figure 11: The Spread of the Agents' Predictive Errors vs. Theory
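The theoretical benchmark here is presumably the standard statistical one: if each agent's prediction error were independent noise of fixed standard deviation σ around the true attendance, the spread of the aggregate error would grow only as σ√n, i.e. it would shrink as σ/√n once scaled by the population size n; it is against this 1/√n trend that the non-shrinking observed spread stands out.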