
Modelling Bounded Rationality using Evolutionary Techniques - Bruce Edmonds

7 An Application - a model of utility learning

A simple application of the above approach is that of an economic agent that seeks to maximise its utility by dividing its spending of a fixed budget between two goods in each time period (what it does not spend on one good it spends on the other).

Unlike classical economic agents, this one does not know its utility function (even its form) but tries to induce it from past experience. It only gets information about the utility of a particular spending pattern by actually trying it. The agent wants to get the most utility from its spending. It will not speculate with alternative spending patterns merely to learn more about the utility curve.

To do this it attempts to model its utility with a function represented by a GP-type chromosome, using +, -, *, /, max, min, log, exp, average and "cutBetween" (a three-argument function which takes the second value if the first value is less than 1 and the third value otherwise) as branching nodes, and a selection of random constants and variables representing the amounts bought of the two products for the leaves. Thus the chromosome



[cutBetween [[amountBoughtOf 'product-2']
             [constant 2.3]
             [constant 0.5]]]

would predict that the utility gained would be

    2.3 if x < 1
    0.5 if x >= 1

where x is the amount spent on product 2.
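As an illustrative Python sketch (not SDML), the semantics of the cutBetween node and the prediction made by a chromosome with these leaves can be written as follows; the function names are hypothetical, and in the actual system the agent evaluates chromosome trees rather than hand-written functions:

```python
def cut_between(first, second, third):
    """The three-argument "cutBetween" node: returns the second value
    while the first is less than 1, and the third value otherwise."""
    return second if first < 1 else third

def predicted_utility(x):
    """Utility predicted for spending x on product 2 by a chromosome
    whose leaves are the amount bought of product 2 and the
    constants 2.3 and 0.5."""
    return cut_between(x, 2.3, 0.5)
```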

The fitness function is based on the RMS error of a model's predictions compared to the actual utility gained over past spending actions. This is modified by a slight parsimony pressure in favour of shallower chromosomes and a bias in favour of chromosomes which mention more distinct amount-bought variables (a rough measure of specificity, called "volume" in [13]). Only the fittest half of the population is retained each generation, so this is a kind of selective breeding algorithm and does not use fitness-proportionate random selection (thus it has some similarities to evolutionary programming [6]).
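A minimal Python sketch of this fitness-and-selection scheme follows. It is a reconstruction for illustration only: the penalty and bonus weights, and their linear form, are assumptions rather than values from the model.

```python
import math

def fitness(model, history, depth_weight=0.01, var_bonus=0.05):
    """Lower is fitter. `model` is (predict_fn, depth, n_distinct_vars);
    `history` is a list of past (spending, observed_utility) pairs.
    RMS prediction error, plus a slight parsimony penalty on depth,
    minus a slight bonus per distinct amount-bought variable mentioned."""
    predict, depth, n_vars = model
    rms = math.sqrt(sum((predict(x) - u) ** 2 for x, u in history)
                    / len(history))
    return rms + depth_weight * depth - var_bonus * n_vars

def select_fittest_half(models, history):
    """Truncation selection: keep the best half outright, with no
    fitness-proportionate roulette wheel."""
    ranked = sorted(models, key=lambda m: fitness(m, history))
    return ranked[:max(1, len(ranked) // 2)]
```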

Each time period the agent:

  1. carries over its previous functional models;

  2. produces some new ones by either combining the previous models with a new operator or by growing a small new random one;

  3. evaluates its current models using past data;

  4. selects the best models in terms of fitness for survival;

  5. finds the fittest such model;

  6. performs a limited binary search on this model to find a reasonable spending pattern in terms of increasing its utility;

  7. finally takes that action and observes its resulting utility.
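The cycle above can be sketched in Python as follows. This is a hypothetical skeleton, not the SDML implementation: the GP machinery of steps 1-2 is reduced to a random linear-model stand-in, and the limited binary search of step 6 is one plausible reading of that step.

```python
import random

def rms_error(predict, history):
    """RMS error of a model over past (spending, utility) pairs."""
    if not history:
        return 0.0
    return (sum((predict(x) - u) ** 2 for x, u in history)
            / len(history)) ** 0.5

def random_model():
    """Stand-in for growing a small random GP tree."""
    a, b = random.uniform(0, 3), random.uniform(-1, 1)
    return lambda x: a + b * x

def binary_search_spending(predict, budget, steps=8):
    """Limited binary search (step 6): repeatedly narrow the amount spent
    on product 2 toward the half-interval the model predicts is better."""
    lo, hi = 0.0, budget
    for _ in range(steps):
        mid = (lo + hi) / 2
        left, right = (lo + mid) / 2, (mid + hi) / 2
        if predict(left) >= predict(right):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def time_period(models, history, actual_utility, budget, n_new=10):
    """One pass through steps 1-7 for a single agent."""
    models = models + [random_model() for _ in range(n_new)]  # steps 1-2
    models.sort(key=lambda m: rms_error(m, history))          # step 3
    models = models[:max(1, len(models) // 2)]                # step 4
    best = models[0]                                          # step 5
    spend = binary_search_spending(best, budget)              # step 6
    history.append((spend, actual_utility(spend)))            # step 7
    return models, history
```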

This model was realised in SDML (Strictly Declarative Modelling Language), a declarative object-oriented language developed in-house specifically for this type of modelling, with features optimized for modelling such economic, business and organisational agents [9, 20].

Limiting the depth of the models created to 10, we performed 10 runs over 100 time periods for each type of agent. The three types were characterised by the memory they were given and the number of new models they created each time period: respectively 10, 20 and 30. We call these 10-memory, 20-memory and 30-memory agents; they represent agents with different bounds on their rationality. The results were then averaged over these 10 runs.

The first graph shows the RMS error of the agent's best model of the utility function compared with the actual function (figure 2). It shows a great improvement between the 10-memory and 20-memory agents, but only a marginal improvement between the 20- and 30-memory agents, suggesting the existence of a sort of minimum capacity for this task.

Figure 2: Error in Agent's Best Model for Different Memories, Averaged Over 10 Runs

When you look at the utilities achieved by the agents with different memories (figure 3), you see that a memory capacity above 10 does not significantly increase the average utility over time, but it does dramatically affect the reliability of the utility gained. If this were a firm with the utility being its profits, this reliability would be almost as important as its average profit level.

Figure 3: Utility Ratio Achieved for Agents with Different Memories, Averaged over 10 Runs

To give a flavour of the sort of models these agents develop, in run 1 of the 30-memory agent batch the agent achieved the following model by date 75:



[[add [[constant 1.117] [amountBoughtOf 'product-2']]]

[average [[amountBoughtOf 'product-2'] [constant 4.773]]]]]


[[amountBoughtOf 'product-2']


[[average [[amountBoughtOf 'product-2'] [constant 4.773]]]

[constant 1.044]

[add [[constant 1.117] [amountBoughtOf 'product-2']]]]]]]]].

The extent of the fit learnt by the agent is shown in figure 4.

Figure 4: Learnt vs. Actual Utility Functions, Run 1 of 30-memory Agents

The purpose of this simulation is not to be an efficient maximiser of utility, but to model economic agents in a more credible way. It will only be vindicated (or otherwise) when compared to real economic data. However, the model does show traits found in the real world. For example, one phenomenon that is observed is that agents sometimes get "locked" into inferior models for a considerable length of time (as in [2]) - the model implies an inferior course of action, but this course of action is such that the agent never receives disconfirmation of its model. Thus this remains its best model in terms of the limited data it has, so it repeats that action. For example, if some consumers find a satisfactory brand at an early stage in the development of their tastes, they may never try any others - their (limited) experience will never disconfirm their model of what would give them most satisfaction, even when they would like other brands better.
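This lock-in effect can be shown with a toy Python example; the brand names and utility numbers below are invented purely for illustration. Because the agent only acts on, and therefore only tests, what its model currently favours, a model that underrates an untried option is never disconfirmed:

```python
# Invented numbers: brand_b is actually better, but early experience
# left the agent's model underrating it.
true_utility = {"brand_a": 0.6, "brand_b": 0.9}
model = {"brand_a": 0.6, "brand_b": 0.4}

choices = []
for _ in range(20):
    choice = max(model, key=model.get)    # exploit the current model only
    choices.append(choice)
    model[choice] = true_utility[choice]  # feedback only for what is tried
# model["brand_b"] is never updated, so the agent stays locked in.
```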

Other related applications have included a model of intelligent price fixing in Cournot Duopoly tournaments [12], and a model of emerging markets where the agents are simultaneously building models of the economy they inhabit (and mutually create) [14].

Modelling Bounded Rationality using Evolutionary Techniques - Bruce Edmonds - 09 JUN 97