Complexity and Scientific Modelling

7 Complexity vs. Information


The above framework distinguishes the complexity of a model's form from its specificity. The specificity of a model has been characterised in many ways, including the information the model provides, the entropy of the system it describes, and the model's refutability.

Such measures of specificity have often been linked to a model's simplicity, where by "simplicity" we mean that property of a model which makes it more likely to be true than another, given equal evidential support. The property is called "simplicity" because it traces back to the principle of parsimony attributed to William of Occam. Thus Popper characterises simplicity as a model's refutability [11], while Sober associates it with the minimum extra information needed to answer a given question [14]. This tradition has been continued by several authors who have used various measures of information to capture it, including Shannon information and algorithmic information*1. It is clear that such simplicity is not necessarily the opposite of complexity as described above (see section 8).
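As an illustrative sketch (not part of the original argument), one such information measure is the Shannon entropy of a model's predictive distribution: a model that concentrates probability on few outcomes has low entropy, says more, and is easier to refute. The distributions below are invented for illustration.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A specific model commits nearly all probability to one outcome...
specific_model = [0.97, 0.01, 0.01, 0.01]
# ...while a vague model spreads probability evenly over four outcomes.
vague_model = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(specific_model))  # low entropy: high specificity
print(shannon_entropy(vague_model))     # 2.0 bits: maximally unspecific
```

On this reading, lower entropy corresponds to higher specificity, which is one way of cashing out the "refutability" and "information" characterisations mentioned above.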

That complexity is not rigidly linked to the specificity of a model can be shown by considering any modelling language which has terms explicitly denoting nonspecificity (frequently called "error terms"). Clearly, the introduction of such terms can make an expression simultaneously more complex and less specific.
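The point can be made concrete with a toy sketch (the model, data, and names here are illustrative assumptions, not from the original): adding an error term lengthens a model's expression, yet enlarges the set of observations consistent with it.

```python
def consistent(y_pred, y_obs, tolerance=0.0):
    """A datum is consistent with a model if it falls within
    the model's error bound around the prediction."""
    return abs(y_obs - y_pred) <= tolerance

observations = [(1, 2.0), (2, 4.5), (3, 6.0)]

# "y = 2x": a short expression, but refuted by the point (2, 4.5).
strict = [consistent(2 * x, y) for x, y in observations]
# "y = 2x + e, |e| <= 1": a longer (more complex) expression,
# yet consistent with every observation (less specific).
loose = [consistent(2 * x, y, tolerance=1.0) for x, y in observations]

print(strict)  # [True, False, True]
print(loose)   # [True, True, True]
```

The second model is syntactically more complex than the first while ruling out strictly fewer data, which is exactly the dissociation the paragraph above describes.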

This is not to say that there might not be good reasons to prefer a more specific model, only that specificity is not directly linked to either a model's complexity or its error rate. Rissanen makes a case for a particular trade-off between the specificity of a model and its complexity - namely that one should seek the minimal description that includes both the model and the deviations of the data from it.
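Rissanen's trade-off can be sketched as a two-part code: the total description length is the cost of stating the model's parameters plus the cost of stating each datum's deviation from the model. The coding scheme below (the per-parameter cost, the crude residual code, and all names) is an illustrative assumption, not Rissanen's actual formulation.

```python
import math

def description_length(params, residuals, bits_per_number=8):
    """Two-part code length: bits to state the model's parameters
    plus bits to state each deviation of the data from the model."""
    model_cost = len(params) * bits_per_number  # assumed fixed precision
    # Cost of a residual: roughly the bits needed for its magnitude
    # (a crude stand-in for a proper universal code; a zero residual
    # still costs one bit to announce).
    data_cost = sum(
        1 + max(0, math.ceil(math.log2(abs(r) + 1))) for r in residuals
    )
    return model_cost + data_cost

data = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x for x = 1..5

# Model A: no structure -- every point is a "deviation" from nothing.
residuals_a = data
# Model B: "y = 2x" -- one parameter to state, but tiny deviations.
residuals_b = [y - 2 * x for x, y in zip(range(1, 6), data)]

print(description_length([], residuals_a))    # all cost in the data part
print(description_length([2.0], residuals_b))  # shorter total description
```

Under this (toy) scheme the structured model wins: paying a few bits to state the parameter buys a much cheaper encoding of the deviations, which is the balance between model complexity and fit that the minimal-description criterion formalises.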


Complexity and Scientific Modelling - 04 APR 97