Complexity and Scientific Modelling
Let us imagine we are faced with an unacceptable level of error in the predictions of our best current model. What could be done?
Firstly, we could search for more accurate models by widening the search to include other, equally precise models. It is sensible to try the simpler models first, but if we exhaust all the models at our current level of complexity we will be forced to try more complex ones. In doing so we are effectively discounting the possibility that the unexplained elements of the data are inherently unpredictable, and treating noise as that which is merely unexplained at present because of its complexity. This view is encouraged by the many chaotic processes that can produce data indistinguishable from purely random data.
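A minimal sketch of this first strategy, under illustrative assumptions not in the original text (synthetic data, a polynomial model family, and an arbitrary error tolerance), might look as follows: simpler models are tried first, and the search widens to more complex ones only when the error remains unacceptable.

```python
# Sketch: widen the search to progressively more complex models until the
# predictive error is acceptable. Data, model family and tolerance are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # "observed" data

ACCEPTABLE_ERROR = 0.02          # assumed tolerance on mean squared error

for degree in range(1, 15):      # try the simplest models first
    coeffs = np.polyfit(x, y, degree)
    residual = y - np.polyval(coeffs, x)
    mse = np.mean(residual ** 2)
    if mse <= ACCEPTABLE_ERROR:
        print(f"degree {degree} model is accurate enough (MSE={mse:.4f})")
        break
else:
    print("no model at the complexities tried was accurate enough")
```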
Secondly, we could decide to look for models that are less specific. This might allow us to find a model that is not significantly more complex but has a lower level of predictive error. Here we are essentially filtering out some of the data, attributing it to some irrelevant source. This might correspond to a situation where we know that an essentially random source of noise has been imposed upon the data. This is the traditional approach, used in a wide range of fields from electronics to economics.
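As a hedged illustration of this second strategy (the synthetic data, the moving-average filter and its window width are all assumptions introduced here, not part of the original text), one might filter out the component attributed to an irrelevant noise source before fitting a less specific model:

```python
# Sketch: attribute part of the data to an irrelevant random source and
# filter it out before modelling. Data and filter width are illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2.0 * np.pi * t)
data = signal + 0.3 * rng.standard_normal(t.size)   # signal plus assumed random noise

window = 11                                          # assumed filter width
kernel = np.ones(window) / window
smoothed = np.convolve(data, kernel, mode="same")    # simple moving-average filter

# A deliberately unspecific model (here a low-order polynomial stands in for
# whatever "less specific" model one chooses) is then fitted to the filtered series.
coeffs = np.polyfit(t, smoothed, 5)
mse = np.mean((smoothed - np.polyval(coeffs, t)) ** 2)
print(f"residual MSE after filtering: {mse:.4f}")
```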
Thirdly, and most radically, we could seek to change our language of modelling to one that we felt was more appropriate to the data. Here noise is the literally indescribable: whatever lies outside what the modelling language can express. For example, a neural network (NN) is sometimes set up so that extreme fluctuations in the data are not exactly capturable by the range of functions the NN can output. In this way the NN is forced to approximate the training data, and overfitting is avoided.
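A minimal sketch of this third strategy is given below, assuming a tiny one-hidden-layer network whose output passes through tanh; the network size, learning rate and synthetic data (with spikes of size 3) are illustrative assumptions. Because the output layer is bounded to (-1, 1), the extreme fluctuations are literally outside the range of functions the network can express, so it can only approximate them and cannot overfit them.

```python
# Sketch: a modelling language (bounded-output network) in which the extreme
# fluctuations are not expressible at all. All sizes and data are assumptions.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y = np.sin(np.pi * x)
y[::17] = 3.0                      # extreme fluctuations the model cannot express

hidden = 8
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):              # plain gradient descent on mean squared error
    h = np.tanh(x @ W1 + b1)       # hidden layer
    out = np.tanh(h @ W2 + b2)     # bounded output layer: range is (-1, 1)
    err = out - y
    d_out = err * (1.0 - out ** 2)             # backprop through output tanh
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)      # backprop through hidden tanh
    W2 -= lr * h.T @ d_out / len(x)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * x.T @ d_h / len(x)
    b1 -= lr * d_h.mean(axis=0)

print("max prediction:", float(out.max()), "(the spikes at 3.0 are unreachable)")
```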
Thus randomness may be a sufficient characterisation of noise, but it is not a necessary one.