
John Brignell

6,175 bytes added, 01:55, 30 November 2005
Brignell's comments on the extreme value fallacy can be found [http://www.numberwatch.co.uk/extreme_value_fallacy.htm here].
 
== Computer Modeling ==
 
Brignell states that computer modelling "is one of the most powerful tools available to science and engineering and, like all powerful tools, it brings dangers as well as benefits. Andrew Donald Booth said 'Every system is its own best analogue'. As a scientist he should, of course, have said in what sense he means best. The statement is true in terms of accuracy but not in terms of utility. If you want to determine the optimum shape for the members of a bridge structure, for example, you cannot build half a dozen bridges and test them to destruction, but you can try large numbers of variations in a computer model. Computers allow us to optimise designs in ways that were unavailable in times past. Nevertheless, the very flexibility of a computer program, the ease with which a glib algorithm can be implemented with a few lines of code and the difficulty of fully understanding its implications can pave the path to Cloud Cuckoo Land."
 
Brignell identifies the main hazards of computer models as follows:
*Assumptions: "At almost every stage in the development of a model it is necessary to make assumptions, perhaps hundreds of them. These might or might not be considered reasonable by others in the field, but they rapidly become hidden. Some of the larger models of recent times deal with the interactions of variables whose very nature is virtually unknown to science."
*Auditability: "In olden times, if a scientist published a theory, all the stages of reasoning that led to it could be critically examined by other scientists. With a computer model, it is possible, within a few days of development, for it to become so complex that it is a virtual impossibility for an outsider to understand it fully. Indeed, where it is the result of a team effort, it becomes unlikely that any individual understands it."
*Omissions: "Often vital elements can be left out of a model and the effect of the omissions is only realised if and when it is tested against reality. A notorious example is the Millennium Bridge in London. It was only after it was built and people started to walk on it that the engineers realised that they had created a resonant structure. This could have been modelled dynamically if they had thought about it. Some models that produce profound political and economic consequences have never faced such a challenge."
*Subconscious: "The human subconscious is a powerful force. Even in relatively simple physical measurements it has been shown that the results can be affected by the desires and expectations of the experimenter. In a large computer model this effect can be multiplied a thousandfold. Naturally, we discount the possibility of deliberate fraud."
*Sophistication: "This word, which literally means falsification or adulteration, has come to mean advanced and efficient. In large computer models, however, the literal meaning is often more applicable. The structure simply becomes too large and complex for the inputs that support it."
*Testability: "When we were pioneering the applications of computer modelling about forty years ago, we soon came to the conclusion that a model is useless unless it can be tested against reality. If a model gives a reasonably accurate prediction on a simple system then we have reasonable, but not irrefutable, grounds for believing it to be accurate in other circumstances. Unfortunately, this is one of the truisms that have been lost in the enthusiasms of the new age."
*Chaos: "Large models are often chaotic, which means that very small changes in the input variables produce very large changes in the output variables." Errors present in the input variables are magnified in the output. If feedback mechanisms are present, "it is quite possible for systems to operate on the noise alone."
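The chaos hazard in the last point can be illustrated with the logistic map, a standard textbook example of a chaotic system (a minimal sketch, not any model Brignell discusses): two inputs differing by one part in ten billion soon produce entirely different outputs.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n), chaotic at r = 4.
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # input perturbed by one part in 1e10

# The largest gap between the two runs; for this map the tiny input
# error is roughly doubled at every step, so it grows to order one.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The same mechanism operates in any large model with chaotic dynamics: input noise at the limit of measurement precision can dominate the output.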
 
Brignell comments: "Many of the computer models that receive great media coverage and political endorsement fail under some of these headings; and, indeed, some fail under all of them. Yet they are used as the excuse for profound, and often extremely damaging, policies that affect everyone. That is why computer models are dangerous tools."
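The testability criterion above can be sketched with a toy example (a hypothetical Euler integration of exponential decay, not any model from the text): a numerical model is checked against a simple system whose exact answer is known, and only then is there any ground for trusting it elsewhere.

```python
import math

def euler_decay(y0, k, t_end, steps):
    """Naive Euler integration of dy/dt = -k*y: the 'model' under test."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y -= k * y * dt
    return y

# The simple system has a known analytic solution: y(t) = y0 * exp(-k*t).
exact = 1.0 * math.exp(-0.5 * 2.0)

coarse = euler_decay(1.0, 0.5, 2.0, 10)     # crude time step
fine = euler_decay(1.0, 0.5, 2.0, 10000)    # fine time step

# Comparing against the known answer exposes the coarse model's error,
# a check that is impossible when no reference reality is available.
```

A model that has never faced such a comparison offers no such reassurance, which is the force of Brignell's point.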
 
Brignell's comments on computer models can be found [http://www.numberwatch.co.uk/computer_modelling.htm here].
 
Another problem seen in some models is the presentation of input assumptions as predictions of the model. The researcher produces a theory and wishes to test it. He crafts a model to implement his theory, uses it to make predictions, and then checks whether those predictions match the real-world situation. If they do, they support the accuracy of his model; if the model proves accurate, it in turn provides support for his theory. If the predictions are wrong, the researcher changes his model and tries again. If the model cannot be made to produce accurate predictions, he must look for other ways to support his theory. The predictions of the theory are therefore not predictions of the model; rather, they are design constraints on it. If the model does not produce results that meet those constraints, it is judged wrong and is changed until it does.
 
A classic example of this lies in models used to support theories about the long term effects of global warming. A researcher theorises that one such effect will be a rise in sea level. He then attempts to support this theory by modelling the prediction that there will be a rise, using it to produce data which can be tested against real world data. If there is agreement, then he can use the accuracy of his model as evidence to support his theory. If there isn't, he must change his model in an effort to find such agreement. If he cannot make the model agree he must ultimately abandon his theory. The model shows that the sea level will rise; it was written to do so; the fact that it does so is not of itself evidence that the sea level will indeed rise!
 
There is a simple way to ascertain whether a prediction is of the theory or of the model: ask "If the model doesn't support the prediction, will the researcher change the model so that it does?" If the answer is no, the prediction came from the model; if yes, it came from the researcher.
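This circularity can be made concrete with a toy sketch (all numbers are synthetic and the one-parameter "model" is hypothetical, not real sea-level work): a parameter is tuned until the model reproduces the assumed observations, after which the model's "prediction" of a rise is simply the design constraint it was fitted to satisfy.

```python
# Assumed observations: a rising trend, in arbitrary units over 4 periods.
observed = [0.0, 1.8, 3.6, 5.4, 7.2]

def model(sensitivity, periods=4):
    """Toy model: the output trend scales with the tunable parameter."""
    return [sensitivity * t for t in range(periods + 1)]

def misfit(sensitivity):
    """Sum of squared differences between model output and observations."""
    return sum((m - o) ** 2 for m, o in zip(model(sensitivity), observed))

# Tune the parameter by a crude scan until the model matches the data.
best = min((s / 100 for s in range(0, 500)), key=misfit)

# The tuned model now 'predicts' a rise, but that rise was built in:
# the parameter was chosen precisely so that the output would match.
```

The model's agreement with the trend it was fitted to reproduce is evidence about the fitting procedure, not independent evidence for the theory.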
== DDT ==