John Brignell

Professor '''John Brignell''' held the Chair in Industrial Instrumentation at the University of Southampton (UK) from 1980 to the late 1990s. [http://www.ecs.soton.ac.uk/~jeb/cv.htm]
Brignell retired from his academic career in the late 1990s and now devotes his time to debunking what he claims are false statistics common in much of today's media. He presents his views on his website ''Numberwatch'', launched in July 2000, which is "devoted to the monitoring of the misleading numbers that rain down on us via the media. Whether they are generated by Single Issue Fanatics (SIFs), politicians, bureaucrats, quasi-scientists (junk, pseudo- or just bad), such numbers swamp the media, generating unnecessary alarm and panic. They are seized upon by media, hungry for eye-catching stories. There is a growing band of people whose livelihoods depend on creating and maintaining panic." [http://www.numberwatch.co.uk/number%20watch.htm]
Brignell has expressed delight at the "encouragement and support I have received from some of the giants of the pro-science movement in the USA -- in no particular order [[Steve Milloy]], [[Alan Caruba|Alan Coruba]] [''sic''], [[James Randi]], [[Bob Caroll]], [[Michael Fumento]] and [[S. Fred Singer]]." [http://www.numberwatch.co.uk/term%20end.htm]
A number of popular, politically correct theories are based on unsound mathematical evidence. In choosing to debunk such theories Brignell occasionally provokes the wrath of self-interested supporters. He seems to enjoy this.
 
== Statistical Significance (P<0.05) ==
 
Brignell suggests that one common source of error in experiments is the use of low levels of significance in statistical testing, particularly P<0.05.
 
If one applies a statistical test at a level of significance of 0.05, that means that one is accepting a probability of 0.05 of a false positive; that is, one is accepting that there is a chance of 1 in 20 that the result will appear significant when it isn't. Note that these odds apply to all tests carried out, not just the ones that return significant results. This has been called the 1 in 20 lottery.
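The "1 in 20 lottery" can be checked with a quick simulation, not taken from the original article: under a true null hypothesis a p-value is uniformly distributed, so a test at a 0.05 significance level flags roughly 1 in 20 of all tests by chance alone.

```python
import random

random.seed(0)

# Simulate many tests where the null hypothesis is actually true.
# Under a true null, the p-value is uniform on [0, 1], so the chance
# of p < alpha (a false positive) is exactly alpha.
alpha = 0.05
trials = 100_000

false_positives = sum(1 for _ in range(trials) if random.random() < alpha)
rate = false_positives / trials

print(f"False positive rate: {rate:.3f}")  # close to 0.05, i.e. 1 in 20
```

The rate applies to every test carried out, which is why the total number of tests, not just the published ones, matters in the sections that follow.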
 
This can cause problems when combined with publication bias. Studies that produce significant results tend to be published, ones that don't tend not to be. However, the number of false positives depends on the total number of studies carried out rather than the number published. In simple terms, if 1 in 2 studies produce significant results, then 50% of them will be published of which 10% will be bogus. If 1 in 5 studies produce significant results, then 20% of them will be published of which 25% will be bogus. If 1 in 10 studies produce significant results, then 10% of them will be published of which 50% will be bogus. How many studies produce significant results? What percentage of published studies are bogus? The numbers are simply not known.
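The worked figures above follow from one piece of arithmetic: if only significant results are published, the share of published studies that are false positives is the false-positive rate (0.05) divided by the overall fraction of studies that come out significant. A short sketch, under that simplifying assumption, reproduces the numbers:

```python
# Share of published studies that are bogus, assuming only significant
# results get published and 5% of all studies are false positives.
alpha = 0.05

for significant_fraction in (0.5, 0.2, 0.1):
    bogus_share = alpha / significant_fraction
    print(f"{significant_fraction:.0%} of studies significant -> "
          f"{bogus_share:.0%} of published studies bogus")
# 50% -> 10%, 20% -> 25%, 10% -> 50%, matching the text above
```

As the text notes, the real-world values of these fractions are simply not known; the sketch only shows how the published bogus share scales.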
 
Another source of problems occurs when this level of significance is used in combination with categorisation. This occurs when the data in a study is broken up into a number of categories, and a statistical test is applied in each category. The effect is that of having multiple goes at the lottery: the more categories, the better the chance of producing a false positive. If the test is applied to 1 category, the odds of getting at least one bogus result are 1 in 20 (0.05). If there are 4 categories, the odds are nearly 1 in 5 (0.185). If there are 8 categories, the odds are about 1 in 3 (0.337). If there are 12 categories, the odds are about 9 in 20 (0.46). If there are 16 categories, the odds are about 5 in 9 (0.56). The odds continue to rise with the number of categories.
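The figures quoted above come from the standard multiple-comparisons formula: with k independent tests at significance level 0.05, the probability of at least one false positive is 1 − (1 − 0.05)^k. A minimal check, assuming the tests are independent:

```python
# Probability of at least one false positive across k independent
# tests, each run at significance level alpha.
alpha = 0.05

for k in (1, 4, 8, 12, 16):
    p_any_bogus = 1 - (1 - alpha) ** k
    print(f"{k:2d} categories: {p_any_bogus:.3f}")
# prints 0.050, 0.185, 0.337, 0.460, 0.560, matching the odds above
```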
 
Combining these sources of error compounds them. If 100 studies are conducted at a significance level of 0.05, each categorising their data into 10 categories, and 50 of them are published, then 40 will be bogus and only 10 will have found any real significance.
 
If one hears that a study has been published that has "found a link between thyroid cancer and eating chocolate in women between the ages of 20 and 30 who eat more than three pieces of chocolate per day", how likely is it that the study is bogus? If one assumes that data was collected from a randomly selected sample of people, one could hypothesise that the data was categorised by sex (male, female), age (less than 20, 20 to 30, 30 to 40, over 40), and number of pieces eaten (none, 1 to 3, more than 3), for a total of 24 categories. This would suggest that the basic chance of the study being bogus is about 70%. Incorporating publication bias might raise it to over 90%. This is of course assuming that the level of significance used is 0.05, a safe assumption in most such studies.
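The 70% figure for the hypothetical chocolate study can be recovered with the same formula, under the hypothesised categorisation (2 sexes × 4 age bands × 3 consumption levels = 24 categories):

```python
# Hypothetical chocolate study: chance of at least one false positive
# when the data is sliced into 24 categories at alpha = 0.05.
alpha = 0.05
categories = 2 * 4 * 3  # sex x age band x pieces eaten = 24

p_bogus = 1 - (1 - alpha) ** categories
print(f"{categories} categories: {p_bogus:.2f}")  # about 0.71
```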
 
Brignell suggests that the use of a significance level of 0.05 is inherently unsound. He suggests that "It is difficult to generalise, but on the whole P<0.01 would normally be considered significant and P<0.001 highly significant."
[http://www.numberwatch.co.uk/significance.htm]
 
Brignell suggests: "Many leading scientists and mathematicians today believe that the emphasis on significance testing is grossly overdone. P<0.05 had become an end in itself and the determinant of a successful outcome to an experiment, much to the detriment of the fundamental objective of science, which is to understand."
== DDT ==