Professor '''John Brignell''' held the Chair in Industrial Instrumentation at the University of Southampton (UK) from 1980 to the late 1990s. [http://www.ecs.soton.ac.uk/~jeb/cv.htm]

Brignell retired from his academic career in the late 1990s and now devotes his time to debunking the use of what he claims to be false statistics common in much of today's media. He presents his views on his website ''Numberwatch'', launched in July 2000, which is "devoted to the monitoring of the misleading numbers that rain down on us via the media. Whether they are generated by Single Issue Fanatics (SIFs), politicians, bureaucrats, quasi-scientists (junk, pseudo- or just bad), such numbers swamp the media, generating unnecessary alarm and panic. They are seized upon by media, hungry for eye-catching stories. There is a growing band of people whose livelihoods depend on creating and maintaining panic." [http://www.numberwatch.co.uk/number%20watch.htm]
 
Brignell has expressed delight at the "encouragement and support I have received from some of the giants of the pro-science movement in the USA -- in no particular order [[Steve Milloy]], [[Alan Caruba|Alan Coruba]] [''sic''], [[James Randi]], [[Bob Caroll]], [[Michael Fumento]] and [[S. Fred Singer]]." [http://www.numberwatch.co.uk/term%20end.htm]

Brignell has self-published two books debunking the mathematics behind media scares, ''Sorry, wrong number!'' and ''The epidemiologists: Have they got scares for you!'', under the name Brignell Associates.

== Brignell's Point of View ==

Brignell is a trained mathematician and scientist who has spent much of his life working with statistics. His point of view can be summed up as follows:

*A theory is only as good as the evidence that supports it. Theories supported by weak evidence should not be given much credence.
*Evidence drawn from mathematics is only as sound as the underlying mathematics. This is particularly true in the field of statistics.
*Most people are not qualified to judge the soundness of mathematics presented as evidence, and instead rely on the testimony of experts. Many people who present themselves as experts are either not qualified to judge or are self-interested.
*The result is that much evidence is presented as sound when it is not. Consequently, many theories are given a great deal more credence by most people than they deserve.

Brignell has made it his task to seek out media articles in which conclusions are drawn from unsound mathematical evidence and to debunk them on his website.

A number of popular, politically correct theories are based on unsound mathematical evidence. In choosing to debunk such theories Brignell occasionally provokes the wrath of self-interested supporters. He seems to enjoy this.
 
== Statistical Significance (P<0.05) ==
 
 
Brignell suggests that one common source of error in experiments is the use of insufficiently stringent significance levels in statistical testing, particularly P<0.05.
 
 
If one applies a statistical test at a level of significance of 0.05, that means that one is accepting a probability of 0.05 of a false positive; that is, one is accepting that there is a chance of 1 in 20 that the result will appear significant when it isn't. Note that these odds apply to all tests carried out, not just the ones that return significant results. This has been called the 1 in 20 lottery.
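The "1 in 20 lottery" can be checked by simulation. The following is a minimal Python sketch (illustrative only, not taken from Brignell's site): it runs many two-sample tests on data for which the null hypothesis is true by construction, and counts how often P < 0.05 turns up anyway.

<pre>
import math
import random

random.seed(1)

def p_two_sided(z):
    # Two-sided p-value for a standard normal test statistic.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def null_experiment(n=100):
    # Two samples drawn from the SAME distribution, so any "significant"
    # difference is a false positive by construction.
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)  # known variance 1
    return p_two_sided(z)

trials = 10000
hits = sum(null_experiment() < 0.05 for _ in range(trials))
print(hits / trials)  # close to 0.05, i.e. about 1 in 20
</pre>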
 
 
This can cause problems when combined with publication bias. Studies that produce significant results tend to be published; ones that don't tend not to be. However, the number of false positives depends on the total number of studies carried out, not on the number published. In simple terms, if 1 in 2 studies produce significant results, then 50% of all studies will be published, of which 10% will be bogus. If 1 in 5 studies produce significant results, then 20% will be published, of which 25% will be bogus. If 1 in 10 studies produce significant results, then 10% will be published, of which 50% will be bogus. How many studies produce significant results? What percentage of published studies are bogus? The numbers are simply not known.
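The arithmetic above can be written out directly. A short sketch using the same illustrative fractions: if only significant studies are published, the share of published studies that are bogus is 0.05 divided by the fraction of all studies that came out significant.

<pre>
for significant_fraction in (0.5, 0.2, 0.1):
    bogus_share = 0.05 / significant_fraction
    print("%.0f%% significant -> %.0f%% of published studies bogus"
          % (100 * significant_fraction, 100 * bogus_share))
# 50% significant -> 10% of published studies bogus
# 20% significant -> 25% of published studies bogus
# 10% significant -> 50% of published studies bogus
</pre>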
 
 
Another source of problems occurs when this level of significance is used in combination with categorisation. This occurs when the data in a study is broken up into a number of categories and a statistical test is applied in each category. The effect is that of having multiple goes at the lottery: the more categories, the better the chance of producing a false positive. If the test is applied to 1 category, the odds of getting at least one bogus result are 1 in 20 (0.05). If there are 4 categories, the odds are nearly 1 in 5 (0.185). If there are 8 categories, the odds are about 1 in 3 (0.337). If there are 12 categories, the odds are about 9 in 20 (0.46). If there are 16 categories, the odds are about 5 in 9 (0.56). The odds continue to rise with the number of categories.
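Assuming the category tests are independent, the chance of at least one false positive among k tests is 1 - 0.95^k, which reproduces the odds quoted above:

<pre>
for k in (1, 4, 8, 12, 16):
    print(k, round(1 - 0.95 ** k, 3))
# prints: 1 0.05, 4 0.185, 8 0.337, 12 0.46, 16 0.56
</pre>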
 
 
Combining these sources of error compounds them. If 100 studies are conducted at a significance level of 0.05, each categorising its data into 10 categories, then each study has a 1 - 0.95^10 (about 0.40) chance of producing at least one false positive. If the 50 studies that report significant results are then published, about 40 of them will be bogus and only 10 will have found any real significance.
 
 
If one hears that a study has been published that has "found a link between thyroid cancer and eating chocolate in women between the ages of 20 and 30 who eat more than three pieces of chocolate per day", how likely is it that the study is bogus? If one assumes that data was collected from a randomly selected sample of people, one could hypothesise that the data was categorised by sex (male, female), age (less than 20, 20 to 30, 30 to 40, over 40), and number of pieces eaten (none, 1 to 3, more than 3), for a total of 24 categories. This would suggest that the basic chance of the study being bogus is about 70% (1 - 0.95^24 is about 0.71). Incorporating publication bias might raise it to over 90%. This of course assumes that the level of significance used is 0.05, a safe assumption in most such studies.
 
 
Brignell argues that the use of a significance level of 0.05 is inherently unsound. He writes: "It is difficult to generalise, but on the whole P<0.01 would normally be considered significant and P<0.001 highly significant."
 
[http://www.numberwatch.co.uk/significance.htm]
 
 
Brignell suggests: "Many leading scientists and mathematicians today believe that the emphasis on significance testing is grossly overdone. P<0.05 had become an end in itself and the determinant of a successful outcome to an experiment, much to the detriment of the fundamental objective of science, which is to understand."
 
 
== Fitting Linear Trends ==
 
 
Brignell states "One of the major problems in using a finite sequence of data to represent a source that is effectively infinitely long is that the process of chopping off the ends is a distortion. [...] The other is the fact that, even when there is no linear trend in the original process, there is always one in the finite block of data taken to represent it."
 
 
Picture the sine wave y = sin(x). It is a continuous curve centred on the x-axis; the x-values are in radians, and the y-values range between -1 and 1. Let's take some samples from the graph.
 
 
*Sample 1: The points are (0.44,0.42), (0.52,0.50), (0.61,0.57), (0.70,0.64) and (0.79,0.71), and the fitted line is y = 0.82 x + 0.07
*Sample 2: The points are (0.79,0.71), (0.87,0.77), (0.96,0.82), (1.05,0.87) and (1.13,0.91), and the fitted line is y = 0.57 x + 0.26
*Sample 3: The points are (1.13,0.91), (1.22,0.94), (1.31,0.97), (1.40,0.98) and (1.48,1.00), and the fitted line is y = 0.26 x + 0.62
*Sample 4: The points are (1.48,1.00), (1.57,1.00), (1.66,1.00), (1.75,0.98) and (1.83,0.97), and the fitted line is y = -0.09 x + 1.13
*Sample 5: The points are (1.83,0.97), (1.92,0.94), (2.01,0.91), (2.09,0.87) and (2.18,0.82), and the fitted line is y = -0.42 x + 1.74
 
 
A sine wave is a continuous, periodic curve; it has no linear trend. If one were to fit a line of best fit over whole periods, the best one could do would be the x-axis, y = 0. Yet none of these samples gives that line, and the lines they do give vary dramatically. Depending on which data points are used, it is possible to get just about any line of best fit one could desire. The linear trend found is there not because of any underlying cause, but because it is generated by the act of selecting the data set.
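The samples above can be reproduced with an ordinary least-squares fit over short windows of the sine curve. A minimal Python sketch (the window start points are chosen here to match the samples listed; any short window shows the same effect):

<pre>
import math

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx, my - (sxy / sxx) * mx

step = math.pi / 36  # about 0.087 radians between data points
for start in (5, 9, 13, 17, 21):  # index of the first point in each window
    xs = [(start + i) * step for i in range(5)]
    ys = [math.sin(x) for x in xs]
    slope, intercept = fit_line(xs, ys)
    print("y = %.2f x + %.2f" % (slope, intercept))
# slopes fall from about 0.82 through zero to about -0.42
</pre>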
 
 
As an alternative example, consider a set of points generated at random. There can obviously be no linear trend to such data. And yet, a line of best fit can be applied to any subset of these points, and a linear trend deduced from it. Again, the linear trend found is there, not because of any underlying cause, but because it is generated by the act of selecting the subset.
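The same point can be made in a few lines of Python (a sketch with invented random data): fit a line to any subset of pure noise and the calculated slope is essentially never zero.

<pre>
import random

def slope(xs, ys):
    # Least-squares slope only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(2)
ys_all = [random.gauss(0.0, 1.0) for _ in range(100)]
for lo, hi in ((0, 10), (40, 50), (90, 100)):
    xs = list(range(lo, hi))
    ys = ys_all[lo:hi]
    print("points %d-%d: slope %+.2f" % (lo, hi, slope(xs, ys)))
</pre>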
 
 
This example illustrates the "greatest hazard of trend estimation. The trend is a property of the data points we have and not of the original process from which they came. As we can never have an infinite number of readings there is always an error introduced by using a restricted number of data points to represent a process in the real world. Naturally the error decreases rapidly as we increase the number of data points, but it is always there and, in fact, the calculated trend is never zero".
 
 
It is not enough to fit a line to a set of points and declare a linear trend. One must understand the underlying data in order to know whether there actually is a real trend. It may simply be that the trend one sees is an artifact of the selection of data points, and doesn't actually exist in the underlying process.
 
 
== The End Effect ==
 
 
Brignell states "A major problem is the end effect, which relates to the huge changes in apparent slope that can be wrought just by the choice of where to start the data selection." "The reason for this is that in the calculation of the slope, the contribution of each data point is weighted according to its distance from the centre", so variation in the end points has much more effect on the final result than variation in the central points.
 
 
If we start with the nine points (1,1), (2,2), (3,3), ..., (9,9), the slope of the fitted line is 1. (Obviously.) If we replace the points (3,3) through (7,7) with (3,8), (4,8), (5,8), (6,8) and (7,8), the slope of the fitted line is 0.83: the change has flattened the line. If we instead replace only the end point (1,1) with (1,4), the slope of the fitted line is 0.80. Changing a single end point thus has a greater effect on the slope than changing five central points.
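A sketch of the arithmetic behind this example. The least-squares slope is the sum of (x - mean x)(y - mean y) divided by the sum of (x - mean x)^2, so each y-value's contribution is weighted by its distance from the centre, which is why the end points matter most:

<pre>
def slope(pts):
    xs, ys = zip(*pts)
    n = len(pts)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

base = [(x, x) for x in range(1, 10)]                    # (1,1) ... (9,9)
flat_mid = [(x, 8 if 3 <= x <= 7 else x) for x in range(1, 10)]
new_end = [(1, 4)] + [(x, x) for x in range(2, 10)]
for pts in (base, flat_mid, new_end):
    print(round(slope(pts), 2))  # 1.0, then 0.83, then 0.8
</pre>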
 
 
Brignell offers the following graph as an example of the end-point effect. [http://www.numberwatch.co.uk/2003%20May.htm#wheeze] It contains 15 data points displaying a strong linear trend, yet removing two points from either end removes the trend, so the trend is really supported by only those four data points.
 
 
== Trojan Numbers ==
 
 
The term Trojan number was coined by Brignell to describe a number used by authors to "get their articles or propaganda into the media." "The allusion is, of course, to the mythical stratagem whereby the Greeks infiltrated the city of Troy inside a giant wooden horse." The number looks impressive, but on further examination isn't.
 
  
 

== DDT ==

On his website Brignell rails against what he calls the "deadly legacy of Rachel Carson" in curtailing the use of DDT. The Environmental Protection Agency, he wrote, "and its allies used their influence with international organisations to enforce the ban throughout the world. Some poor countries were actually blackmailed into banning it under threat of withdrawal of aid. As a result two and a half million people die of malaria every year, most of them poor children in Africa." [4]

Brignell claimed that the resurgence of malaria in Sri Lanka was a case in point: Sri Lanka banned DDT in 1964 "under the influence of the sainted Rachel", whose book ''Silent Spring'' was published in 1962. Brignell notes that the number of cases of malaria had been reduced to 17 in 1963 before rising once more to 2.5 million by the end of the decade.

The author of the Deltoid blog, Tim Lambert, took issue with Brignell's and others' claims. "Now when you think about it, the story that they tell just isn't credible. If DDT spraying had almost eliminated malaria, and they got a new outbreak, then no environmentalists would be able to stop them from resuming spraying," he wrote. On investigating, Lambert found that Sri Lanka did restart spraying with DDT, but the target mosquito had grown resistant to it.

"So in 1977 they switched to the more expensive malathion and were able to reduce the number of cases to about 50,000 by 1980. In 2004, the number was down to 3,000, without using DDT," he wrote. "And the reason why they stopped spraying in 1964? It wasn't environmentalist pressure. With only 17 cases in 1963, they didn't think it was needed any more ... The anti-environmentalist version of what happened is a hoax. That doesn't mean that all the writers above were being deliberately misleading: they might be just repeating what another anti-environmentalist wrote and be unaware of the true story," he wrote. [5]

Brignell responded to Tim Lambert's blog entry as follows: "Tim Lambert supplies a reference that is certainly well worth reading, which suggests that DDT was abandoned rather than banned. It is possible that the Government in Sri Lanka, when it decided its budget priorities, was unaware of the international hype surrounding Rachel Carson at the time, but whatever the motivation the result was the same." [6]

Brignell and his ideological supporters believe that the use of DDT, by reducing the number of deaths per year from malaria from 2.8 million in 1948 to 18 in 1963, had by 1970 saved about 56 million lives. Brignell states that the number of deaths due to malaria "makes The Holocaust look like a dress rehearsal." However, their argument willfully omits a number of important facts.

For starters, Rachel Carson herself was not opposed to all pesticide use. Prophetically, she worried that widespread agricultural use of pesticides would endanger efforts to control malaria, typhus and other diseases. In ''Silent Spring'' she wrote, "No responsible person contends that insect-borne disease should be ignored. The question that has now urgently presented itself is whether it is either wise or responsible to attack the problem by methods that are rapidly making it worse. The world has heard much of the triumphant war against disease through the control of insect vectors of infection, but it has heard little of the other side of the story - the defeats, the short-lived triumphs that now strongly support the alarming view that the insect enemy has been made stronger by our efforts. Even worse, we may have destroyed our very means of fighting." Carson noted that the widespread use of DDT created selection pressure that led to the emergence of DDT-resistant mosquitoes and flies.

As science writer Laurie Garrett notes in her 1994 book ''The Coming Plague: Newly Emerging Diseases in a World Out of Balance'', the early success of efforts to control malaria contributed to the disease's later resurgence. The effort to control the disease was led by malariologist Paul Russell, who promised in 1956 that a multimillion-dollar effort could eliminate the disease by 1963: "Thus, in 1958 Russell's battle for malaria eradication began, backed directly by $23.3 million a year from Congress. Because Russell had been so adamant about the time frame, Congress stipulated that the funds would stop flowing in 1963. ... It was a staggering economic commitment, the equivalent of billions of dollars in 1990." By 1963, however, malaria "had indeed reached its nadir. But it had not been eliminated. ... But a deal's a deal. Russell promised success by 1963, and Congress was in no mood to entertain extending funds for another year, or two. As far as Congress was concerned, failure to reach eradication by 1963 simply meant it couldn't be done, in any time frame. And at the time virtually all the spare cash was American; without steady infusions of U.S. dollars, the effort died abruptly." Worse yet,

:Thanks to the near-eradication effort, hundreds of millions of people now lacked immunity to the disease, but lived in areas where the Anopheles [mosquitos that carry the disease] would undoubtedly return. Pulling the plug abruptly on their control programs virtually guaranteed future surges in malaria deaths, particularly in poor countries lacking their own disease control infrastructures. As malaria relentlessly increased again after 1963, developing countries were forced to commit ever-larger amounts of scarce public health dollars to the problem. India, for example, dedicated over a third of its entire health budget to malaria control. ...
:At the very time malaria control efforts were splintering or collapsing, the agricultural use of DDT and its sister compounds was soaring. Almost overnight resistant mosquito populations appeared all over the world. ... By the time the smallpox campaign was approaching victory in 1975, parasite resistance to chloroquine and mosquito resistance to DDT and other pesticides were so widespread that nobody spoke of eliminating malaria. (''The Coming Plague'', pp. 46-52)

== Counting the dead ==

In the discussion that followed a study published in ''The Lancet'' estimating the number of people killed in Iraq since the invasion, Brignell wrote that the use of a relative risk figure of 1.5 was inappropriate. "A relative risk of 1.5 is not acceptable as significant," he wrote. [7] Brignell argues that increases of risk of less than 100% should be ignored.

However, as Tim Lambert pointed out, the increased risk is statistically significant. "You won't find support for Brignell's claim in any conventional statistical text or paper. To support his claim he cites a book called Sorry, wrong number!. Trouble is, that book was written by ... John Brignell. Not only that, it was published by ... John Brignell," he wrote. It should be noted that Brignell does provide a number of supporting statements from recognised authorities. [8]
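Whether a relative risk of 1.5 is statistically significant depends on the size of the study, not on the value 1.5 itself. A minimal Python sketch using the standard log-scale confidence interval for a relative risk (the counts below are invented for illustration, not taken from the Lancet study):

<pre>
import math

def rr_confint(cases_a, n_a, cases_b, n_b, z=1.96):
    # 95% confidence interval for the relative risk of group A vs group B,
    # using the usual normal approximation on the log scale.
    rr = (cases_a / n_a) / (cases_b / n_b)
    se = math.sqrt(1 / cases_a - 1 / n_a + 1 / cases_b - 1 / n_b)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# The same relative risk of 1.5 with different sample sizes:
print(rr_confint(15, 1000, 10, 1000))          # CI about (0.68, 3.3): crosses 1
print(rr_confint(1500, 100000, 1000, 100000))  # CI about (1.39, 1.62): significant
</pre>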

Lambert suggests a thought experiment:

:Suppose we had perfect records of every death in Iraq and there were 200,000 in the year before the invasion, and 300,000 in the year after. Then the relative risk would be 1.5 and Brignell would dismiss the increase as not significant even though in this case we have absolute certainty that there were 100,000 extra deaths. [9]

This is what is called a "straw man" argument: making an unrelated claim and then attacking it as though it were the original claim. The statistics in these surveys are drawn not from hundreds of thousands of records but from dozens, and the samples are not "perfect records"; they are samples drawn from populations, and so are subject to random sampling error. The "thought experiment" is therefore invalid.

In fact, the journal ''Science'' published in 1995 a list of published risks for cancer from the previous 8 years; among them were:

*Smoking more than 100 cigarettes in a lifetime: relative risk 1.2 for breast cancer (February 1990)
*Lengthy occupational exposure to dioxin: relative risk 1.5 for all cancers (January 1991)
*Regular use of high-alcohol mouthwash: relative risk 1.5 for mouth cancer (June 1991)
*Use of phenoxy herbicides on lawns: relative risk 1.3 for malignant lymphoma in dogs (September 1991)
*Weighing 3.6 kilograms or more at birth: relative risk 1.3 for breast cancer (October 1992)
*Occupational exposure to electromagnetic fields: relative risk 1.38 for breast cancer (June 1994)
*Ever having used a sun lamp: relative risk 1.3 for melanoma (November 1994)
*Abortion: relative risk 1.5 for breast cancer (November 1994)
*Consuming olive oil only once a day or less: relative risk 1.25 for breast cancer (January 1995)

(Sizing Up the Cancer Risks, ''Science'' 269, p. 165, 1995.) Although the validity of some of these relationships is still controversial, it is clear that the claim "A relative risk of 1.5 is not acceptable as significant" does not hold as far as scientific publication is concerned.

Publication, of course, is not the same as significance. Indeed, the list above is consistent with Brignell's contention that publication bias by itself produces apparent relative risks of around 1.6.

Attention should also be drawn to the publication of studies which produce contradictory results. For example, the Nurses' Health Study found that the use of hormone treatments reduced the risk of cancer (relative risk 1.3), whereas the Women's Health Initiative found that the same hormone treatments increased the risk of cancer (relative risk 1.4). (Gina Kolata, Hormone Studies: What Went Wrong?, ''The New York Times'', 22 April 2003.) These results are not due to faults in the studies themselves, but rather to the acceptance of low levels of relative risk as significant.

== The hole in the ozone layer ==

In another article Brignell complained about restrictions imposed in the U.K. on the disposal of old refrigerators due to concerns about the release of ozone-depleting gases. "It is now illegal to dispose of both the coolant and the insulant in fridges, but in Britain there is no legal way of doing it. All because of a hole in the ozone layer that was probably always there and an unproven theory as to how it was caused," he wrote. [10]

Once more Lambert challenged Brignell's claim and cited Antarctic scientific data. "It is perfectly clear that the hole was not always there. There is not one scrap of evidence to support Brignell's claim. Yet even when confronted with the evidence that proves his claim is false he continues to maintain that it is true," Lambert wrote. [11]

== Second Hand Smoke ==

Bob Carroll, author of ''The Skeptic's Dictionary'', initially accepted Brignell's argument against the EPA's finding that second hand smoke causes lung cancer. However, he changed his mind when he found that the "scientific principle" Brignell used to reject the finding (that relative risks of less than 2 should be disregarded) was not recognised by epidemiologists, just tobacco companies.

== Books ==

*John Brignell, ''Sorry, wrong number!'', Brignell Associates, September 2000. ISBN 0-9539108-0-6
*John Brignell, ''The epidemiologists: Have they got scares for you!'', Brignell Associates, July 2004. ISBN 0-9539108-2-2

== Contact details ==

Department of Electronics & Computer Science,
University of Southampton,
Highfield, Southampton SO17 1BJ
Telephone: (01703) 594450
FAX: (01703) 592901
Email: jeb AT numberwatch.co.uk
Web: http://www.numberwatch.co.uk
