Significant probability to be different from the expected

Revision as of 21:24, 25 March 2023 by Bosmana fem

If we want to know if the results of a measurement are significantly different from the results that we should expect, then we first need to determine what we expect, and then define what we call significantly different.

What do we expect?

In our fictitious example of an outbreak of botulism among people eating in a restaurant, we want to investigate whether eating home-preserved green olives is significantly associated with developing botulism. Suppose we have performed a cohort study among 135 guests who ate in the restaurant where the outbreak of botulism occurred. The following 2x2 table then shows the occurrence of botulism among the guests, for the exposed and the unexposed group:


Botulism outbreak in Restaurant X
                     Ill   Not ill   Total
Ate olives             9        43      52
Did not eat olives     4        79      83
Total                 13       122     135

How do we know the probability of finding these results by chance (that is, if the null hypothesis is really true and there is no association in reality)?

The first step is to determine what results we expect: we know that 13 (9.6%) of the 135 restaurant guests developed botulism. So if the olives are not the cause of the outbreak, then we expect the occurrence of botulism to be the same (9.6%) among those who ate olives and those who did not: 9.6% of 52 ≈ 5 cases among those who ate olives, and 9.6% of 83 ≈ 8 cases among those who did not. This means that we actually should expect the following table:

Expected occurrence of botulism in Restaurant X
                     Ill   Not ill   Total
Ate olives             5        47      52
Did not eat olives     8        75      83
Total                 13       122     135
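The expected counts above follow directly from the marginal totals: for each cell, multiply the row total by the column total and divide by the grand total. A minimal Python sketch (the variable names are illustrative, not from the original):

```python
# Expected cell counts under the null hypothesis of no association:
# expected = (row total x column total) / grand total.
observed = [[9, 43],   # ate olives: ill, not ill
            [4, 79]]   # did not eat olives: ill, not ill

row_totals = [sum(row) for row in observed]        # [52, 83]
col_totals = [sum(col) for col in zip(*observed)]  # [13, 122]
grand_total = sum(row_totals)                      # 135

expected = [[r * c / grand_total for c in col_totals] for r in row_totals]
for row in expected:
    print([round(x, 1) for x in row])  # approx. [5.0, 47.0] and [8.0, 75.0]
```

Rounded to whole numbers, these are the 5, 47, 8 and 75 shown in the expected table.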


The next thing we need to do is to quantify the difference between the observed results in our study and the expected values. For this we use the chi-square statistic: for each cell, the expected number is subtracted from the observed number; this difference is squared and then divided by the expected number. The chi-square statistic is the sum of these results over all cells. The formula is as follows:

χ² = ∑ (observed − expected)² / expected

In our example, χ² = 5.73.
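Applying this formula cell by cell can be sketched in a few lines of Python (the expected counts are recomputed exactly from the marginal totals rather than taken as the rounded whole numbers):

```python
# Chi-square: sum over all four cells of (observed - expected)^2 / expected.
observed = [9, 43, 4, 79]
expected = [52 * 13 / 135, 52 * 122 / 135,   # ate olives: ill, not ill
            83 * 13 / 135, 83 * 122 / 135]   # did not eat olives: ill, not ill

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_square, 2))  # 5.73
```

The largest contributions come from the "ill" cells (9 observed vs. about 5 expected, and 4 observed vs. about 8 expected).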

What does this mean? The larger χ² is, the more the observed data deviate from the assumption of independence (no association). Intuitively, the larger the chi-square value, the lower the probability that the observed difference is due to chance alone. In that case we have evidence against H0, so that it can be rejected in favour of the alternative hypothesis. All we need to do now is to quantify the probability (the p-value) of observing a chi-square value at least this large if the null hypothesis were true.

Is this difference significant?

In our example the p-value that corresponds to χ² = 5.73 with one degree of freedom is 1.6% (or: p = 0.016). That sounds small, but is it small enough to be significant? Well, significance is a convention, and the most common convention is that a p-value equal to or lower than 5% is considered significant. In other words, we reject H0 if the probability that our results are due to chance rather than to a true association is 5% or lower.
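In practice the p-value is read from a chi-square table or computed with statistical software (for example, scipy's chi-square functions). For one degree of freedom it can also be computed with the Python standard library alone, since the chi-square tail probability then reduces to a complementary error function; a minimal sketch (the function name is illustrative):

```python
import math

# For 1 degree of freedom, the chi-square tail probability
# P(X > x) equals erfc(sqrt(x / 2)), so no external library is needed.
def chi2_sf_1df(x):
    return math.erfc(math.sqrt(x / 2))

p = chi2_sf_1df(5.73)
print(round(p, 3))  # approx. 0.017, i.e. about 1.6-1.7%
```

Since this p-value is well below the conventional 5% threshold, the result is considered significant.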

This means that in our example the probability of finding a difference this large in the occurrence of botulism between those who ate olives and those who did not, if chance alone were at work, is 1.6%. In other words, it is highly unlikely that chance alone can explain this difference.

Please note that this does not prove that the difference was caused by eating olives; it only means we can reasonably rule out chance as an explanation. For further reading, see the chapter on Causal Inference.

FEM PAGE CONTRIBUTORS 2007

Editor
Arnold Bosman