The idea of Statistical Inference
The general idea of statistical inference is to find out a certain "truth" about a population by investigating a sample rather than the entire population. The investigation can be descriptive (for example, to find out the true occurrence of a disease) or analytical (for example, to test the hypothesis that people who have eaten home-preserved green olives are at greater risk of developing botulism than those who did not eat them).

Statistical inference is the process of drawing conclusions about the entire population based on the investigation of a sample; it is therefore a form of generalisation.

This process differs from causal inference, which is explained elsewhere.
=Significance tests=

To make the conclusions objective, statistical tests are usually applied, with the aim of reaching a probabilistic 'yes' or 'no' decision about a difference (or 'effect') based on the observed data. Such statistical tests are also called significance tests; they all have in common that they require a Null Hypothesis (H<sub>0</sub>): "There is no difference (no effect) between the groups that we compare".

A Null Hypothesis (H<sub>0</sub>) always has a complementary Alternative Hypothesis (H<sub>1</sub>): "There is a difference between the groups that we compare" (in other words: the Null Hypothesis is not true).

The aim of a significance test is to help us decide whether or not to reject the Null Hypothesis.
In our example, we could write the Null Hypothesis like this:

"There is no difference in the occurrence of botulism in the population between the people who have eaten home-preserved green olives (=exposed) and those who did not (=unexposed)".

Such a hypothesis makes it easier to design a study to test it: we take a representative sample of the exposed people and a representative sample of the unexposed. In both samples, we measure the occurrence of botulism and compare the results.
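Comparing the measured occurrence in the two samples can be sketched as follows. The counts are hypothetical, chosen only to illustrate the comparison; they are not data from the text:

```python
# Hypothetical sample results (illustrative numbers only, not from the text):
# the occurrence (attack rate) of botulism measured in each sample.
exposed_cases, exposed_n = 12, 20       # ate home-preserved green olives
unexposed_cases, unexposed_n = 3, 30    # did not eat the olives

attack_rate_exposed = exposed_cases / exposed_n
attack_rate_unexposed = unexposed_cases / unexposed_n
risk_ratio = attack_rate_exposed / attack_rate_unexposed

print(f"attack rate (exposed):   {attack_rate_exposed:.0%}")
print(f"attack rate (unexposed): {attack_rate_unexposed:.0%}")
print(f"risk ratio: {risk_ratio:.1f}")
```

With these illustrative counts, the exposed attack rate is six times the unexposed one; the question the significance test answers is whether such a difference could plausibly arise by chance.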
The next challenge is to decide how different the results must be before we reject H<sub>0</sub>.

This is where the p-value helps our decision. The p-value tells us the probability (p) of finding a difference at least as large as the one we have observed (between our samples) if the Null Hypothesis H<sub>0</sub> is true. The lower this p-value, the less plausible it is that chance alone explains the difference between the results in our samples when there really is no difference in the total population.

This requires that we investigate and quantify the probability of observing results that differ from what we would expect under H<sub>0</sub>.
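One simple way to make the p-value concrete is a permutation test: if H<sub>0</sub> is true, the exposure labels carry no information, so reshuffling them shows how often chance alone produces a difference at least as large as the observed one. The counts below are hypothetical illustrations, not data from the text:

```python
import random

# Hypothetical outbreak data (illustrative numbers, not from the text):
# 12 of 20 olive eaters (exposed) and 3 of 30 non-eaters (unexposed) fell ill.
exposed = [1] * 12 + [0] * 8      # 1 = botulism case, 0 = no case
unexposed = [1] * 3 + [0] * 27

def risk_difference(group_a, group_b):
    """Difference in observed occurrence (attack rate) between two groups."""
    return sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

observed = risk_difference(exposed, unexposed)

# Permutation test: under H0 the exposure labels are interchangeable, so we
# reshuffle them many times and count how often chance alone produces a
# difference at least as large as the one observed.
random.seed(0)
pooled = exposed + unexposed
n_exposed = len(exposed)
n_perms = 10_000
extreme = 0
for _ in range(n_perms):
    random.shuffle(pooled)
    if risk_difference(pooled[:n_exposed], pooled[n_exposed:]) >= observed:
        extreme += 1

p_value = extreme / n_perms
print(f"observed risk difference: {observed:.2f}, p-value: {p_value:.4f}")
```

With such a large observed difference, almost no reshuffling reproduces it, so the p-value comes out very small: chance alone is an implausible explanation.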
=Making a decision on H<sub>0</sub>=

If we have convinced ourselves that the occurrence of botulism is significantly different between the exposed (who ate olives) and the non-exposed, then we can decide to reject the Null Hypothesis.

In taking a decision on H<sub>0</sub>, we can make two possible errors:
The null hypothesis is true but rejected: Type I error (α-error)

The alternative hypothesis is true, but the null hypothesis is not rejected: Type II error (β-error)

Please note that statistical tests only allow us to decide whether to reject H<sub>0</sub> or not. This is different from deciding to accept H<sub>0</sub>, or accept H<sub>1</sub>.
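The Type I error can be illustrated by simulation: when H<sub>0</sub> is really true, a test applied at significance level α = 0.05 will still reject it in about 5% of experiments. The sketch below uses a two-proportion z-test (normal approximation) on simulated data; all numbers are illustrative assumptions:

```python
import math
import random

def two_proportion_p_value(cases_a, n_a, cases_b, n_b):
    """Two-sided p-value for a z-test comparing two proportions
    (normal approximation)."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    if pooled in (0.0, 1.0):          # no variation: no evidence against H0
        return 1.0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Simulate experiments in which H0 is TRUE: both groups share the same risk.
random.seed(1)
true_risk, n, alpha = 0.3, 200, 0.05
n_experiments = 2000
rejections = 0
for _ in range(n_experiments):
    cases_a = sum(random.random() < true_risk for _ in range(n))
    cases_b = sum(random.random() < true_risk for _ in range(n))
    if two_proportion_p_value(cases_a, n, cases_b, n) < alpha:
        rejections += 1               # a Type I error: H0 is true but rejected

type_i_rate = rejections / n_experiments
print(f"Type I error rate at alpha={alpha}: {type_i_rate:.3f}")
```

The simulated rejection rate hovers around 0.05, which is exactly what the significance level α promises: the probability of a Type I error when H<sub>0</sub> is true.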
=Problems in applying significance tests in observational studies=

In this example, we applied significance tests to an observational study: an outbreak occurred within a population at risk (guests in a restaurant) and, retrospectively, we test hypotheses on data observed from events that took place before we formulated the hypotheses.
One criticism often raised about interpreting such epidemiological studies is that no random assignment of subjects to groups (exposed, non-exposed) took place. Randomisation aims to achieve an equal distribution of other risk factors that have not been measured (or even discovered). The gold standard for such studies is the randomised controlled trial, preferably one in which the investigators and subjects are blinded to the assignment to the exposed and unexposed groups.

In such designs, where everything except the exposure of interest is randomised, the significance tests produce a p-value that truly reflects the probability that chance produced the differences in results between study groups.

In observational studies, we must be aware that we observe 'experiments of nature' (such as outbreaks), where assigning people to exposed and non-exposed is rarely a fully random process. For this reason, many critics say that the p-value in such circumstances should be considered descriptive, and caution should be exercised when making statistical inference.

Part of this problem is related to the concepts of bias and confounding.
==FEM PAGE CONTRIBUTORS 2007==

; Editor
: Arnold Bosman

; Contributors
: Manuel Dehnert
: Arnold Bosman

[[Category:Significance and Confidence]]

Latest revision as of 21:30, 25 March 2023