Validity and accuracy


Validity

Validity describes whether the results of an experiment really do measure the concept being tested. In other words, it says something about the experiment (or study, or surveillance system) as a whole: the design, the methods and the tests included. If the design or the choice of methods is inappropriate in relation to the aim of the experiment, then the results will not be considered valid, even when the tests used have produced accurate measurements. Likewise, if the design and methods of a study have been chosen appropriately (regarding the aim of the study), then the validity of the results will be determined mostly by the accuracy of the tests used. Therefore, the rest of this article focuses in more detail on the concept of accuracy.

Consistency in the production of good results requires a standardized operating procedure that includes quality assurance, quality control, and quality assessment [1].

Accuracy

The accuracy (performance) of a diagnostic test is expressed in four dimensions: sensitivity, specificity, positive predictive value and negative predictive value. The prevalence of the disease or condition tested for affects some, but not all, of these test performance characteristics [1].


                          Does the person truly have the condition?
                          YES                   NO                    Total
Test result   Positive    A (true positive)     B (false positive)    A + B
              Negative    C (false negative)    D (true negative)     C + D
Total                     A + C                 B + D                 A + B + C + D
Sensitivity = A / (A + C)




The sensitivity of a diagnostic test measures the proportion of those people who have the disease and are correctly detected by the test (test positive). The sensitivity of a test can only be measured among those for whom the diagnosis has already been confirmed by other means than the test under study.

Specificity = D / (B + D)





The specificity of a diagnostic test is the proportion of those people who do not have the disease and are correctly left undetected by the test (test negative). The specificity of a test can only be measured among those for whom the diagnosis has already been confirmed by other means than the test under study.

Predictive values

We perform a diagnostic test because we do not know the diagnosis. The real questions to be answered when performing a diagnostic test are: "What proportion of the patients tested as positive really have the disease?" and "What proportion of the patients tested as negative do not have the disease?". These questions can be answered by calculating the positive and negative predictive values of a test.

Positive predictive value (PPV) = A / (A + B)





The positive predictive value (PPV) of a diagnostic test is the proportion of those testing positive who truly have the disease. The higher the positive predictive value, the higher the likelihood that a person who tests positive truly has the disease. The PPV is high when the specificity is high. A high prevalence of the disease or condition tested for in the population increases the PPV.

Negative predictive value (NPV) = D / (C + D)





The negative predictive value (NPV) of a diagnostic test is the proportion of those testing negative who are truly disease free. The more sensitive a test, the less likely it is that a negative result is a false negative, and hence the higher the negative predictive value. The higher the negative predictive value, the higher the likelihood that a person who tests negative truly is disease free. A high prevalence of the disease or condition tested for in the population decreases the NPV.

Examples

The examples below show how to calculate the sensitivity, specificity, positive predictive value and negative predictive value of a test. They also show that the same test performs differently depending on the prevalence of the disease or condition tested for: if sensitivity and specificity are kept constant, the positive predictive value increases and the negative predictive value decreases with increasing prevalence.

If the prevalence is low, a test with good sensitivity and specificity will still have a low positive predictive value. Even if only a small proportion of non-diseased persons test positive, those false positive results will make up the majority of all positive tests. On the other hand, the negative predictive value will be high, because false negatives will represent only a very small proportion of all negative results.
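This dependence on prevalence can be made explicit by computing the predictive values from sensitivity, specificity and prevalence alone (a Bayes' theorem sketch; the function name is illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV for a test with given characteristics, applied to a
    population with a given disease prevalence (all values in [0, 1])."""
    tp = sensitivity * prevalence                 # true positive fraction
    fp = (1 - specificity) * (1 - prevalence)     # false positive fraction
    fn = (1 - sensitivity) * prevalence           # false negative fraction
    tn = specificity * (1 - prevalence)           # true negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# The same test (80% sensitive, 97% specific) at falling prevalence:
for prev in (0.25, 0.125, 0.01):
    ppv, npv = predictive_values(0.80, 0.97, prev)
    print(f"prevalence {prev:5.3f}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

At 1% prevalence the PPV of this otherwise good test drops to roughly one in five, while the NPV stays above 99%, illustrating the point above.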

Example 1: Cancer test - medium prevalence (250/1000)


                          Cancer test 1
                          YES     NO      Total
Test result   Positive    200     25      225
              Negative    50      725     775
Total                     250     750     1000
Sensitivity = 200 / 250 = 0.80 or 80%
Specificity = 725 / 750 = 0.97 or 97%
Positive predictive value = 200 / 225 = 0.89 or 89%
Negative predictive value = 725 / 775 = 0.94 or 94%

Example 2: Cancer test - low prevalence (125/1000)


                          Cancer test 1
                          YES     NO      Total
Test result   Positive    100     26      126
              Negative    25      849     874
Total                     125     875     1000
Sensitivity = 100 / 125 = 0.80 or 80%
Specificity = 849 / 875 = 0.97 or 97%
Positive predictive value = 100 / 126 = 0.79 or 79%
Negative predictive value = 849 / 874 = 0.97 or 97%

Example 3: Cancer test - high prevalence (500/1000)

                          Cancer test 1
                          YES     NO      Total
Test result   Positive    400     15      415
              Negative    100     485     585
Total                     500     500     1000
Sensitivity = 400 / 500 = 0.80 or 80%
Specificity = 485 / 500 = 0.97 or 97%
Positive predictive value = 400 / 415 = 0.96 or 96%
Negative predictive value = 485 / 585 = 0.83 or 83%
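The trend across the three examples can be checked directly from their table counts; a short Python sketch:

```python
# Counts (TP, FP, FN, TN) from the three cancer-test examples above,
# ordered by rising prevalence: 125/1000, 250/1000, 500/1000.
examples = [
    (100, 26, 25, 849),   # Example 2: prevalence 12.5%
    (200, 25, 50, 725),   # Example 1: prevalence 25%
    (400, 15, 100, 485),  # Example 3: prevalence 50%
]

ppvs = [tp / (tp + fp) for tp, fp, fn, tn in examples]
npvs = [tn / (tn + fn) for tp, fp, fn, tn in examples]

# With sensitivity and specificity held at roughly 80% / 97%,
# the PPV increases and the NPV decreases as prevalence rises.
assert ppvs == sorted(ppvs)
assert npvs == sorted(npvs, reverse=True)
print([round(p, 2) for p in ppvs], [round(n, 2) for n in npvs])
```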

References

1. Sheringham J, Kalim K, Crayford T. Mastering Public Health: A guide to examinations and revalidation. ISBN-13 978-1-85315-781-3.


FEM PAGE CONTRIBUTORS 2007

Editor
Maarten Hoek
Original Authors
Julia Fitzner
Alain Moren
Contributors
Lisa Lazareck
Arnold Bosman
Maarten Hoek
