Category:Analysis of Surveillance Data
Principles of surveillance emphasize, among other things, continuous and systematic processes, the need to collect data from different sources, and regular or ad hoc outputs. The validity and accuracy of data are usually ascertained during the collection phase. Data quality has several aspects related to the processes that generate and deliver the data, and descriptive analysis can reveal irregularities and deviations from the usual pattern, which should trigger the first questions about quality.
Analyzing surveillance data consists of checking its quality and performing descriptive epidemiology, organizing the data by time, place, and person, in order to understand who is ill, where, and when, and to detect abnormal health events such as clustering or changes in trend that require investigation and public health action.
The analysis of time-series data includes detecting unusual occurrences in relation to threshold values. Long-term trends may reflect the success of preventive or control measures, or point to future needs for public health action. For these purposes, smoothing techniques can remove cyclical patterns and other interfering factors from the data. The spatial distribution of cases reflects the natural clustering of people with similar population characteristics and of environmental factors, and its correlation with disease occurrence can be evaluated. Deaths, diseases, disabilities, and any other individual health events may be analyzed regularly or on demand during the response phase.
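As an illustration of the time-series ideas above, the sketch below shows one simple way such an analysis might be set up: a weekly case-count series is smoothed with a moving average and the latest week is compared against a historical threshold. The weekly counts, the window size, and the mean-plus-two-standard-deviations threshold are illustrative assumptions, not part of the original text or of any specific surveillance system.

```python
# Minimal sketch (hypothetical data and function names): smoothing a weekly
# case-count series with a centred moving average and flagging a week whose
# count exceeds a simple historical threshold (mean + 2 standard deviations).
from statistics import mean, stdev

def moving_average(counts, window=3):
    """Return a centred moving average; edge values are left unchanged."""
    half = window // 2
    smoothed = list(counts)
    for i in range(half, len(counts) - half):
        smoothed[i] = mean(counts[i - half:i + half + 1])
    return smoothed

def exceeds_threshold(current_count, historical_counts, k=2):
    """Flag a count above the historical mean plus k standard deviations."""
    threshold = mean(historical_counts) + k * stdev(historical_counts)
    return current_count > threshold, threshold

# Hypothetical weekly counts of a notifiable disease (weeks 1-12).
weekly_counts = [4, 6, 5, 7, 5, 6, 8, 7, 6, 9, 8, 21]

smoothed = moving_average(weekly_counts, window=3)
is_alert, threshold = exceeds_threshold(weekly_counts[-1], weekly_counts[:-1])

print("Smoothed series:", [round(x, 1) for x in smoothed])
print(f"Threshold: {threshold:.1f}, latest week: {weekly_counts[-1]}, alert: {is_alert}")
```

In practice, surveillance systems use more elaborate methods (for example, accounting for seasonality and past outbreaks when setting thresholds), but the same logic of smoothing the series and comparing current counts against an expected level applies.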
Credits
FEM Editors 2007
- Arnold Bosman
- Denis Coulombier
Original Authors
- Christophe Paquet
- Arnold Tarantola
- Philippe Quenel
- Nada Ghosn
FEM Contributors
- Denis Coulombier
- Lisa Lazareck
- Patty Kostkova
- Vladimir Prikazsky