Surveillance attributes for evaluation
Completeness
Completeness can be considered as having two separate dimensions: internal and external completeness.
Internal completeness refers to whether there are missing records or data fields and can be defined as “the frequency of unknown or blank responses to data items in the system.”
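As an illustration, internal completeness could be quantified as the proportion of data fields that are actually filled in across case records. The following minimal sketch (in Python) uses invented field names and a simple convention for what counts as a missing value; it is an illustration, not part of any cited guideline.

    # Minimal sketch: internal completeness as the share of non-missing responses.
    # Field names and the set of "missing" markers are illustrative assumptions.
    MISSING = {None, "", "unknown"}

    def internal_completeness(cases, fields):
        """Proportion of (case, field) entries that are neither blank nor 'unknown'."""
        total = len(cases) * len(fields)
        filled = sum(
            1
            for case in cases
            for field in fields
            if case.get(field) not in MISSING
        )
        return filled / total if total else 0.0

    cases = [
        {"age": 34, "sex": "F", "onset_date": "2023-01-10"},
        {"age": None, "sex": "M", "onset_date": "unknown"},
    ]
    print(internal_completeness(cases, ["age", "sex", "onset_date"]))  # 4/6 ≈ 0.67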
External completeness relates to whether the data available to the surveillance system reflects the true number of cases diagnosed with notifiable conditions [Doyle]. One approach to evaluating external completeness is to compare at least two datasets from different sources of information that are expected to provide surveillance information on the same disease (e.g. laboratory and notification data for case reporting of salmonellosis). A common method to measure external completeness is "capture-recapture". However, other methods can be used to compare datasets, depending on the disease under surveillance, the nature and accessibility of data sources, and other parameters to be defined.
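As a sketch of the two-source capture-recapture idea, the Lincoln-Petersen estimator below estimates the total number of cases from two overlapping case lists; the external completeness of one source can then be expressed as the number of cases it holds divided by that estimate. The data sources, identifiers and counts are illustrative assumptions.

    # Minimal two-source capture-recapture sketch (Lincoln-Petersen estimator).
    # The case identifiers below are invented for illustration.
    def lincoln_petersen(source_a, source_b):
        """Estimate the true number of cases from two overlapping case lists."""
        a, b = set(source_a), set(source_b)
        overlap = len(a & b)
        if overlap == 0:
            raise ValueError("No overlap between sources; estimator undefined.")
        return len(a) * len(b) / overlap

    lab_ids = {"c01", "c02", "c03", "c04", "c05", "c06"}
    notification_ids = {"c02", "c04", "c05", "c07", "c08"}

    estimated_total = lincoln_petersen(lab_ids, notification_ids)        # 6 * 5 / 3 = 10
    notification_completeness = len(notification_ids) / estimated_total  # 5 / 10 = 0.5
    print(estimated_total, notification_completeness)

In practice, capture-recapture relies on assumptions (e.g. independence of the sources and correct record linkage) that need to be checked for the disease and data sources at hand.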
Note: completeness can be considered both within a single case record and within a database (the set of cases collected for a given time window).
Validity
Validity describes whether the results of an experiment or study really do measure the concept being tested. In the context of surveillance, validity would be the capacity to capture the "true value" of incidence, prevalence or other variables that are useful for the analysis of surveillance data. The "true value" should be viewed in the context of the surveillance system and its objectives; for example, it may relate only to those cases diagnosed by the health services under surveillance. Validity can be considered to comprise both internal and external dimensions, where:
Internal validity relates to the extent of errors within the system, e.g. coding errors in translating from one level of the system to the next.
External validity relates to whether the information recorded about the cases is correct and exact. Evaluating external validity implies comparing a surveillance indicator measured in a dataset to a "gold standard" value [Doyle]. One possible way to conduct a validation study is to compare data recorded in the studied dataset to the original medical records. If data on the same patient are recorded at different points in time for the same information (disease/variable), differences can be due to a "real change" over time or to a bias in the measurement. Reliability studies can help identify this type of bias.
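A minimal sketch of such a validation exercise, assuming surveillance records and the corresponding medical-record ("gold standard") values can be linked by a patient identifier (the identifiers and field names are illustrative):

    # Minimal validation sketch: field-level agreement between surveillance
    # records and a "gold standard" source such as the original medical records.
    # Patient identifiers and field names are illustrative assumptions.
    def agreement(surveillance, gold_standard, field):
        """Proportion of linked patients whose recorded value matches the gold standard."""
        common = surveillance.keys() & gold_standard.keys()
        if not common:
            return None
        matches = sum(
            1 for pid in common
            if surveillance[pid].get(field) == gold_standard[pid].get(field)
        )
        return matches / len(common)

    surveillance = {"p1": {"diagnosis_date": "2023-02-01"}, "p2": {"diagnosis_date": "2023-02-05"}}
    medical_records = {"p1": {"diagnosis_date": "2023-02-01"}, "p2": {"diagnosis_date": "2023-02-04"}}
    print(agreement(surveillance, medical_records, "diagnosis_date"))  # 1/2 = 0.5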
Sensitivity
Sensitivity is the proportion of persons diagnosed with the condition who are detected by the system. The sensitivity of the EU-wide surveillance sub-network reflects the combined sensitivity of the national surveillance systems and of the international network, and can be defined on three levels relevant to EU-wide surveillance: (a) the proportion of cases notified to the national system that were reported to the international coordinating centre of the sub-network; (b) the proportion of cases fulfilling the standard case definition, diagnosed at the local level, that were notified to the national system; (c) the proportion of cases detected by the national surveillance system out of all cases truly occurring in the population, regardless of whether cases sought medical care or a laboratory diagnosis was attempted (this can usually only be determined by special studies). In practice, the sensitivity of the national surveillance systems will determine the sensitivity of the overall surveillance system. However, the sensitivity of national surveillance systems, i.e. the ratio between (a) and (b), will vary widely from country to country for specific diseases.
When knowledge of the differences in the sensitivity of national surveillance systems is important to the objectives of the EU-wide surveillance sub-network, country-specific investigations need to be implemented with a defined methodology to determine the sensitivity of the national surveillance systems and to form a basis for comparability of the country-specific data. Stringent criteria for the inclusion of cases in several of the existing networks will make EU-wide sensitivity lower than national sensitivity [Ruutu et al.].
The sensitivity of a surveillance system can be considered on two levels. First, at the level of case reporting, sensitivity refers to the proportion of cases of a disease (or other health-related events) detected by the surveillance system. Second, sensitivity can refer to the ability to detect outbreaks and monitor changes in the number of cases over time. [CDC guidelines]
Predictive value positive (PVP) is the proportion of reported cases that actually have the health-related event under surveillance [CDC guidelines].
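As a numerical illustration of case-level sensitivity and predictive value positive, a minimal sketch with invented counts (the figures are not from any surveillance system):

    # Minimal sketch of case-level sensitivity and predictive value positive (PVP).
    # All counts are invented for illustration.
    true_cases_in_population = 400  # cases truly occurring (e.g. estimated by a special study)
    reported_cases = 280            # cases reported to the surveillance system
    reported_true_cases = 250       # reported cases that actually meet the case definition

    sensitivity = reported_true_cases / true_cases_in_population  # 250 / 400 = 0.625
    pvp = reported_true_cases / reported_cases                    # 250 / 280 ≈ 0.89

    print(f"Sensitivity: {sensitivity:.2f}")
    print(f"Predictive value positive: {pvp:.2f}")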
Timeliness
Timeliness reflects the speed between steps in a public health surveillance system [CDC guidelines].
Reactivity would reflect the feedback (retro-information) provided and the delay necessary to initiate a public health action.
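A minimal sketch of how timeliness at one step could be measured, assuming each case record carries a date of diagnosis and a date of notification (the field names and dates are illustrative):

    # Minimal timeliness sketch: delay, in days, between two steps of the system
    # (here diagnosis and notification). Field names and dates are illustrative.
    from datetime import date
    from statistics import median

    cases = [
        {"diagnosis": date(2023, 3, 1), "notification": date(2023, 3, 3)},
        {"diagnosis": date(2023, 3, 2), "notification": date(2023, 3, 9)},
        {"diagnosis": date(2023, 3, 5), "notification": date(2023, 3, 6)},
    ]

    delays = [(c["notification"] - c["diagnosis"]).days for c in cases]  # [2, 7, 1]
    print(f"Median reporting delay: {median(delays)} days")              # 2 days

The same calculation can be applied to any pair of steps (onset to diagnosis, notification to public health action, etc.), depending on which delay the evaluation focuses on.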
Representativeness
A public health surveillance system that is representative accurately describes the occurrence of a health-related event over time and its distribution in the population by place and person [CDC guidelines].
Representativeness: the system accurately describes disease occurrence over time and its distribution in the population. Knowledge of the representativeness of surveillance data at the national level is important for some of the proposed specific objectives of EU-wide surveillance. Cases notified to a surveillance system may be derived, e.g. for practical reasons, unevenly from the population under surveillance and are therefore not representative of the events in the population in general. The data would thus poorly reflect the situation nationally and at EU level. Over time, the representativeness of surveillance data may change for several reasons, such as changes in legislation, surveillance infrastructure, clinical practice and reimbursement policies. A change in representativeness will lead to wrong conclusions when, for example, trends in surveillance data are compared between countries or at EU level. Stability of representativeness at the national surveillance level needs to be monitored, and the quantitative effect of changes on representativeness assessed [Ruutu et al.].
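One simple way to examine representativeness is to compare the distribution of notified cases across population groups with the distribution of the population itself (or with an independent reference dataset). The sketch below compares age-group shares; the groups and figures are invented for illustration, and differences may of course also reflect true differences in disease occurrence between groups rather than reporting bias.

    # Minimal representativeness sketch: share of notified cases per age group
    # compared with that group's share of the population. Numbers are illustrative.
    population = {"0-14": 150_000, "15-64": 650_000, "65+": 200_000}
    notified_cases = {"0-14": 40, "15-64": 130, "65+": 30}

    total_pop = sum(population.values())
    total_cases = sum(notified_cases.values())

    for group in population:
        pop_share = population[group] / total_pop
        case_share = notified_cases[group] / total_cases
        # A ratio far from 1 flags a group that is over- or under-represented
        # among notified cases relative to its share of the population.
        print(f"{group}: case share {case_share:.2f} vs population share {pop_share:.2f} "
              f"(ratio {case_share / pop_share:.2f})")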
Usefulness
A public health surveillance system is useful if it contributes to preventing and controlling adverse health-related events, including an improved understanding of the public health implications of such events. A public health surveillance system can also be useful if it helps to determine that an adverse health-related event previously thought to be unimportant is actually important. In addition, data from a surveillance system can contribute to performance measures, including health indicators used in needs assessments and accountability systems [CDC guidelines].
Simplicity
Definition. A public health surveillance system's simplicity refers to its structure and ease of operation. Surveillance systems should be as simple as possible while still meeting their objectives. (CDC Updated Guidelines, 2001)
The following measures might be considered in evaluating the simplicity of a system:
- amount and type of data necessary to establish that the health-related event has occurred (i.e., the case definition has been met);
- amount and type of other data on cases (e.g., demographic, behavioral, and exposure information for the health-related event);
- number of organizations involved in receiving case reports;
- level of integration with other systems;
- method of collecting the data, including the number and types of reporting sources and time spent on collecting data;
- amount of follow-up that is necessary to update data on the case;
- method of managing the data, including time spent on transferring, entering, editing, storing, and backing up data;
- methods for analyzing and disseminating the data, including time spent on preparing the data for dissemination;
- staff training requirements; and
- time spent on maintaining the system.
Thinking of the simplicity of a public health surveillance system from the design perspective might be useful. An example of a system that is simple in design is one with a case definition that is easy to apply (i.e., the case is easily ascertained) and in which the person identifying the case will also be the one analyzing and using the information. A more complex system might involve some of the following:
- special or follow-up laboratory tests to confirm the case;
- investigation of the case, including telephone contact or a home visit by public health personnel to collect detailed information;
- multiple levels of reporting (e.g., with the National Notifiable Diseases Surveillance System, case reports might start with the healthcare provider who makes the diagnosis and pass through county and state health departments before going to CDC [29]); and
- integration of related systems whereby special training is required to collect and/or interpret data.
Simplicity is closely related to acceptability and timeliness. Simplicity also affects the amount of resources required to operate the system.
Flexibility
A flexible public health surveillance system can adapt to changing information needs or operating conditions with little additional time, personnel, or allocated funds. Flexible systems can accommodate, for example, new health-related events, changes in case definitions or technology, and variations in funding or reporting sources. In addition, systems that use standard data formats (e.g., in electronic data interchange) can be easily integrated with other systems and thus might be considered flexible [CDC guidelines].
Acceptability
Acceptability reflects the willingness of persons and organizations to participate in the surveillance system [CDC guidelines].
Acceptability is influenced substantially by the time and effort required to complete and submit reports or perform other surveillance tasks.
Stability
Stability refers to the reliability (i.e., the ability to collect, manage, and provide data properly without failure) and availability (i.e., the ability to be operational when it is needed) of the public health surveillance system [CDC guidelines].
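Availability could, for example, be tracked as the share of time the system was operational over a reporting period. A minimal sketch, assuming a simple log of outage durations (the figures are illustrative):

    # Minimal stability sketch: availability as the share of time the system was
    # operational during a reporting period. Outage durations are illustrative.
    period_hours = 30 * 24           # a 30-day reporting period
    outages_hours = [2.5, 0.5, 6.0]  # logged downtime episodes during the period

    downtime = sum(outages_hours)
    availability = (period_hours - downtime) / period_hours
    print(f"Availability: {availability:.2%}")  # 98.75%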
Adequacy would refer to the surveillance system's ability to address its objectives.
References
- (CDC) Guidelines for Evaluating Surveillance Systems. Prepared by Douglas N. Klaucke, James W. Buehler, Stephen B. Thacker, R. Gibson Parrish, Frederick L. Trowbridge, Ruth L. Berkelman, and the Surveillance Coordination Group. MMWR Supplement, May 06, 1988 / 37(S-5);1-18.
- (CDC) Updated Guidelines for Evaluating Public Health Surveillance Systems; Recommendations from the Guidelines Working Group. MMWR. July 27, 2001 / 50(RR13);1-35