Recommendations and Reports
September 17, 1999 / Vol. 48 / No. RR-11

Framework for Program Evaluation in Public Health

Inside: Continuing Education Examination

U.S. DEPARTMENT OF HEALTH & HUMAN SERVICES
Centers for Disease Control and Prevention (CDC)
Atlanta, Georgia 30333
Copies can be purchased from Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402-9325. Telephone: (202) 512-1800.
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
The MMWR series of publications is published by the Epidemiology Program Office, Centers for Disease Control and Prevention (CDC), U.S. Department of Health and Human Services, Atlanta, GA 30333.
Centers for Disease Control and Prevention .................... Jeffrey P. Koplan, M.D., M.P.H., Director
Epidemiology Program Office .................................... Barbara Holloway, M.P.H., Acting Director

The production of this report as an MMWR serial publication was coordinated in:
Office of Scientific and Health Communications ................. John W. Ward, M.D., Director; Editor, MMWR Series
Recommendations and Reports .................................... Suzanne M. Hewitt, M.P.A., Managing Editor
C. Kay Smith-Akin, M.Ed., Project Editor
Morie M. Higgins, Peter M. Jenkins, Visual Information Specialists
SUGGESTED CITATION
Centers for Disease Control and Prevention. Framework for program evaluation in public health. MMWR 1999;48(No. RR-11):[inclusive page numbers].
References to sites of non-CDC organizations on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of pages found at these sites.
FOREWORD
Health improvement is what public health professionals strive to achieve. To reach this goal, we must devote our skill — and our will — to evaluating the effects of public health actions. As the targets of public health actions have expanded beyond infectious diseases to include chronic diseases, violence, emerging pathogens, threats of bioterrorism, and the social contexts that influence health disparities, the task of evaluation has become more complex.

CDC developed the framework for program evaluation to ensure that amidst the complex transition in public health, we will remain accountable and committed to achieving measurable health outcomes. By integrating the principles of this framework into all CDC program operations, we will stimulate innovation toward outcome improvement and be better positioned to detect program effects. More efficient and timely detection of these effects will enhance our ability to translate findings into practice. Guided by the steps and standards in the framework, our basic approach to program planning will also evolve. Findings from prevention research will lead to program plans that are clearer and more logical; stronger partnerships will allow collaborators to focus on achieving common goals; integrated information systems will support more systematic measurement; and lessons learned from evaluations will be used more effectively to guide changes in public health strategies.

Publication of this framework also emphasizes CDC's continuing commitment to improving overall community health. Because categorical strategies cannot succeed in isolation, public health professionals working across program areas must collaborate in evaluating their combined influence on health in the community. Only then will we be able to realize and demonstrate the success of our vision — healthy people in a healthy world through prevention.
Jeffrey P. Koplan, M.D., M.P.H.
Director, Centers for Disease Control and Prevention
Administrator, Agency for Toxic Substances and Disease Registry
The following CDC staff members prepared this report:
Robert L. Milstein, M.P.H., Office of Program Planning and Evaluation, Office of the Director
Scott F. Wetterhall, M.D., M.P.H., Chair, CDC Evaluation Working Group, Office of Program Planning and Evaluation, Office of the Director
in collaboration with CDC Evaluation Working Group Members
Gregory M. Christenson, Ph. Diane Dennis-Flagler Division of Health Education and Promotion Agency for Toxic Substances and Disease Registry Jeffrey R. Harris, M. Donna L. Higgins, Ph. Kenneth A. Schachter, M., M.B. Division of Prevention Research and Analytic Methods Nancy F. Pegg, M.B. Office of the Director Epidemiology Program Office Janet L. Collins, Ph., M. Division of Adolescent and School Health Diane O. Dunet, M.P. Division of Cancer Prevention and Control Aliki A. Pappas, M.P., M.S. Division of Oral Health National Center for Chronic Disease Prevention and Health Promotion Alison E. Kelly, M.P.I. Office of the Director National Center for Environmental Health Paul J. Placek, Ph. Office of Data Standards, Program Development, and Extramural Programs National Center for Health Statistics Michael Hennessy, Ph., M.P. Division of STD Prevention Deborah L. Rugg, Ph. Division of HIV/AIDS Prevention — Intervention, Research, and Support National Center for HIV, STD, and TB Prevention
Additional CDC Contributors
Office of the Director: Lynda S. Doll, Ph., M.; Charles W. Gollmar; Richard A. Goodman, M., M.P.; Wilma G. Johnson, M.S.P.; Marguerite Pappaioanou, D.V., Ph., M.P.V.; David J. Sencer, M., M.P. (Retired); Dixie E. Snider, M., M.P.; Marjorie A. Speers, Ph.; Lisa R. Tylor; and Kelly O'Brien Yehl, M.P. (Washington, D.C.).

Agency for Toxic Substances and Disease Registry: Peter J. McCumiskey and Tim L. Tinker, Dr.P., M.P.

Epidemiology Program Office: Jeanne L. Alongi, M.P. (Public Health Prevention Service); Peter A. Briss, M.; Andrew L. Dannenberg, M., M.P.; Daniel B. Fishbein, M.; Dennis F. Jarvis, M.P.; Mark L. Messonnier, Ph., M.; Bradford A. Myers; Raul A. Romaguera, D.M., M.P.; Steven B. Thacker, M., M.; Benedict I. Truman, M., M.P.; Katherine R. Turner, M.P. (Public Health Prevention Service); Jennifer L. Wiley, M.H.S. (Public Health Prevention Service); G. David Williamson, Ph.; and Stephanie Zaza, M., M.P.

National Center for Chronic Disease Prevention and Health Promotion: Cynthia M. Jorgensen, Dr.P.; Marshall W. Kreuter, Ph., M.P.; R. Brick Lancaster, M.; Imani Ma'at, Ed., Ed., M.C.; Elizabeth Majestic, M., M.P.; David V. McQueen, Sc., M.; Diane M. Narkunas, M.P.; Dearell R. Niemeyer, M.P.; and Lori B. de Ravello, M.P.

National Center for Environmental Health: Jami L. Fraze, M.S.; Joan L. Morrissey; William C. Parra, M.; Judith R. Qualters, Ph.; Michael J. Sage, M.P.; Joseph B. Smith; and Ronald R. Stoddard.

National Center for Health Statistics: Marjorie S. Greenberg, M. and Jennifer H. Madans, Ph.

National Center for HIV, STD, and TB Prevention: Huey-Tsyh Chen, Ph.; Janet C. Cleveland, M.; Holly J. Dixon; Janice P. Hiland, M.; Richard A. Jenkins, Ph.; Jill K. Leslie; Mark N. Lobato, M.; Kathleen M. MacQueen, Ph., M.P.; and Noreen L. Qualls, Dr.P., M.S.P.

National Center for Injury Prevention and Control: Christine M. Branche, Ph.; Linda L. Dahlberg, Ph.; and David A. Sleet, Ph., M.

National Immunization Program: Susan Y. Chu, Ph., M.S.P. and Lance E. Rodewald, M.

National Institute for Occupational Safety and Health: Linda M. Goldenhar, Ph. and Travis Kubale, M.S.

Public Health Practice Program Office: Patricia Drehobl, M.P.; Michael T. Hatcher, M.P.; Cheryl L. Scott, M., M.P.; Catherine B. Shoemaker, M.; Brian K. Siegmund, M.S., M.; and Pomeroy Sinnock, Ph.
Consultants and Contributors
Suzanne R. Adair, Ph., M., Texas Department of Health, Austin, Texas; Mary Eden Avery, M., American Cancer Society, Atlanta, Georgia; Ronald Bialek, M.P., Public Health Foundation, Washington, D.C.; Leonard Bickman, Ph., Vanderbilt University, Nashville, Tennessee; Thomas J. Chapel, M., M.B., Macro International, Atlanta, Georgia; Don Compton, Ph., American Cancer Society, Atlanta, Georgia; Ross F. Conner, Ph., University of California Irvine, Irvine, California; David A. Cotton, Ph., M.P., Macro International, Atlanta, Georgia; Bruce Davidson, M., M.P., National Tuberculosis Controllers Association, Philadelphia, Pennsylvania; Mary V. Davis, Dr.P., Association of Teachers of Preventive Medicine, Washington, D.C.; William W. Dyal, DeKalb County Board of Health, Decatur, Georgia; Stephen B. Fawcett, Ph., University of Kansas, Lawrence, Kansas; Jane Ford, National Association of City and County Health Officers, Lincoln, Nebraska; Nicholas Freudenberg, Dr.P., M.P., City University of New York, New York, New York; Jean Gearing, Ph., M.P., DeKalb County Board of Health, Decatur, Georgia; Kristine Gebbe, Dr.P., Columbia University, New York, New York; David N. Gillmore, M., University of Texas School of Public Health, Houston, Texas; Rebecca M. Glover Kudon, M.S.P., American Cancer Society, Atlanta, Georgia; Lynne E. Greabell, M.A., National Association of State and Territorial AIDS Directors, Washington, D.C.; Susan R. Griffin, M.P., Independent Consultant, Austin, Texas; Sharon Lee Hammond, Ph., M., Westat, Inc., Atlanta, Georgia; Anne C. Haddix, Ph., Rollins School of Public Health, Atlanta, Georgia; Susan E. Hassig, Dr.P., Tulane School of Public Health and Tropical Medicine, New Orleans, Louisiana; Gary T. Henry, Ph., M., Georgia State University, Atlanta, Georgia; James C. Hersey, Ph., M., M., Research Triangle Institute, Research Triangle Park, North Carolina; Richard E. Hoffman, M., M.P., Council of State and Territorial Epidemiologists, Denver, Colorado; Robert C. Hornik, Ph., M., Annenberg School of Communication, Philadelphia, Pennsylvania; Eric Juzenas, American Public Health Association, Washington, D.C.; Mark R. Keegan, M.B., Association of State and Territorial Health Officers, Denver, Colorado; Jeffrey J. Koshel, M., Department of Health and Human Services, Washington, D.C.; Amy K. Lewis, M.P., North Carolina Department of Health and Human Services, Raleigh, North Carolina; Jennifer M. Lewis, M., Association of Schools of Public Health, Chapel Hill, North Carolina; Hardy D. Loe, Jr., M., M.P., University of Texas School of Public Health, Houston, Texas; Anna Marsh, Substance Abuse and Mental Health Service Administration, Washington, D.C.; Pamela Mathison, M., Texas Department of Health, Austin, Texas; Dennis McBride, Ph., University of Washington, Seattle, Washington; Kathleen R. Miner, Ph., M.P., Rollins School of Public Health, Atlanta, Georgia; April J. Montgomery, M.H., Colorado Department of Health, Denver, Colorado; Genevieve A. Nagy, University of Kansas, Lawrence, Kansas; Dennis P. Murphy, M., National Coalition of STD Directors, Albany, New York; Patricia P. Nichols, M., Michigan Department of Education, Lansing, Michigan; Mary Odell Butler, Ph., M., Battelle, Arlington, Virginia; Carol Pitts, Department of Health and Human Services, Washington, D.C.; Hallie Preskill, Ph., University of New Mexico, Albuquerque, New Mexico; Carol Roddy, Public Health Service, Washington, D.C.; Ken Duane Runkle, M., Illinois Department of Public Health, Springfield, Illinois; James R. Sanders, Ph., M.S., Western Michigan University, Kalamazoo, Michigan; Linda M. Scarpetta, M.P.,
Framework for Program Evaluation in Public Health
Summary

Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. The framework guides public health professionals in their use of program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize essential elements of program evaluation. The framework comprises steps in program evaluation practice and standards for effective program evaluation. Adhering to the steps and standards of this framework will allow an understanding of each program's context and will improve how program evaluations are conceived and conducted. Furthermore, the framework encourages an approach to evaluation that is integrated with routine program operations. The emphasis is on practical, ongoing evaluation strategies that involve all program stakeholders, not just evaluation experts. Understanding and applying the elements of this framework can be a driving force for planning effective public health strategies, improving existing programs, and demonstrating the results of resource investments.
INTRODUCTION

Program evaluation is an essential organizational practice in public health (1); however, it is not practiced consistently across program areas, nor is it sufficiently well-integrated into the day-to-day management of most programs. Program evaluation is also necessary for fulfilling CDC's operating principles for guiding public health activities, which include a) using science as a basis for decision-making and public health action; b) expanding the quest for social equity through public health action; c) performing effectively as a service agency; d) making efforts outcome-oriented; and e) being accountable (2). These operating principles imply several ways to improve how public health activities are planned and managed. They underscore the need for programs to develop clear plans, inclusive partnerships, and feedback systems that allow learning and ongoing improvement to occur. One way to ensure that new and existing programs honor these principles is for each program to conduct routine, practical evaluations that provide information for management and improve program effectiveness.

This report presents a framework for understanding program evaluation and facilitating integration of evaluation throughout the public health system. The purposes of this report are to
- summarize the essential elements of program evaluation;
- provide a framework for conducting effective program evaluations;
- clarify the steps in program evaluation;
- review standards for effective program evaluation; and
- address misconceptions regarding the purposes and methods of program evaluation.
Defining Key Concepts

Throughout this report, the term program is used to describe the object of evaluation, which could be any organized public health action. This definition is deliberately broad because the framework can be applied to almost any organized public health activity, including direct service interventions, community mobilization efforts, research initiatives, surveillance systems, policy development activities, outbreak investigations, laboratory diagnostics, communication campaigns, infrastructure-building projects, training and educational services, and administrative systems. The additional terms defined in this report were chosen to establish a common evaluation vocabulary for public health professionals.
Integrating Evaluation with Routine Program Practice

Evaluation can be tied to routine program operations when the emphasis is on practical, ongoing evaluation that involves all program staff and stakeholders, not just evaluation experts. The practice of evaluation complements program management by gathering necessary information for improving and accounting for program effectiveness. Public health professionals routinely have used evaluation processes when answering questions from concerned persons, consulting partners, making judgments based on feedback, and refining program operations (9). These evaluation processes, though informal, are adequate for ongoing program assessment to guide small changes in program functions and objectives. However, when the stakes of potential decisions or program changes increase (e.g., when deciding what services to offer in a national health promotion program), employing evaluation procedures that are explicit, formal, and justifiable becomes important (10).
ASSIGNING VALUE TO PROGRAM ACTIVITIES

Questions regarding values, in contrast with those regarding facts, generally involve three interrelated issues: merit (i.e., quality), worth (i.e., cost-effectiveness), and significance (i.e., importance) (3). If a program is judged to be of merit, other questions might arise regarding whether the program is worth its cost. Also, questions can arise regarding whether even valuable programs contribute important differences. Assigning value and making judgments regarding a program on the basis of evidence requires answering the following questions (3,4,11):
- What will be evaluated? (That is, what is the program and in what context does it exist?)
- What aspects of the program will be considered when judging program performance?
- What standards (i.e., type or level of performance) must be reached for the program to be considered successful?
- What evidence will be used to indicate how the program has performed?
- What conclusions regarding program performance are justified by comparing the available evidence to the selected standards?
- How will the lessons learned from the inquiry be used to improve public health effectiveness?
These questions should be addressed at the beginning of a program and revisited throughout its implementation. The framework described in this report provides a systematic approach for answering these questions.
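As one way to make these questions concrete, the minimal sketch below records answers to them for a hypothetical immunization reminder program. The program, the field names, and every value shown are illustrative assumptions, not part of the CDC framework, which prescribes no particular format for documenting an evaluation plan.

```python
# A minimal sketch: recording answers to the framework's value questions
# for a hypothetical program. All program details below are assumptions.

evaluation_plan = {
    "what_will_be_evaluated": (
        "County reminder/recall program that telephones parents of "
        "children overdue for routine immunizations (hypothetical example)"
    ),
    "aspects_considered": [
        "reach of reminder calls",
        "completeness of immunization records",
        "up-to-date coverage among 2-year-olds",
    ],
    "standards_for_success": {
        "reach of reminder calls": ">= 80% of overdue children contacted",
        "up-to-date coverage among 2-year-olds": ">= 90% coverage",
    },
    "evidence": [
        "immunization registry extracts",
        "call logs",
        "annual coverage survey",
    ],
    # Filled in only after the evidence has been compared to the standards.
    "justified_conclusions": None,
    "use_of_lessons_learned": "revise call scripts and outreach schedule",
}

def unmet_questions(plan: dict) -> list[str]:
    """Return the framework questions that still lack an answer."""
    return [question for question, answer in plan.items() if answer is None]

if __name__ == "__main__":
    print("Questions still to be answered:", unmet_questions(evaluation_plan))
```

Keeping the plan in one structure like this makes it easy to revisit the same questions at each stage of implementation, as the preceding paragraph recommends.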
FRAMEWORK FOR PROGRAM EVALUATION IN PUBLIC HEALTH

Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. The recommended framework was developed to guide public health professionals in using program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize the essential elements of program evaluation. The framework comprises steps in evaluation practice and standards for effective evaluation (Figure 1).
FIGURE 1. Recommended framework for program evaluation. [Diagram: the six steps in evaluation practice (engage stakeholders; describe the program; focus the evaluation design; gather credible evidence; justify conclusions; ensure use and share lessons learned) arranged in a cycle around the four standards for effective evaluation (utility, feasibility, propriety, accuracy).]
Step 1: Engaging Stakeholders

Stakeholders fall into three principal groups:
- those involved in program operations (e.g., sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff);
- those served or affected by the program (e.g., clients, family members, neighborhood organizations, academic institutions, elected officials, advocacy groups, professional associations, skeptics, opponents, and staff of related or competing organizations); and
- primary users of the evaluation.
BOX 1. Steps in evaluation practice and standards for effective evaluation

Steps in Evaluation Practice
- Engage stakeholders: those persons involved in or affected by the program and primary users of the evaluation.
- Describe the program: need, expected effects, activities, resources, stage, context, logic model.
- Focus the evaluation design: purpose, users, uses, questions, methods, agreements.
- Gather credible evidence: indicators, sources, quality, quantity, logistics.
- Justify conclusions: standards, analysis/synthesis, interpretation, judgment, recommendations.
- Ensure use and share lessons learned: design, preparation, feedback, follow-up, dissemination.

Standards for Effective Evaluation
- Utility: serve the information needs of intended users.
- Feasibility: be realistic, prudent, diplomatic, and frugal.
- Propriety: behave legally, ethically, and with regard for the welfare of those involved and those affected.
- Accuracy: reveal and convey technically accurate information.

Those Involved in Program Operations. Persons or organizations involved in program operations have a stake in how evaluation activities are conducted because the program might be altered as a result of what is learned. Although staff, funding officials, and partners work together on a program, they are not necessarily a single interest group. Subgroups might hold different perspectives and follow alternative agendas; furthermore, because these stakeholders have a professional role in the
program, they might perceive program evaluation as an effort to judge them personally. Program evaluation is related to but must be distinguished from personnel evaluation, which operates under different standards (13).

Those Served or Affected by the Program. Persons or organizations affected by the program, either directly (e.g., by receiving services) or indirectly (e.g., by benefitting from enhanced community assets), should be identified and engaged in the evaluation to the extent possible. Although engaging supporters of a program is natural, individuals who are openly skeptical or antagonistic toward the program also might be important stakeholders to engage. Opposition to a program might stem from differing values regarding what change is needed or how to achieve it. Opening an evaluation to opposing perspectives and enlisting the help of program opponents in the inquiry might be prudent because these efforts can strengthen the evaluation's credibility.

Primary Users of the Evaluation. Primary users of the evaluation are the specific persons who are in a position to do or decide something regarding the program. In practice, primary users will be a subset of all stakeholders identified. A successful evaluation will designate primary users early in its development and maintain frequent interaction with them so that the evaluation addresses their values and satisfies their unique information needs (7).

The scope and level of stakeholder involvement will vary for each program evaluation. Various activities reflect the requirement to engage stakeholders (Box 2) (14). For example, stakeholders can be directly involved in designing and conducting the evaluation. Also, they can be kept informed regarding progress of the evaluation through periodic meetings, reports, and other means of communication. Sharing power and resolving conflicts helps avoid overemphasis of values held by any specific stakeholder (15). Occasionally, stakeholders might be inclined to use their involvement in an evaluation to sabotage, distort, or discredit the program. Trust among stakeholders is essential; therefore, caution is required for preventing misuse of the evaluation process.
Step 2: Describing the Program

Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description should discuss the program's capacity to effect change, its stage of development, and how it fits into the larger organization and community. Program descriptions set the frame of reference for all subsequent decisions in an evaluation. The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects (12). Moreover, stakeholders might have differing ideas regarding program goals and purposes. Evaluations done without agreement on the program definition are likely to be of limited use. Sometimes, negotiating with stakeholders to formulate a clear and logical description will bring benefits before data are available to evaluate program effectiveness (7). Aspects to include in a program description are need, expected effects, activities, resources, stage of development, context, and logic model.

Need. A statement of need describes the problem or opportunity that the program addresses and implies how the program will respond. Important features for describing a program's need include a) the nature and magnitude of the problem or
Stage of Development. Public health programs mature and change over time; therefore, a program's stage of development reflects its maturity. Programs that have recently received initial authorization and funding will differ from those that have been operating continuously for a decade. The changing maturity of program practice should be considered during the evaluation process (22). A minimum of three stages of development must be recognized: planning, implementation, and effects. During planning, program activities are untested, and the goal of evaluation is to refine plans. During implementation, program activities are being field-tested and modified; the goal of evaluation is to characterize real, as opposed to ideal, program activities and to improve operations, perhaps by revising plans. During the last stage, enough time has passed for the program's effects to emerge; the goal of evaluation is to identify and account for both intended and unintended effects.

Context. Descriptions of the program's context should include the setting and environmental influences (e.g., history, geography, politics, social and economic conditions, and efforts of related or competing organizations) within which the program operates (6). Understanding these environmental influences is required to design a context-sensitive evaluation and aid users in interpreting findings accurately and assessing the generalizability of the findings.

Logic Model. A logic model describes the sequence of events for bringing about change by synthesizing the main program elements into a picture of how the program is supposed to work (23–35). Often, this model is displayed in a flow chart, map, or table to portray the sequence of steps leading to program results (Figure 2). One of the virtues of a logic model is its ability to summarize the program's overall mechanism of change by linking processes (e.g., laboratory diagnosis of disease) to eventual effects (e.g., reduced tuberculosis incidence). The logic model can also display the infrastructure needed to support program operations. Elements that are connected within a logic model might vary but generally include inputs (e.g., trained staff), activities (e.g., identification of cases), outputs (e.g., persons completing treatment), and results ranging from immediate (e.g., curing affected persons) to intermediate (e.g., reduction in tuberculosis rate) to long-term effects (e.g., improvement of population health status). Creating a logic model allows stakeholders to clarify the program's strategies; therefore, the logic model improves and focuses program direction. It also reveals assumptions concerning conditions for program effectiveness and provides a frame of reference for one or more evaluations of the program. A detailed logic model can also strengthen claims of causality and be a basis for estimating the program's effect on endpoints that are not directly measured but are linked in a causal chain supported by prior research (35). Families of logic models can be created to display a program at different levels of detail, from different perspectives, or for different audiences. Program descriptions will vary for each evaluation, and various activities reflect the requirement to describe the program (e.g., using multiple sources of information to construct a well-rounded description) (Box 3).
The accuracy of a program description can be confirmed by consulting with diverse stakeholders, and reported descriptions of program practice can be checked against direct observation of activities in the field. A narrow program description can be improved by addressing such factors as staff turnover, inadequate resources, political pressures, or strong community participation that might affect program performance.
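To make the logic-model idea concrete, the minimal sketch below encodes the tuberculosis control example of Figure 2 as a small data structure and prints the chain from inputs to long-term effects. The element names are taken from the figure; the class, its fields, and the describe helper are illustrative assumptions, since the framework prescribes no particular representation for a logic model.

```python
# A minimal, hypothetical sketch of the Figure 2 logic model as data.
# Element names follow the figure; everything else is an assumption.

from dataclasses import dataclass


@dataclass
class LogicModel:
    inputs: list[str]                 # infrastructure the program draws on
    activities: list[str]             # what the program does
    outputs: list[str]                # direct products of the activities
    immediate_effects: list[str]
    intermediate_effects: list[str]
    long_term_effects: list[str]

    def describe(self) -> str:
        """Render the model as a simple stage-by-stage chain."""
        stages = [
            ("Inputs", self.inputs),
            ("Activities", self.activities),
            ("Outputs", self.outputs),
            ("Immediate effects", self.immediate_effects),
            ("Intermediate effects", self.intermediate_effects),
            ("Long-term effects", self.long_term_effects),
        ]
        return "\n".join(f"{name}: {', '.join(items)}" for name, items in stages)


tb_control = LogicModel(
    inputs=["trained staff", "health information systems", "community trust",
            "effective organization", "research results"],
    activities=["identify cases", "identify contacts", "diagnose disease",
                "prescribe effective treatment"],
    outputs=["persons completing treatment"],
    immediate_effects=["cure affected persons"],
    intermediate_effects=["reduce tuberculosis incidence"],
    long_term_effects=["improve population health status"],
)

if __name__ == "__main__":
    print(tb_control.describe())
```

Laying the model out this way also makes the assumptions behind the program explicit: each stage must plausibly produce the next, which is exactly the causal chain the paragraph above says a detailed logic model should support.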
Step 3: Focusing the Evaluation Design

The evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible (7,36,37). Not all design options are equally well-suited to meeting the information needs of stakeholders. After data collection begins, changing procedures might be difficult or impossible, even if better methods become obvious. A thorough plan anticipates intended uses and creates an evaluation strategy with the greatest chance of being useful, feasible, ethical, and accurate. Among the items to consider when focusing an evaluation are purpose, users, uses, questions, methods, and agreements.

Purpose. Articulating an evaluation's purpose (i.e., intent) will prevent premature decision-making regarding how the evaluation should be conducted. Characteristics of the program, particularly its stage of development and context, will influence the evaluation's purpose. Public health evaluations have four general purposes (Box 4). The first is to gain insight, which happens, for example, when assessing the feasibility of an innovative approach to practice. Knowledge from such an evaluation provides information concerning the practicality of a new approach, which can be used to design a program that will be tested for its effectiveness. For a developing program, information from prior evaluations can provide the necessary insight to clarify how its activities should be designed to bring about expected changes.
FIGURE 2. Logic model for a tuberculosis control program. [Flow chart linking service components (identify case, identify contacts, diagnose disease, prescribe effective treatment, begin treatment, complete treatment, cure case) and infrastructure components (health information systems, trained staff, community trust, effective organization, research results) to reduced tuberculosis incidence and improved population health status.]