Question

Prior to beginning work on this discussion forum, review the following from the Developing an Effective Evaluation Report: Setting the Course for Effective Program Evaluation workbook:

  • Part I, Step 3: Focus the Evaluation Design (pages 17 to 19)
  • Part I, Step 4: Gather Credible Evidence (pages 20 to 21)
Step 3: Focus the Evaluation Design

The amount of information you can gather about your program is potentially limitless. Evaluations, however, are always limited by the number of questions that can realistically be asked and answered, the methods that can be employed, the feasibility of data collection, and the resources available. These issues are at the heart of Step 3 in the CDC Framework: Focus the Evaluation Design.

The scope and depth of any program evaluation depend on program and stakeholder priorities; available resources, including financial resources; staff and contractor skills and availability; and the amount of time committed to the evaluation. Ideally, program staff members and the evaluation stakeholder workgroup (ESW) work together to determine the focus of the evaluation based on the stated purposes, priorities, stage of development, and feasibility. The questions that guide the evaluation are therefore those considered most important to program staff and stakeholders for program improvement and decision making. Even the questions considered most important, however, have to pass the feasibility test. A final evaluation report should include the questions that guided the evaluation, as well as the process through which certain questions were selected and others were not.

Transparency is particularly important in this step. To enhance the evaluation's utility and propriety, stakeholders and users of the evaluation need to understand the roles of the logic model and the stage of development in informing evaluation questions. The stage of development discussed in the previous chapter illuminates why questions were or were not chosen. If the program is in the planning stage, for example, it is unlikely that outcome questions will be asked or can be answered as part of the evaluation. However, most stakeholders and decision makers are keenly interested in outcome questions and will be looking for those answers in the evaluation report. To keep stakeholders engaged, it may be helpful to describe when questions related to downstream effects might be answered; this is possible if a multiyear evaluation plan was established (CDC, 2011).

The report should include discussion of both process and outcome results. Excluding process evaluation findings in favor of outcome evaluation findings often eliminates the understanding of the foundation that supports outcomes. Additional resources on process and outcome evaluation are identified in the Resources section of this workbook.

Process evaluation focuses on the first three boxes of the logic model:

Inputs → Activities → Outputs

Process evaluation enables you to describe and assess your program's activities and link progress to outcomes. This is important because the link between outputs and outcomes (the last three boxes) for your particular program remains an empirical question (CDC, 2008, p. 3).

Outcome evaluation, as the term implies, focuses on the last three boxes of the logic model: short-term, intermediate, and long-term outcomes.

Short-Term Outcomes → Intermediate Outcomes → Long-Term Outcomes

Outcome evaluation allows researchers to document health and behavioral outcomes and identify linkages between an intervention and quantifiable effects (CDC, 2008, p. 3).

Process and Outcome Evaluation in Harmony in the Evaluation Report: A discussion of both process and outcome results should be included in the report.
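As a purely illustrative aid (not part of the workbook), the sketch below represents the six logic-model boxes as a small Python data structure and shows how process evaluation maps to the first three boxes and outcome evaluation to the last three. The class, field names, and example entries are assumptions made for this example.

```python
# Hypothetical sketch: the six logic-model components and the
# process vs. outcome evaluation split described above.
from dataclasses import dataclass, field


@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    short_term_outcomes: list[str] = field(default_factory=list)
    intermediate_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

    def process_components(self) -> dict[str, list[str]]:
        # Process evaluation focuses on the first three boxes.
        return {
            "inputs": self.inputs,
            "activities": self.activities,
            "outputs": self.outputs,
        }

    def outcome_components(self) -> dict[str, list[str]]:
        # Outcome evaluation focuses on the last three boxes.
        return {
            "short_term": self.short_term_outcomes,
            "intermediate": self.intermediate_outcomes,
            "long_term": self.long_term_outcomes,
        }


# Placeholder entries for a hypothetical tobacco control program.
model = LogicModel(
    inputs=["funding", "staff"],
    activities=["community education sessions"],
    outputs=["number of sessions delivered"],
    short_term_outcomes=["increased awareness"],
    intermediate_outcomes=["smoke-free policy adoption"],
    long_term_outcomes=["reduced tobacco use"],
)
print(model.process_components())
print(model.outcome_components())
```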
Transparency about the selection of evaluation questions is crucial to stakeholder acceptance of evaluation results, and possibly to continued support of the program. If stakeholders believe that some questions were not asked and answered in order to hide information, unwarranted negative consequences could result.

The feasibility standard addresses how much money, time, and effort can be expended on the evaluation. Sometimes even the highest-priority questions cannot be addressed because they are not feasible due to data collection constraints, lack of staff expertise, or economic conditions. It is therefore essential to discuss the feasibility of addressing evaluation questions with the ESW early in the process, and to be transparent in both the evaluation plan and the report about the feasibility issues that shaped how and why evaluation questions were chosen.

Discussions of the budget and resources (both financial and human) that can be allocated to the evaluation are likely to be included in the evaluation plan. Best Practices for Comprehensive Tobacco Control Programs, 2007 (CDC, 2007), an evidence-based guide to help states plan and establish effective tobacco control programs to prevent and reduce tobacco use, recommends that at least 10% of your total program resources be allocated to surveillance and evaluation. In the final evaluation report, you may want to include the evaluation budget and an accompanying narrative that explains how costs were allocated. Including evaluation budget information and the roles and responsibilities of staff and stakeholders in the final report reflects the decisions made regarding feasibility. The process through which you created the budget narrative may also enhance utility by ensuring that the evaluation priorities, as well as future evaluation questions and resource requirements, are clearly outlined.
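To make the 10% guideline concrete, here is a minimal arithmetic sketch; the total program budget figure is hypothetical and only illustrates how a minimum surveillance and evaluation allocation might be computed.

```python
# Hypothetical illustration of the CDC (2007) guideline that at least 10% of
# total program resources go to surveillance and evaluation.
EVALUATION_SHARE = 0.10  # minimum recommended share

total_program_budget = 1_500_000  # made-up annual program budget in dollars

minimum_evaluation_budget = total_program_budget * EVALUATION_SHARE
print(f"Minimum surveillance/evaluation allocation: ${minimum_evaluation_budget:,.0f}")
# -> Minimum surveillance/evaluation allocation: $150,000
```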
AT THIS POINT IN YOUR REPORT, YOU HAVE defined the purposes of the evaluation, described the evaluation stakeholder workgroup, described the program including its context, created a shared understanding of the program, described the stage of development of the program, and discussed the focus of the evaluation through the lens of the logic model or program description and stage of development.

Step 4: Gather Credible Evidence

Now that you have described the focus of the evaluation and identified the evaluation questions, it is necessary to describe the methods used in the evaluation and present the results. For evaluation results to be perceived as credible and reliable, the section of your report that describes the methods used must be clear and transparent. It is important to note that buy-in for methods begins in the planning stage with the formation of the ESW, follows through the implementation and interpretation phases, and continues through the report writing and communication phases, all with the aid of the ESW.

CREDIBILITY OF THE EVALUATOR

The credibility of the evaluator(s) can affect how results and conclusions are received by stakeholders and decision makers and, ultimately, how the evaluation information is used. Patton (2002) included credibility of the evaluator as one of three elements that determine the credibility of data. This is especially true if the evaluation is completed in house. Consider taking the following actions to facilitate acceptance of the evaluator(s) and thus the evaluation:

  • Address the credibility of the evaluator(s) with the ESW early in the evaluation process.
  • Be clear and transparent in both the evaluation plan and the final evaluation report.
  • Present periodic interim evaluation findings throughout the evaluation to facilitate ownership and buy-in and to promote collaborative interpretation of final evaluation results.
  • Provide information about the training, expertise, and potential sources of bias of the evaluator(s) in the data section or appendices of the final evaluation report.

The primary users of the evaluation should view the evidence you gathered to support the answers to your evaluation questions as credible. The determination of what is credible is often context dependent and can vary across programs and stakeholders. It is tied to the evaluation design, its implementation, and the standards adhered to for data collection, analysis, and interpretation. When designing the evaluation, the guiding philosophy should be that the methods that fit the evaluation questions are the most credible. Best practices for your program area and the evaluation standards of utility, feasibility, propriety, and accuracy included in the framework will facilitate the process of addressing credibility (CDC, 1999).

It is important to fully describe in your evaluation report the rationale for the data collection method(s) chosen, to increase the likelihood that results will be acceptable to stakeholders. Doing so also strengthens the value of the evaluation and the likelihood that the information will be used for program improvement and decision making. Methods and data sources used in the evaluation should be fully described in the report. Any approach has strengths and limitations; these should be described clearly, along with the quality assurance (QA) methods used in the implementation of the evaluation. QA methods are procedures used to ensure that all evaluation activities are of the highest achievable quality (International Epidemiological Association, 2008). Explaining QA methods facilitates acceptance of evaluation results and demonstrates that you considered the reliability and validity of methods and instruments. Reliable evaluation instruments produce evaluation results that can be replicated; valid evaluation instruments measure what they are supposed to measure (International Epidemiological Association, 2008). Your evaluation report should include a detailed explanation of anything done to improve the reliability and/or validity of your evaluation, to increase the transparency of evaluation results.

RELIABILITY AND VALIDITY

Indoor air quality monitoring has become a valuable tool for assessing levels of particulate matter before and after smoke-free policies are implemented. This documentation of air quality provides an objective measurement of secondhand smoke levels. Air quality monitoring devices must be calibrated before use to ensure that they accurately measure respirable suspended particles (RSPs), known as particulate matter; that is to say, the machine recordings must be reliable. Measurements should also be taken during peak business hours to reflect real-world conditions; that is to say, the measurements must be valid.
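As a purely illustrative sketch (not from the workbook), the snippet below expresses the sidebar's two checks in code: repeated readings of a calibration reference should replicate within a tolerance (reliability), and sampling times should fall within peak business hours (validity). The readings, tolerance, and hours shown are invented for the example.

```python
# Hypothetical sketch of the reliability and validity checks described above;
# all values are invented.
from statistics import pstdev


def is_reliable(readings: list[float], reference: float, tolerance: float) -> bool:
    """Reliability: repeated readings of a known calibration reference should
    replicate closely and stay near the reference value."""
    close_to_reference = all(abs(r - reference) <= tolerance for r in readings)
    low_spread = pstdev(readings) <= tolerance
    return close_to_reference and low_spread


def is_valid_sampling(sample_hours: list[int], peak_hours: range) -> bool:
    """Validity: measurements should be taken during peak business hours so
    they reflect real-world secondhand smoke exposure."""
    return all(hour in peak_hours for hour in sample_hours)


# Illustrative values: RSP calibration readings (µg/m³) against a 100 µg/m³
# reference, and sampling times (24-hour clock) checked against 5-11 p.m. peak hours.
calibration_readings = [98.5, 101.2, 99.8]
print(is_reliable(calibration_readings, reference=100.0, tolerance=2.5))  # True
print(is_valid_sampling([18, 20, 22], peak_hours=range(17, 23)))          # True
```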
Quantitative and qualitative methods are both credible ways to answer evaluation questions. It is not that one method is right or wrong; rather, the question is which method, or combination of methods, will obtain valid answers to the evaluation questions and will best present the data to promote clarity and use of the information.
