Question
Sources: Program-Planning-Guidebook.pdf (colostate.edu); Two Examples of Program Planning, Monitoring and Evaluation | PPT (slideshare.net)
A program planning process is a set of steps that help you design, implement, and evaluate a program. Different sources describe different variations of the process, but some common steps are:
- Developing a program goal and objectives
- Identifying and ranking contributing factors
- Developing an intervention strategy
- Developing medium- and short-term objectives
- Developing an implementation plan
- Determining tracking and assessment methods
- Finalizing the plan
Program planning models are graphic representations that show the logical relationships between program conditions, inputs, activities, outcomes, and impacts. They help identify program goals, objectives, activities, desired outcomes, and impacts. There are different types of program planning models for different purposes and contexts. For example, the basic model is good for establishing your company's vision, mission, business objectives, and values, while the health promotion model is useful for designing interventions to improve health behaviors and outcomes.
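To make those components concrete, here is a rough sketch of a logic model for the opioid awareness program named in the assignment, written as a small Python data structure. Every entry (the conditions, inputs, activities, outcomes, and impacts) is a hypothetical placeholder for illustration, not content taken from the cited guidebook or slides.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Each list feeds logically into the next: conditions -> impacts."""
    conditions: list = field(default_factory=list)  # the problem being addressed
    inputs: list = field(default_factory=list)      # resources invested
    activities: list = field(default_factory=list)  # what the program does
    outcomes: list = field(default_factory=list)    # short- and medium-term changes
    impacts: list = field(default_factory=list)     # long-term changes

# Hypothetical entries for an "opioid awareness" program (illustration only).
opioid_awareness = LogicModel(
    conditions=["Rising opioid misuse and overdoses in the county"],
    inputs=["Grant funding", "Health educators", "Community partners"],
    activities=["School and workplace awareness workshops",
                "Naloxone training sessions"],
    outcomes=["Increased knowledge of overdose risk and response"],
    impacts=["Fewer opioid-related emergency room visits"],
)

for stage in ("conditions", "inputs", "activities", "outcomes", "impacts"):
    print(f"{stage}: {getattr(opioid_awareness, stage)}")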
1. The program chosen is "opioid awareness"
2. Come up with a preliminary program planning formula (page 12) for the program you want to fund through a grant (opioid addiction): P^2 = W^5 x H^2 x E (see the sketch after this list).
3. Then determine whether you need to do a needs assessment.
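One way to read that preliminary planning formula is sketched below in LaTeX. The expansion of the symbols, five W questions, two H questions, and an evaluation component, is an assumption drawn from the familiar planning mnemonic; page 12 of the source guidebook should be checked for the exact definitions before using it in the grant narrative.

% A hedged reading of the preliminary program planning formula.
% The meanings assigned to W^5, H^2, and E below are assumptions
% (the common planning mnemonic), not definitions quoted from the excerpt.
\[
  P^{2} = W^{5} \times H^{2} \times E
\]
\[
  W^{5} = \text{who, what, when, where, why}, \qquad
  H^{2} = \text{how, how much}, \qquad
  E = \text{evaluation}
\]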
...want housing in a particular neighborhood so she can walk to her work and drop off her children to a relative for care. Expressed need refers to attempts by individuals to fulfill their needs. Individuals such as the aforementioned homeless mother placing the names of her family members on the waiting list for Title 8 housing assistance formulate the expressed need. Collectively, they become the service demand. Comparative need refers to the situation in which an individual's condition is relatively worse off or less desirable than that of other people. The fact that this homeless mother's two children have more infectious disease-related emergency room visits than children their age who live in a stable home reflects the comparative need and urgency for better care for homeless families.

Rubin and Babbie (1997, p. 570) list several categories of needs assessment: (a) the key informants approach, (b) the community forum approach, (c) the rates-under-treatment approach, (d) the social indicators approach, (e) the community survey approach, and (f) the focus group approach. All of these approaches have their own advantages and pitfalls. Program planners may need to use more than one approach to gain a comprehensive or representative understanding. Key informants are individuals who have special knowledge about the identified issues. They range from the service providers to the service recipients and their caregivers. Through the use of interviews and questionnaires, their special insights and comments regarding the situations are collected. Community forums such as open forums or town hall meetings provide a channel for concerned citizens to express and exchange their opinions. Rates-under-treatment refers to the estimate of future service needs based on current client data, including service usage and the waiting list; this approach can be carried out through documentary studies of existing or secondary data. The social indicators approach utilizes existing statistics and markers to study the condition of the target population. School dropout rate, infant mortality rate, and child abuse reports are examples of such an approach. Surveys of communities or target groups involve the use of both qualitative and quantitative research approaches to gather information directly from the target populations. Finally, focus groups are usually made up of 12 to 15 people who represent the intended target population. Through a set of guided focus group questions, they provide thoughtful responses to the topics. Additionally, the group discussions and dynamics also bring about new ideas and discussions that the planners have not thought about.
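As a small illustration of the rates-under-treatment approach described above, the sketch below projects next year's service demand from a current caseload and waiting list. All of the figures and the flat growth assumption are hypothetical; an actual assessment would pull these counts from agency records and secondary data.

# Rates-under-treatment sketch: estimate future service need from current
# client data plus the waiting list. All figures below are hypothetical.
current_clients = 180        # clients served this year (assumed)
waiting_list = 45            # people currently waiting for service (assumed)
annual_growth_rate = 0.08    # assumed year-over-year growth in demand

# Expressed demand today is everyone already asking for the service.
current_demand = current_clients + waiting_list

# Simple projection: grow current demand by the assumed rate.
projected_demand = current_demand * (1 + annual_growth_rate)

print(f"Current expressed demand: {current_demand}")
print(f"Projected demand next year: {round(projected_demand)}")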
Quantitative and Qualitative Approaches

Part of the misunderstanding of program evaluation and social research is that evaluation and research are all about numbers. Many believe that using numbers means applying statistics and working quantitatively. Furthermore, when statistics are involved, program evaluation is thought to be for science-minded people such as mathematicians, statisticians, or program evaluators. Some human service providers may argue they should concentrate on what they do best, such as dealing with people and improving their well-being. They are better trained and capable in more qualitative domains and should let the evaluators or managers deal with the headaches of scientific evaluation. Certainly, there are many faults in the logic just described. Nevertheless, they reflect the resistance and misunderstanding among many human service providers who believe they are "people people," not "science people," and that they should not be concerned about, and are not prepared to deal with, program evaluation. In reality, depending on the research design, a program evaluation's involvement of statistics could range from minimal to extensive. Additionally, program evaluation is not for the wrongfully stereotyped science nerds only. It is a lively and exciting process that many human service providers have the needed expertise to contribute to and enjoy. Royse, Thyer, Padgett, and Logan (2001) state that "program evaluation is a practical endeavor, not an academic exercise, and not an attempt to build theory or necessarily to develop social science knowledge" (p. 2).

Program evaluation employs both quantitative and qualitative approaches. Each approach provides a needed and unique set of data to evaluate the program. Many quantitative approaches employ deductive ways of evaluating programs, while qualitative approaches use many of the inductive methods. Often, both approaches are used to provide a comprehensive understanding and evaluation of programs.

In writing a grant proposal, the applicant agency is making an argument that it understands the identified problems and knows how to address them. The grant proposal is then the agency's plan of intervention that follows the argument. Moore (1998) defines an argument "as a group of statements, one of which (the conclusion) is claimed to follow the others (the premises)" (p. 2). Therefore, an argument "consists of three parts: a group of premises, a conclusion, and an implicit claim" (p. 5). In the case of a program proposal, the group of premises is the basic belief and understanding that support the proposed program. The conclusion is the group of proposed interventions and services. The claim is the expected outcomes. Program evaluation, therefore, is an attempt to assess the proposed argument. Evaluation is the assessment and investigation of whether the conclusions follow the premises and whether the claims can be substantiated. Conversely, an evaluation could examine whether the premises support the conclusions. This concept of "argument" provides the basis for the use of the logic model that is discussed in Chapter 3.

Moore (1998) further explains the difference between a deductive argument and an inductive argument. She cites the classic example of deductive reasoning: "All men are mortal. Socrates is a man. Therefore, Socrates is mortal" (p. 5). From a general truth or premise, a person draws a specific conclusion. The premises, in fact, contain more information than the conclusion, and the conclusion follows from the premises. In this situation, no matter how much more information is available on Socrates, he is still mortal and the conclusion still stands. Deductive reasoning provides a more precise and confident assertion than inductive reasoning. In an inductive argument, individual specific situations are used to make a generalization: "Socrates was mortal. Sappho was mortal. Cleopatra was mortal. Therefore, all people are mortal" (Moore, 1998, p. 6). The premises of three people's situations become the evidence and the basis for a conclusion that applies to all people.
The conclusion carries more information than the premises, and it could be altered as new information arises and becomes incorporated into the body of knowledge. The inductive conclusion is more of the nature of the probable, correlative, or contributory than the more causal determination of deductive reasoning. Qualitative and quantitative evaluation approaches complement each other, in that they help tell the program's "stories" more comprehensively, with both structure and content, with hard data and a human touch. Most evaluation efforts employ both approaches at the same time.

Quantitative and Deductive Approaches

Quantitative approaches to evaluation usually involve the application of experimental or quasi-experimental research designs that may include the use of control and experimental groups. This first involves defining the research or evaluation question and then developing hypotheses. Part of the research interest is the presence and absence of certain interventions or independent variables. The effects of the interventions on certain dependent variables (i.e., results) are objectively evaluated and measured. Often this process involves a larger number of respondents who make up the sample size, and the use of validated data collection procedures and instruments.

A program proposal often starts from a particular program philosophy or belief, which is further operationalized into a working hypothesis. Within the context of a program philosophy, the proposal will further derive program goals and objectives and develop activities. It is expected that these planned activities will bring about the desirable changes that support the program philosophy or belief. Many human service providers believe the behavior of at-risk youths is the result of a lack of significant adults who could provide meaningful relationships and proper guidance. Based on this belief and the working hypothesis (i.e., youth who have a regular association with a positive adult figure are less likely to get involved in high-risk behaviors), the agency organizes a 12-month youth mentoring program (independent variable and intervention), structured through several major objectives. During this twelve-month program period, the agency uses a variety of evaluation strategies, including probability sampling, pre- and postmeasures (preexperimental or experimental designs), and service recording to assess the program's performance. The agency wants to know if and how the mentoring program works. It gathers information and evaluation data from program participants and other relevant information sources. Deductively, the agency compiles and analyzes data from a large number of the program participants. Through the results of evaluation, the agency develops both general and specific knowledge. It can claim a data-supported finding, for example, that the effects of the agency's mentoring program suggest such an intervention can help bring at-risk youth back to school. If Johnny is an at-risk youth new to the program, the agency can apply the previous evaluation findings and make a confident generalization or assertion that the mentoring program will be helpful for Johnny.

Many variables, including the basic unpredictable nature of human beings and the ethical concerns of manipulating human variables, make pure deductive and quantitative evaluation approaches involving human subjects very difficult and complicated tasks. They require extensive oversight and sufficient resources and funding.
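A minimal sketch of the kind of pre- and postmeasure comparison described in the mentoring example, assuming hypothetical attendance scores and the common 0.05 threshold; the use of a paired t-test from scipy is an illustrative choice, not the textbook's prescribed procedure.

# Pre/post comparison for a hypothetical 12-month mentoring program.
# Scores are invented; a real evaluation would use validated instruments
# and a deliberate sampling plan.
from scipy import stats

pre_scores = [52, 60, 48, 55, 63, 50, 58, 61, 47, 54]   # before mentoring (assumed)
post_scores = [58, 66, 55, 59, 70, 57, 62, 68, 53, 60]  # after the program (assumed)

# Paired t-test: did the same participants change from pre to post?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The pre/post change is unlikely to be chance alone.")
else:
    print("No statistically reliable pre/post change was detected.")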
Everyday program evaluations, however, do not require the same kind of rigor as many academic or medical research studies and evaluations. Logical and well-controlled experimental designs are not only feasible for many program evaluation tasks, but also beneficial to program monitoring and improvement.

Qualitative and Inductive Approaches

Royse (1999, p. 278) describes the characteristics of qualitative research. First, it does not involve interventions, experimental designs, or manipulation of variables. Second, it is naturalistic in that it studies subjects in their own natural environment as they grow and change; similarly, the research question evolves and changes. Third, participant observation provides in-depth understanding of the identified issues through observations and interviews. Fourth, qualitative research does not require a large sample size, and a small sample size can also yield valuable information. Fifth, there is little use of measurement and numeric values; through the use of his or her eyes and ears, the researcher is the tool of measurement. Sixth, the use of journalistic narrative helps researchers take an in-depth and detailed look at a situation. Seventh, much of qualitative research is exploratory in nature. Finally, the researcher is a learner, not an expert.

Yegidis, Weinbach, and Morrison-Rodriguez (1999, p. 127) make a similar assertion about qualitative research methods. The qualitative approaches are: (a) subjective, (b) designed to seek understanding rather than explanation, (c) reliant on inductive logic, (d) designed to generate hypotheses, (e) designed to process data as received, and (f) made to use the researcher as the data collection instrument.

Qualitative evaluation provides many qualities that quantitative evaluation is unable to offer. Most importantly, it has a human face in its evaluation findings. Qualitative evaluation employs a variety of theoretical approaches and data collection methods. To name a few, one can find grounded theory approaches, ethnographic studies, case studies, observations, interviews, and focus groups. Through these data collection methods, qualitative evaluations look into the relations and dynamics of what happened during the course of the service delivery. They are interested in the subjective reality and personal accounts of experience. Unlike quantitative approaches, hypotheses are not predetermined. Hypotheses, if any, may arise as a result of the data collected or from the insight developed during the data collection process. Qualitative evaluation is a more empowering evaluation approach. Populations under study are often involved in the data collection process more as collaborators than as subjects of study. Formative or process information about the program and its participants can provide feedback for the program and the people in a rather timely manner. People are more engaged in the process and more informed about the preliminary results. Qualitative evaluation is also more suitable for assessing variables or situations that are hard to quantify, or more subtle issues such as culturally specific behaviors and personal morale. Collecting data in natural settings allows qualitative evaluation to detect subtle and nonconventional behaviors more easily.
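One small, inductive step a qualitative evaluator might take is tallying recurring themes across interview or focus group notes. The notes and the keyword-to-theme mapping below are invented for illustration; real qualitative coding is iterative and interpretive rather than a simple keyword count.

# Toy thematic tally across hypothetical focus group notes.
from collections import Counter

notes = [
    "Participants said transportation to the clinic is a constant barrier.",
    "Several parents worried about stigma when asking for help.",
    "Transportation came up again; the bus route was cut last year.",
    "One caregiver described stigma at work after disclosing treatment.",
]

# Hypothetical mapping from keywords noticed in the data to emerging themes.
themes = {
    "transportation": "access barriers",
    "bus": "access barriers",
    "stigma": "stigma and disclosure",
}

tally = Counter()
for note in notes:
    lowered = note.lower()
    matched = {theme for keyword, theme in themes.items() if keyword in lowered}
    tally.update(matched)  # count each theme at most once per note

for theme, count in tally.most_common():
    print(f"{theme}: appears in {count} of {len(notes)} notes")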
Inductive reasoning supports many qualitative evaluations. Inductively, people can learn about situations and make generalizations through a variety of patterns. Moore (1998) lists "the argument by analogy, inductive generalization, hypothetical reasoning, and the causal argument" (p. 7). She further explains that learning through analogy involves the use of similar situations to comprehend a new or little-known situation. Using information from a member of a set to make a generalization about all members of that set is an inductive generalization. Hypothetical reasoning uses evidence logically to test against the conclusion in a more "scientific" approach. A causal argument is a special type of hypothetical reasoning that uses evidence to logically establish, or argue against, a claim that one event causes another. These patterns of inductive reasoning provide a variety of ways for designing and conducting qualitative evaluation.

Summary

Grant writing is a specific product of the program planning process. This chapter presents a simple program planning formula. There are differences between a grant and a contract; nevertheless, they are both attempts to address particular service needs. Needs assessment is usually the first step of any planning and grant writing effort. There are both qualitative and quantitative ways to identify, understand, and express the needs that are targeted by the planning and evaluation tasks.

References

Coley, S., & Scheinberg, C. (1990). Proposal writing. Newbury Park, CA: Sage.
Lewis, J., Lewis, M., Packard, T., & Souflee, F. (2001). Management of human service programs (3rd ed.). Belmont, CA: Brooks/Cole.
Mayer, R. (1985). Policy and program planning: A developmental perspective. Englewood Cliffs, NJ: Prentice Hall.
Moore, K. (1998). Patterns of inductive reasoning (4th ed.). Dubuque, IA: Kendall/Hunt.
Royse, D. (1999). Research methods in social work (3rd ed.). Belmont, CA: Brooks/Cole.
Royse, D., Thyer, B., Padgett, D., & Logan, T. (2001). Program evaluation: An introduction (3rd ed.). Belmont, CA: Brooks/Cole.