Question
Article: Petrosino, Anthony, et al. July 2000. "Well-Meaning Programs Can Have Harmful Effects! Lessons from Experiments of Programs Such as Scared Straight." Crime and Delinquency, Vol. 46, No. 3, pp. 354-379.
Summarize the key points made regarding in-depth analyses, stories, rationalizations, etc. in the context of criminal justice systems and perhaps other public-service sectors as well.
Identify the group of studies that the authors primarily focused on to help illustrate many of their main points, give a brief explanation of what a Scared Straight program is, state what the authors discovered regarding the efficacy of such programs (include one or two specific outcomes cited in Appendix B of their article that are consistent with their general findings), and describe what in their opinion frequently happens in spite of their findings.
Randomized experiments are used frequently by the writers to evaluate the efficacy of their programs. List the various feasibility factors that may frequently rule out randomized trials but may still allow for some type of quasi-experimentation. (Hint: Review the Module 3 Lecture for a broad response to this; additionally, take into account the Kottlechuck and/or Robinson and Dye articles.)
Module 3 INTRODUCTION TO RESEARCH DESIGNS
As previously mentioned, an applied research proposal addresses:
- why do the study and what are the major research questions to be addressed (perhaps with reference to the findings of other studies or analyses)
In addition, it should summarize:
- the range of data to be collected
- how such data will be obtained (including data source(s))
- how the data will be analyzed (e.g., what statistical methods will be used)
- what kind of specific research design or type of study approach will be used to address the major research questions.
What is meant by specific research design or study approach? A specific research study design or study approach refers to the logical strategy for estimating how much, if at all, a dependent variable is affected by one or more independent variables. Consider ice cream sales and assaults. Suppose for a large city we collected daily data on the number of assault arrests and the number of ice cream sales during the months of June, July, and August for a given year or set of years.
Suppose we find that as the daily number of ice cream sales increases, the number of assault arrests tends to increase as well. Can we conclude that ice cream consumption is probably causing the number of assaults to rise? No. If we also recorded the average temperature for each day and analyzed the data a bit more (in a way we will discuss in the second half of the course), we quite probably would find that the apparent relationship disappears or is substantially reduced. It is quite possible that increased temperature increases both ice cream sales and assault arrests. The relationship between ice cream sales and assault arrests would in this case be spurious. Let us call this the "ice cream effect": an apparent relationship between two variables that for the most part disappears or is substantially reduced once another key variable is accounted or controlled for. We will consider how to assess this statistically during the second half of this course. The general theme here is that we often need a logical strategy for separating out the apparent effect of an independent variable of special interest (e.g., class size) upon a dependent variable (e.g., academic performance) while accounting for the effect of other independent variables (e.g., household income) that may somewhat overlap with the key independent variable(s) of interest.
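The ice cream effect can be illustrated with a small simulation. Everything below is hypothetical (made-up coefficients and noise levels, not real city data), but it sketches how an apparent sales-arrests relationship shrinks once temperature is controlled for via a partial correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data for one summer (92 days): temperature drives
# both ice cream sales and assault arrests; sales do NOT cause arrests.
temp = rng.normal(85, 8, 92)                      # daily high temperature (F)
sales = 200 + 10 * temp + rng.normal(0, 40, 92)   # daily ice cream sales
arrests = 5 + 0.3 * temp + rng.normal(0, 2, 92)   # daily assault arrests

# The raw correlation between sales and arrests looks substantial...
raw = np.corrcoef(sales, arrests)[0, 1]

# ...but after removing temperature's influence from both variables
# (regress each on temperature, then correlate the residuals -- a
# partial correlation), the apparent relationship largely disappears.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

partial = np.corrcoef(residuals(sales, temp), residuals(arrests, temp))[0, 1]
print(round(raw, 2), round(partial, 2))
```

The raw correlation is large only because temperature moves both series; the residual (partial) correlation is close to zero, which is exactly the "disappears or is substantially reduced" pattern described above.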
In this module, we will consider two example designs or logical strategies: Cross-Sectional Statistical Design, and Classical Randomized Experimental Design (two group form).
CROSS-SECTIONAL DESIGNS
One common example of a specific research design is the cross-sectional approach. This approach involves collecting data on at least one dependent variable and a wide range of potential independent variables as of a certain point in time or single time period (e.g., over the past year as a whole).
The Robinson and Dye study which included analysis of the relationship between the African American Representation index and type of district/ward system is an example of a cross-sectional design. For each city/city council in the data set, the study gathered much more data than that presented in Table 1 in the article. If you return to that study and look at Table 2, you will see that the authors collected data for each city/city council on a wide range of potential independent variables beyond type of district/ward system. For example, for each city, they collected data on median family income, median number of school years completed by the African American population over 25 years of age, and other key variables. At this point, it is beyond our scope to decipher exactly how they analyzed this data in Table 2. Suffice it to say that the authors were attempting to see if the apparent relationship between the Index and type of district/ward system presented in Table 1 still exists once other variables are accounted for. The idea in part is to avoid the potential "ice cream effect" in this situation.
Suppose we were interested in examining how much class size affects third grade student test scores on a given standardized test given at the end of the school year. To isolate the impact of class size on the dependent variable of interest (test score), we would gather data not only on class size but also on other variables that also potentially affect student test scores. Why? If there is an apparent relationship between class size and test score, we will want to know if this disappears or is substantially reduced once other potential independent variables such as family income are accounted for. Again, the idea is to do the best we can to avoid or at least minimize any ice cream effects.
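How "accounting for" another variable works can be previewed with a small sketch. The data and coefficients below are entirely hypothetical: income is constructed to affect both class size and test scores, while class size has no true effect, so a simple regression is misleading and a multiple regression that includes income corrects it:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Hypothetical district data: wealthier families live where classes are
# smaller, and income also raises scores directly. Class size itself has
# NO true effect on scores in this simulated world.
income = rng.normal(60, 15, n)                        # family income ($1000s)
class_size = 35 - 0.2 * income + rng.normal(0, 3, n)  # pupils per class
score = 50 + 0.5 * income + rng.normal(0, 5, n)       # test score

# A simple regression of score on class size alone suggests a sizable
# negative "effect" of class size...
b_simple = np.polyfit(class_size, score, 1)[0]

# ...but a multiple regression that also includes income shows the
# class-size coefficient shrinking toward zero.
X = np.column_stack([np.ones(n), class_size, income])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
b_controlled = coef[1]
print(round(b_simple, 2), round(b_controlled, 2))
```

The simple slope is an ice cream effect: it reflects income, not class size. Multiple regression, covered later in the course, is one statistical tool for making this adjustment.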
As we will learn later in this course (e.g., when we consider multiple regression analysis), if done properly and thoughtfully this design can provide strongly suggestive evidence about how much an independent variable of special interest affects a dependent variable of concern.

CLASSICAL, RANDOMIZED EXPERIMENTAL DESIGN (two group form)
If properly and fully implemented, classical randomized experiments provide the strongest evidence of a causal relationship. The simplest subtype of a classical, randomized experimental design is the two-group form which we now consider here.
The two-group form consists of an experimental or treatment group that receives the intervention/new treatment (e.g., Scared Straight prison visits by at-risk youth) and a control group that does not receive the intervention/new treatment (receiving either nothing at all or the conventional treatment).
Given a pool of subjects, how are the two groups created? The two groups are created via random assignment. Random assignment means subjects are assigned in a "lottery-like," unbiased manner. There are several ways to pursue this. Suppose there are 100 treatment spots and a pool of 300 subjects. One could list the individual subjects (e.g., patients, cities, school districts) in an order unrelated to their characteristics and assign every third one to the treatment group; more commonly, one would use random numbers or a random shuffle. What clearly would NOT be random would be for the selector to look at the name, location, or other characteristics of each subject before placing him/her/it into the treatment or control group. (Note: I use the word "it" because the unit of analysis may not be individual persons but things such as books, counties, or residential properties.)
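A lottery-like assignment is easy to sketch in code. This is a minimal illustration with made-up subject labels, not a prescription for any particular study:

```python
import random

def random_assign(subjects, n_treatment, seed=42):
    """Randomly split a pool of subjects into treatment and control groups.

    A lottery-like assignment: shuffle the pool, then take the first
    n_treatment subjects as the treatment group. No characteristic of
    any subject is consulted, so the split is unbiased. The seed makes
    the assignment reproducible for auditing.
    """
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    return pool[:n_treatment], pool[n_treatment:]

# Hypothetical pool of 300 subjects with 100 treatment spots
subjects = [f"subject_{i}" for i in range(300)]
treatment, control = random_assign(subjects, 100)
print(len(treatment), len(control))  # 100 200
```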
Why go through the trouble of randomization? Well, the aim of randomization is to ensure that the experimental/treatment group and the control group are similar in nature prior to implementation of the intervention. In practice, a major way to check the success of the randomization process is to compare the two groups statistically with respect to various characteristics that have been shown or theorized to affect the outcome of interest. For instance, depending on what is being studied, one might compare the experimental group to the control group in terms of percentage who are male vs. female, average age, and so forth. A successful randomization process (assuming a sufficient number of cases in both groups) should ideally yield no statistically significant differences between the two groups. (Note: we will consider statistical significance in the second half of the course.)
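The balance check described above can be sketched as follows. The subject pool here is simulated (hypothetical ages and a made-up sex ratio), purely to show the mechanics of comparing group characteristics after randomization:

```python
import random
import statistics

# Hypothetical pool of 1,000 subjects with two background characteristics
# thought to affect the outcome of interest.
random.seed(1)
pool = [{"age": random.randint(14, 16), "male": random.random() < 0.6}
        for _ in range(1000)]

# Lottery-like assignment: shuffle, then split 300 / 700.
random.shuffle(pool)
treatment, control = pool[:300], pool[300:]

# Compare the groups on each characteristic. With enough cases,
# successful randomization should yield only small differences.
for group, name in [(treatment, "treatment"), (control, "control")]:
    mean_age = statistics.mean(s["age"] for s in group)
    pct_male = 100 * sum(s["male"] for s in group) / len(group)
    print(f"{name}: mean age {mean_age:.2f}, % male {pct_male:.1f}")
```

In a real study one would also apply significance tests to these comparisons, a topic taken up in the second half of the course.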
Why do we care about getting two similar groups? If the two groups are similar then presumably the only major difference between the two groups is the intervention/treatment. So, if the experimental group shows an outcome level or change in outcome level different from that of the control group, we can logically attribute that difference to the treatment. Another way to think of this is that the control group represents an estimate of what would have happened in the aggregate for the treatment group in the absence of the intervention.
A key independent variable in this design is group membership (i.e., whether the subject is in the treatment or control group). The dependent variable or variables reflect the outcome(s) of interest. The dependent variable(s) are always measured or assessed following exposure to the intervention; such variables are often (but not always) also assessed prior to the intervention being implemented.
Suppose we have a pool of 1,000 14- to 15-year-olds who have been identified by the local court system as potentially appropriate for in-prison visits as part of a Scared Straight program. Suppose only 300 can be given the intervention. We might then randomly assign 300 to the intervention (along with regular counseling services); the other 700 would not receive the intervention but would receive regular counseling services.
Suppose that out of the 300 receiving the intervention, "only 20%" are arrested for a criminal offense over the next 12 months. Can we say that the intervention resulted in only a 20% recidivism rate? That would be premature. Why? We need an estimate of what would have happened in the aggregate if the treatment group had not received the intervention. That is precisely what the control group in the above design provides.
Suppose the control group also had "only 20%" arrested for a criminal offense over the next 12 months. This would strongly suggest that the Scared Straight intervention had no impact overall. On the other hand, if the control group had 30% arrested, we would have evidence that the intervention reduced recidivism by 10 percentage points (i.e., 30% recidivism without the intervention, as suggested by the control group, versus 20% for the treatment group that received it). So, most in either group did not recidivate, but in this second scenario there appears to be an effect from being in the intervention group.
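The arithmetic of the second scenario, using the hypothetical counts from the example above, is simply a difference in group rates:

```python
# Hypothetical two-group comparison from the Scared Straight example.
# The control group's rate estimates what would have happened to the
# treatment group in the absence of the intervention.
n_treatment, n_control = 300, 700
rearrested_treatment = 60     # 20% of 300
rearrested_control = 210      # 30% of 700

rate_treatment = rearrested_treatment / n_treatment   # 0.20
rate_control = rearrested_control / n_control         # 0.30

# Estimated effect of the intervention, in percentage points
effect = (rate_control - rate_treatment) * 100
print(f"Estimated reduction in recidivism: {effect:.0f} percentage points")
```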
What if a new crime prevention program had been implemented at the same time as the Scared Straight-type program? Presumably, this "history" threat to internal validity (see pp. 59-60) is not a problem here, since the two groups are comparable in characteristics given random assignment and are both subject to the impact of this outside event. What if these youngsters naturally mature over time (see pp. 60-61) and become less likely to commit crimes? Presumably, this "maturation" threat to internal validity is not a problem either, because of the randomization process (and the ability to compare the treatment group to the control group).
As your book points out, one thing that needs to be checked is what is referred to as experimental mortality. As noted on p. 62 of the book, experimental mortality "arises when people (or entities) begin a program and later drop out before the study is completed. The difficulty with this is that dropouts may be different from those who complete the program, and the difference may affect the outcome."
Overall, however, if properly and fully implemented, randomized experimentation is considered the BMW of designs.
CONCLUDING POINTS
As mentioned above, randomized experiments, if properly and fully implemented, provide the strongest evidence of a causal relationship. If properly and fully implemented, the randomized experiment provides substantial protection against ice cream effects: randomization successfully done means that those receiving the treatment are comparable to those who do not. Certainly, if one is concerned about the impact of a certain set of policy or management interventions on various outcomes of interest, then seeking out relevant randomized experiments published in the research literature makes sense. One may find such studies dealing with matters as diverse as the impact of preventive services on reducing unnecessary placements into foster care and the impact of a pre-school program on adult outcomes such as high school completion, earnings, and crime rates.
However, for many questions randomized experimentation is often not feasible politically, technically, legally, ethically, administratively, and/or cost-wise. For instance, the impact of the type of district/ward system for city councils upon minority representation is a valid policy question, but it clearly is not politically or legally feasible for a central authority to randomly assign cities to district vs. mixed vs. at-large systems. Instead, either a cross-sectional design (as Robinson and Dye employed) or another approach (e.g., some form of quasi-experimental design, as we will consider in Module 4) may be more feasible and can provide strongly suggestive evidence relevant to the issue at hand. However, these designs need to use various statistical methods to account, as much as possible, for characteristic differences between the groups being compared (since the BMW method involving randomization cannot be utilized in such situations). We will consider a major example of such a statistical method (multiple regression analysis) during the second half of the course.
It should also be noted that ideally an agency analyst who is interested in research studies dealing with a question he/she confronts should examine multiple studies. The idea is to check for the degree of consistency of findings and to consider the range of persons/entities covered in those various studies.