Washington State Institute for Public Policy
110 Fifth Avenue Southeast, Suite 214 • PO Box 40999 • Olympia, WA 98504-0999 • (360) 586-2677 • www.wsipp.wa.gov

January 2006

EVIDENCE-BASED ADULT CORRECTIONS PROGRAMS:
WHAT WORKS AND WHAT DOES NOT‡
In recent years, public policy decision-makers
throughout the United States have expressed
interest in adopting “evidence-based” criminal
justice programs. Similar to the pursuit of
evidence-based medicine, the goal is to improve
the criminal justice system by implementing
programs and policies that have been shown to
work. Just as important, research findings can
be used to eliminate programs that have failed
to produce desired outcomes. Whether for
medicine, criminal justice, or other areas, the
watchwords of the evidence-based approach to
public policy include: outcome-based
performance, rigorous evaluation, and a positive
return on taxpayer investment.
This report to the Washington State Legislature
summarizes our latest review of evidence-based
adult corrections programs. We previously
published a review on this topic in 2001.1 In this
study, we update and significantly extend our
earlier effort.
The overall goal of this research is to provide
Washington State policymakers with a
comprehensive assessment of adult corrections
programs and policies that have a proven ability
to affect crime rates.
We are publishing our findings in two
installments. In this preliminary report, we
provide a systematic review of the evidence on
what works (and what does not) to reduce crime.
In a subsequent final report, to be published in
October 2006, we will extend this analysis to
include a benefit-cost estimate for each option.

‡ Suggested citation: Steve Aos, Marna Miller, and Elizabeth Drake (2006). Evidence-Based Adult Corrections Programs: What Works and What Does Not. Olympia: Washington State Institute for Public Policy.

1 S. Aos, P. Phipps, R. Barnoski, and R. Lieb (2001). The Comparative Costs and Benefits of Programs to Reduce Crime. Olympia: Washington State Institute for Public Policy.

Summary
This study provides a comprehensive
review of evidence-based programs for
adult offenders. We asked a simple
question: What works, if anything, to
lower the criminal recidivism rates of
adult offenders? To provide an answer,
we systematically reviewed the
evidence from 291 rigorous evaluations
conducted throughout the United States
and other English-speaking countries
during the last 35 years.
We find that some types of adult
corrections programs have a
demonstrated ability to reduce crime,
but other types do not. The implication
is clear: Washington’s adult corrections
system will be more successful in
reducing recidivism rates if policy
focuses on proven evidence-based
approaches.

Washington’s Offender Accountability Act
This research was undertaken as part of our
evaluation of Washington’s Offender
Accountability Act (OAA). Passed in 1999, the
OAA affects how the state provides community
supervision to adult felony offenders. In broad
terms, the OAA directs the Washington State
Department of Corrections to do two things:
1) Classify felony offenders according to their
risk for future offending as well as the
amount of harm they have caused society
in the past; and
2) Deploy more staff and rehabilitative
resources to higher-classified offenders
and—because budgets are limited—spend
correspondingly fewer dollars on lower-classified offenders.

When the Legislature enacted the OAA, it defined
a straightforward goal for the Act: to “reduce the
risk of reoffending by offenders in the
community.”2 To determine whether the OAA
results in lower recidivism rates, the Legislature
also directed the Washington State Institute for
Public Policy (Institute) to evaluate the impact of
the Act.3
Whether the OAA is able to affect crime rates will
depend, in part, on the policy and programming
choices made to implement the Act. As we show
in this report, there are some adult corrections
programs that have a demonstrated ability to
reduce crime, but there are other types of
programs that fail to affect crime rates. Given
these mixed results, it is reasonable to conclude
that the OAA (or any other adult corrections policy
initiative) will be successful in reducing crime only
if it encourages the implementation of effective
approaches and discourages the use of
ineffective programs. The purpose of this report
is to assist policymakers in sorting through the
many evidence-based choices.

The Evidence-Based Review: The Basic
Question
The goal of the present study is to answer a
simple question: Are there any adult corrections
programs that work? Additionally, in order to
estimate costs and benefits, we seek to estimate
the magnitude of the crime reduction effect of
each option.
To answer these fundamental questions, we
conducted a comprehensive statistical review of
all program evaluations conducted over the last
40 years in the United States and other English-speaking countries. As we describe, we found
291 evaluations of individual adult corrections
programs with sufficiently rigorous research to
be included in our analysis. These evaluations
were of many types of programs—drug courts,
boot camps, sex offender treatment programs,
and correctional industries employment
programs, to name a few.

It is important to note that only a few of these
291 evaluations were of Washington State adult
corrections programs; rather, almost all of the
evaluations in our review were of programs
conducted in other locations. A primary purpose
of our study is to take advantage of all these
rigorous evaluations and, thereby, learn whether
there are conclusions that can allow
policymakers in Washington to improve this
state’s adult criminal justice system.

Research Methods
The research approach we employ in this report
is called a “systematic” review of the evidence.
In a systematic review, the results of all rigorous
evaluation studies are analyzed to determine if,
on average, it can be stated scientifically that a
program achieves an outcome. A systematic
review can be contrasted with a so-called
“narrative” review of the literature where a writer
selectively cites studies to tell a story about a
topic, such as crime prevention. Both types of
reviews have their place, but systematic reviews
are generally regarded as more rigorous and,
because they assess all available studies and
employ statistical hypothesis tests, they have
less potential for drawing biased or inaccurate
conclusions. Systematic reviews are being used
with increased frequency in medicine, education,
criminal justice, and many other policy areas.4
For this report, the outcome of legislative
interest is crime reduction. In particular, since
the programs we consider in this review are
intended for adult offenders already in the
criminal justice system, the specific outcome of
interest is reduction in recidivism rates.
Therefore, the research question is
straightforward: What works, if anything, to lower
the recidivism rates of adult offenders?
As we describe in the Appendix, we only include
rigorous evaluation studies in our review. To be
included, an evaluation must have a non-treatment comparison group that is well matched
to the treatment group.

2 RCW 9.94A.010.

3 The Institute’s first five publications on the Offender Accountability Act are available for downloading at the Institute’s website: www.wsipp.wa.gov. The final OAA report is due in 2010.

4 An international effort aimed at organizing systematic reviews is the Campbell Collaboration—a non-profit organization that supports systematic reviews in the social, behavioral, and educational arenas. See: http://www.campbellcollaboration.org.

Researchers have developed a set
of statistical tools to facilitate
systematic reviews of the evidence.
The set of procedures is called
“meta-analysis,” and we employ that
methodology in this study.5 In the
Technical Appendices to this report we list the
specific coding rules and statistical
formulas we use to conduct the
analysis—technical readers can find
a full description of our methods and
detailed results.

Exhibit 1
Adult Corrections: What Works?
Estimated Percentage Change in Recidivism Rates
(and the number of studies on which the estimate is based)

Example of how to read the table: an analysis of 56 adult drug court evaluations indicates that drug courts achieve, on average, a statistically significant 10.7 percent reduction in the recidivism rates of program participants compared with a treatment-as-usual group.

Programs for Drug-Involved Offenders
  Adult drug courts: -10.7% (56)
  In-prison “therapeutic communities” with community aftercare: -6.9% (6)
  In-prison “therapeutic communities” without community aftercare: -5.3% (7)
  Cognitive-behavioral drug treatment in prison: -6.8% (8)
  Drug treatment in the community: -12.4% (5)
  Drug treatment in jail: -6.0% (9)

Programs for Offenders With Co-Occurring Disorders
  Jail diversion (pre- and post-booking programs): 0.0% (11)

Programs for the General Offender Population
  General and specific cognitive-behavioral treatment programs: -8.2% (25)

Programs for Domestic Violence Offenders
  Education/cognitive-behavioral treatment: 0.0% (9)

Programs for Sex Offenders
  Psychotherapy for sex offenders: 0.0% (3)
  Cognitive-behavioral treatment in prison: -14.9% (5)
  Cognitive-behavioral treatment for low-risk offenders on probation: -31.2% (6)
  Behavioral therapy for sex offenders: 0.0% (2)

Intermediate Sanctions
  Intensive supervision: surveillance-oriented programs: 0.0% (24)
  Intensive supervision: treatment-oriented programs: -21.9% (10)
  Adult boot camps: 0.0% (22)
  Electronic monitoring: 0.0% (12)
  Restorative justice programs for lower-risk adult offenders: 0.0% (6)

Work and Education Programs for the General Offender Population
  Correctional industries programs in prison: -7.8% (4)
  Basic adult education programs in prison: -5.1% (7)
  Employment training and job assistance in the community: -4.8% (16)
  Vocational education in prison: -12.6% (3)

Program Areas in Need of Additional Research & Development
(The following types of programs require additional research before it can be concluded that they do or do not reduce adult recidivism rates)
  Case management in the community for drug offenders: 0.0% (12)
  “Therapeutic community” programs for mentally ill offenders: -27.4% (2)
  Faith-based programs: 0.0% (5)
  Domestic violence courts: 0.0% (2)
  Intensive supervision of sex offenders in the community: 0.0% (4)
  Mixed treatment of sex offenders in the community: 0.0% (2)
  Medical treatment of sex offenders: 0.0% (1)
  COSA (Faith-based supervision of sex offenders): -31.6% (1)
  Regular parole supervision vs. no parole supervision: 0.0% (1)
  Day fines (compared to standard probation): 0.0% (1)
  Work release programs: -5.6% (4)

5 We follow the meta-analytic methods described in: M. W. Lipsey and D. Wilson (2001). Practical meta-analysis. Thousand Oaks: Sage Publications.

6 Technical meta-analytical results are presented in Exhibit 2.

Findings

The findings from our systematic review of the adult corrections evaluation literature are summarized on Exhibit 1.6 We show the expected percentage change in recidivism rates for many types of evaluated adult corrections programs. A zero percent change means that, based on our review, a program does not achieve a statistically significant change in recidivism rates compared with treatment as usual.

We found a number of adult corrections programs that have a demonstrated ability to achieve reductions in recidivism rates. We also found other approaches that do not reduce recidivism. Thus, the first basic lesson from our evidence-based review is that some adult corrections programs work and some do not. A direct implication from these mixed findings is that a corrections policy that reduces recidivism will be one that focuses resources on effective evidence-based programming and avoids ineffective approaches.

As an example of the information on Exhibit 1, we analyzed the findings
from 25 well-researched cognitive-

behavioral treatment programs for general adult
offenders. We found that, on average, these
programs can be expected to reduce recidivism
rates by 8.2 percent. That is, without a
cognitive-behavioral program we expect that
about 49 percent of these offenders will
recidivate with a new felony conviction after an
eight-year follow-up. With a cognitive-behavioral
treatment program, we expect the recidivism
probability to drop four points to 45 percent—an
8.2 percent reduction in recidivism rates.
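In other words, the 8.2 percent figure is simply the relative change implied by the two recidivism probabilities quoted above:

$\dfrac{0.49 - 0.45}{0.49} \approx 0.082$, an 8.2 percent relative reduction.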
It is important to note that even relatively small
reductions in recidivism rates can be quite cost-beneficial. For example, a 5 percent reduction
in the reconviction rates of high-risk offenders
can generate significant benefits for taxpayers
and crime victims. Moreover, a program that
has no statistically significant effect on
recidivism rates can be cost-beneficial if the cost
of the program is less than the cost of the
alternative. Jail diversion programs are
examples of this; even if research demonstrates
that diversion programs have no effect on
recidivism, the programs may still be
economically attractive if they cost less than
avoided jail costs. In the final version of this
report, to be delivered to the Legislature in
October 2006, we will present full benefit-cost
estimates for each of the programs shown in
Exhibit 1.7

Findings by Type of Program
We organized our review of the adult corrections
evidence base into eight categories of correctional
programming (as shown in Exhibit 1). A brief
discussion of our findings for each of these
categories follows.

Programs for Drug-Involved Offenders. We
analyzed 92 rigorous evaluations of drug
treatment programs. These programs are for
drug-involved adult offenders in a variety of prison
and community settings. We found that, on
average, drug treatment leads to a statistically
significant reduction in criminal recidivism rates.
We examined adult drug courts, in-prison
therapeutic communities, and other types of drug
treatment, including cognitive-behavioral
approaches.

7 An overview of what will be included in the October 2006 report can be found at www.wsipp.wa.gov: Steve Aos (2006). Options to Stabilize Prison Populations in Washington State, Interim Report. Olympia: Washington State Institute for Public Policy.
Adult Drug Courts. Specialized courts for drug-involved offenders have proliferated throughout the United States, and there are several adult drug courts in Washington. We found 56 evaluations with sufficient rigor to be included in our statistical review. We conclude that drug courts achieve, on average, a statistically significant 10.7 percent reduction in the recidivism rates of program participants relative to treatment-as-usual comparison groups.
In-Prison Therapeutic Communities. Programs
for drug offenders in a prison or jail setting are
typically called “therapeutic communities” when
they contain separate residential units for the
offenders and when they follow group-run
principles of organizing and operating the drug-free unit. Some evaluations of the effectiveness
of in-prison therapeutic community programs have
also included community-based aftercare for
offenders once they leave incarceration. Based
on our review of the evaluation literature, we
found that the average therapeutic community
reduces recidivism by 5.3 percent. The
community aftercare component, however,
produces only a modest additional boost to
program effectiveness—to a 6.9 percent
reduction. Thus, most of the recidivism reduction
effect appears to stem from the prison-based
therapeutic community experience for these
offenders.
Other Types of Drug Treatment. As shown in Exhibit 1, we also studied the effects of three other types of drug treatment modalities: prison-based drug treatment that employs a cognitive-behavioral approach, general drug treatment approaches in the community, and general drug treatment programs in local jails. We found that each of these approaches achieves, on average, a statistically significant reduction in recidivism.

Jail Diversion Programs for Offenders With
Mental Illness and Co-Occurring Disorders.
There is a young but growing research literature
testing the effectiveness of jail diversion programs
for mentally ill adults and for offenders with co-occurring mental health and substance abuse
disorders. Some of these are pre-booking
programs implemented by the police, and some
are post-booking programs implemented by court
personnel, such as mental health courts. We
found 11 evaluations with sufficient research rigor

to be included in our review. Eight of these
programs were part of a recent federally-funded
effort (Broner et al., 2004). On average, these
approaches have not demonstrated a statistically
significant reduction in the recidivism rates of
program participants. This null finding does not
mean the programs are not valuable; since they
are typically designed to divert offenders from
costly sentences in local jails, they may save
more money than the programs cost. As
mentioned earlier, we will review the economics of
all programs in the present study in our October
2006 final report.

Treatment Programs for the General
Offender Population.
Cognitive-Behavioral Treatment. We found 25
rigorous evaluations of programs for the general
offender population that employ cognitive-behavioral treatment. This type of group therapy
addresses the irrational thoughts and beliefs that
lead to anti-social behavior. The programs are
designed to help offenders correct their thinking
and provide opportunities to model and practice
problem-solving and pro-social skills. On
average, we found these programs significantly
reduce recidivism by 8.2 percent. We identified
three well-defined programs that provide
manuals and staff training regimens: Reasoning
and Rehabilitation (R&R), Moral Reconation
Therapy (MRT), and Thinking for a Change
(T4C). Effects of R&R and MRT are significant
and similar to each other and to the other
cognitive-behavioral treatment programs in our
review. Only a single evaluation of T4C is
currently available. Since, on average, all of
these programs produce similar results, we
recommend the state choose any of the three
well-defined programs for implementation in
Washington.

Programs for Domestic-Violence Offenders
Education/Cognitive-Behavioral Treatment.
Treatment programs for domestic violence
offenders most frequently involve an educational
component focusing on the historical oppression
of women and cognitive-behavioral treatment
emphasizing alternatives to violence. Treatment
is commonly mandated by the court. Based on
our review of nine rigorous evaluations, domestic
violence treatment programs have yet, on
average, to demonstrate reductions in recidivism.

Programs for Sex Offenders.8 We found 18
well-designed evaluations of treatment programs
for sex offenders. Some of these programs are
located in a prison setting and some are in the
community. Sex offenders sentenced to prison are
typically convicted of more serious crimes than
those sentenced to probation. We found that
cognitive-behavioral treatments are, on average,
effective at reducing recidivism, but other types of
sex offender treatment fail to demonstrate
significant effects on further criminal behavior.
Psychotherapy/Counseling for Sex Offenders.9
These programs involve insight-oriented individual
or group therapy or counseling. We found only
three rigorous studies of this approach to
treatment. The results indicate that this approach
does not reduce recidivism in sex offenders.
Cognitive-Behavioral Treatment of Sex Offenders in Prison. Sex offenders sentenced to prison are typically convicted of more serious crimes than those sentenced to probation. We examined five rigorous studies of these specialized cognitive-behavioral programs that may also include behavioral reconditioning to discourage deviant arousal, and modules addressing relapse prevention. Among the five programs in this category was a randomized trial10 with an eight-year follow-up showing small but non-significant effects on recidivism. On average across all five studies, however, we found that cognitive-behavioral therapy for sex offenders in prison significantly reduces recidivism by 14.9 percent.
Cognitive-Behavioral Treatment of Low-Risk Sex
Offenders on Probation. Offenders sentenced to
probation have usually been convicted of less
serious crimes than sex offenders sentenced to
prison. Cognitive-behavioral programs for sex
offenders on probation are similar to the programs
in prisons, and may also incorporate behavioral
reconditioning and relapse prevention. We found
six rigorous studies and conclude that cognitive-behavioral therapy for sex offenders on probation
significantly reduces recidivism. As a group, these
programs demonstrated the largest effects
observed in our analysis.

8 The categories of sex offender treatment listed here are based on those outlined in two recent reviews of sex offender treatment literature: R. K. Hanson, A. Gordon, A. J. Harris, J. K. Marques, W. Murphy, V. L. Quinsey, and M. C. Seto (2002). First report of the collaborative outcome data project on the effectiveness of psychological treatment for sex offenders. Sexual Abuse: A Journal of Research and Treatment, 14(2): 169-194; F. Losel and M. Schmucker (2005). The effectiveness of treatment for sexual offenders: A comprehensive meta-analysis. Journal of Experimental Criminology, 1: 117-146.

9 Psychotherapy and counseling are not currently used as stand-alone treatment for sex offenders (Hanson et al., 2002).

10 J. K. Marques, M. Wiederanders, D. M. Day, C. Nelson, and A. van Ommeren (2005). Effects of a relapse prevention program on sexual recidivism: Final results from California's Sex Offender Treatment and Evaluation Project (SOTEP). Sexual Abuse: A Journal of Research and Treatment, 17(1): 79-107.
Behavioral Treatment of Sex Offenders. Behavioral
treatments focus on reducing deviant arousal
(using biofeedback or other conditioning) and
increasing skills necessary for social interaction
with age-appropriate individuals. The two rigorous
studies of programs using only behavioral
treatment failed to show reductions in recidivism.

Intermediate Sanctions. In the 1980s and 1990s a
number of sanctioning and sentencing alternatives
were proposed and evaluated. Interest in
developing additional alternatives continues. We
found studies that center on five types of these
“intermediate” sanctions.
Intensive Supervision With and Without a Focus on
Treatment. We found 24 evaluations of intensive
community supervision programs where the focus
was on offender monitoring and surveillance. These
programs are usually implemented by lowering the
caseload size of the community supervision officer.
This approach to offender management has not, on
average, produced statistically significant reductions
in recidivism rates. On the other hand, intensive
supervision programs where the focus is on
providing treatment services for the offenders have
produced significant reductions; we found 10 well-researched evaluations of treatment-oriented
intensive supervision programs that on average
produced considerable recidivism reductions. The
lesson from this research is that it is the treatment—
not the intensive monitoring—that results in
recidivism reduction.
Adult Boot Camps. Boot camps are intensive
regimens of training, drilling, and some treatment.
We found 24 rigorous evaluations of adult boot
camps and, on average, they do not produce a
statistically significant reduction in re-offense rates.
As with our comment on jail diversion programs,
however, it is possible that boot camps are
economically attractive if they cost less to run than
the alternative. Our October 2006 report will
analyze the economics of adult boot camps.
Electronic Monitoring. Supervision of offenders in
the community that is aided with electronic
monitoring devices has been the focus of some
rigorous evaluation efforts. We found 12 control-group studies; on average they indicate that
electronic monitoring does not reduce recidivism.

Restorative Justice for Lower-Risk Adult
Offenders. Restorative justice approaches have
been tried for both juvenile and adult offenders.
Offenders placed in restorative justice programs
are often, but not always, lower risk compared with
offenders processed through the usual court
procedures. Restorative justice typically involves
a form of victim-offender mediation, family group
conferences, or restitution. We found six rigorous
evaluations of these programs for adult offenders.
On average, they did not result in lower recidivism
rates. Our October 2006 report will also report on
restorative justice programs for juvenile offenders.
Unlike our findings for the restorative justice
programs for adult offenders, our preliminary
findings indicate that restorative justice programs
do achieve significant reductions in recidivism
rates of lower-risk juvenile offenders.

Work and Education Programs for General
Offenders. We found 30 rigorous evaluations of
programs that attempt to augment the
educational, vocational, and job skills of adult
offenders. Some of these programs are for
offenders in prison and some are in community
settings. On average, we found that employment- and education-related programs lead to modest
but statistically significant reductions in criminal
recidivism rates. We examined the following five
categories of these programs.
In-prison Correctional Industries Program. Most
states run in-prison correctional industries
programs, yet only a few have been evaluated
rigorously. We located only four outcome
evaluations of correctional industries programs.
On average, these programs produce a
statistically significant reduction in recidivism
rates. Our updated economic analysis of this
finding will be presented in October 2006.
Basic Adult Education Programs in Prison. We
found seven rigorous evaluations of programs that
teach remedial educational skills to adult
offenders when they are in prison. On average,
these programs reduce the recidivism rates of
program participants.
Employment Training and Job Assistance
Programs in the Community. We analyzed the
results of 16 rigorous evaluations of community-based employment training, job search, and job
assistance programs for adult offenders. These
programs produce a modest but statistically
significant reduction in recidivism.

Vocational Education Programs in Prison. We
found only three quality studies of vocational
training programs for offenders while they are in
prison. On average, the programs appear to
reduce recidivism, but additional tests of this
tentative finding are necessary.

Programs Requiring Further Study. In our
review of the adult corrections literature, we were
unable to draw conclusions about recidivism
reduction for a number of programs. In Exhibit 1,
we list these inconclusive findings at the bottom of
the table. For each of these approaches, further
research is required before even tentative
conclusions can be drawn.11
Case Management in the Community for Drug
Offenders. These types of programs typically
involve an outside third-party agency that
provides case coordination services and drug
testing. The goal is to provide the coordination of
other existing monitoring and treatment services
for offenders in the community. We found 12
rigorous tests of this approach. Our statistical
tests reveal that while, on average, these
programs have no significant effect on recidivism,
some case management programs do have an
effect and some do not. This inconclusive result
means that additional research is required on this
class of programming in order to identify the
aspects of case management that are effective or
ineffective. In other words, additional research
may indicate that some forms of case
management reduce recidivism.12
“Therapeutic Community” Programs for Mentally
Ill Offenders. A relatively new approach to
providing treatment to mentally-ill offenders
follows a modified version of the therapeutic
community approach to drug offenders
described earlier. This approach appears to
show promise in reducing recidivism rates.
However, this is based on only two rigorous
studies, and they involved small samples of
offenders. Thus, this is an approach that
requires additional research.

11 Technical Note. As we explain in the technical appendix, we employ “fixed effects” and “random effects” modeling to derive meta-analytic estimates of program effectiveness. Sometimes, a collection of evaluations of similar programs has significant findings on recidivism when judged with fixed effects modeling, but the same set of programs has insignificant findings when a random effects model is used. This situation provides an indication that additional meta-analytic research is needed to identify the factors that produced the heterogeneity in the outcomes. Several of the programs listed here fall into this category. For more information, see the technical appendices.

12 As a technical note, Exhibit 2 shows that case management services produce a marginally significant (p=.114) effect on recidivism in a fixed effects model, but the model indicates significant (p=.000) heterogeneity. The random effects model indicates non-significance (p=.48). Thus, a multivariate meta-analysis of this literature may isolate the factors that were associated with successful approaches among the 12 studies.
Faith-Based Programs. These Christian-based
programs provide religious ministry, including
bible study, to offenders in prison and/or when
offenders re-enter the community. The faith-based offender programs that have been
evaluated to date do not significantly reduce
recidivism.13 Rigorous evaluations of faith-based
programs are still relatively rare—we found only
five thorough evaluations—and future studies may
provide evidence of better outcomes.
Domestic Violence Courts. These specialized
courts are designed to provide an effective,
coordinated response to domestic violence.
Domestic violence courts commonly bring
together criminal justice and social service
agencies and may mandate treatment for
offenders. The two courts included here
differed—one was exclusively for felony cases
and the other for misdemeanors. In the
misdemeanor court, recidivism was lowered, while
the felony court observed increased recidivism.
Thus, this is an area that requires additional
research.
Intensive Supervision of Sex Offenders in the
Community. The programs included in the analysis
were all developed in Illinois and varied by county.
All involve a specialized probation caseload,
frequent face-to-face meetings with offenders, and
home visits and inspections. Supervision programs
may also include treatment. The recidivism results
in the four counties vary widely, suggesting that
some of the programs may be effective while others
are not. Additional research is needed to identify
these characteristics.
Mixed Treatment of Sex Offenders. Two rigorous
studies evaluated community sex offender
treatments employed across geographic areas
(Washington State and British Columbia). In each
case, the individual treatment programs varied
widely. On average, these mixtures of treatments
significantly reduced recidivism; however, while
the treatments in Washington were significant and
large, those in British Columbia were very small
and non-significant. Controlling for the variation,
the overall effect was zero.
13 Similar findings were recently published in a review of faith-based prison programs: J. Burnside, N. Loucks, J. R. Adler, and G. Rose (2005). My brother’s keeper: Faith-based units in prison. Cullompton, Devon, U.K.: Willan Publishing, p. 314.

Medical Treatment of Sex Offenders. Several
medical approaches to treating sex offenders
have been tried. These include castration and
two types of hormonal therapy. Ethical
considerations have made it difficult to conduct
rigorous evaluations of these types of treatment.
The single study we used in our analysis
compared men who volunteered for castration to
another group who volunteered but did not
receive the surgery. Recidivism was significantly
less among castrated offenders.
Circles of Support and Accountability (COSA/
Faith-Based Supervision of Sex Offenders). This
program originated among members of the
Mennonite church in Canada. Volunteers provide
support to sex offenders being released from
prison. Five lay volunteers visit or contact the
offender every week. The volunteers are
supported by community-based professionals,
typically psychologists, law enforcement,
correctional officers, or social service workers; the
full circle meets weekly. The single evaluation of
this program showed a significant reduction in
recidivism of 31.6 percent.
Regular Parole Supervision vs. No Parole
Supervision. The Urban Institute recently
reported the results of a study that compared the
recidivism rates of adult prisoners released from
prison with parole to those released from prison
without parole. The study used a large national
database covering 15 states. It found no
statistically significant effect of parole on
recidivism. This null result is consistent with our
results for surveillance-oriented intensive
supervision programs versus regular levels of
supervision (reported above). We would like to
see additional treatment and comparison group
tests of the parole vs. no-parole question before
drawing firm conclusions.


Day Fines (compared with standard probation).
We found one rigorous study of “day fines.”
These fines, which are more common in Europe
than the United States, allow judges to impose
fines that are commensurate with an offender’s
ability to pay and the seriousness of the offense.
This approach has been evaluated for low-risk
felony offenders and was used to divert these
offenders from regular parole supervision. The
approach had no effect on recidivism rates but
additional research is needed to estimate whether
this sentencing alternative is cost-beneficial.
Work Release Programs. We found only four
quality studies of work release programs. While,
on average, these programs appear to reduce
recidivism, more rigorous outcome research is
needed on this type of adult corrections program.

Technical Appendices
Appendix 1: Meta-Analysis Coding Criteria
Appendix 2: Procedures for Calculating Effect Sizes
Appendix 3: Institute Adjustments to Effect Sizes for Methodological Quality, Outcome Measure
Relevance, and Researcher Involvement
Appendix 4: Meta-Analytic Results—Estimated Effect Sizes and Citations to Studies Used in the
Analyses

Appendix 1: Meta-Analysis Coding Criteria

A meta-analysis is only as good as the selection and coding criteria used to conduct the study. The following are the key choices we made and implemented for this meta-analysis of adult corrections programs.

1. Study Search and Identification Procedures. We searched for all adult corrections evaluation studies conducted since 1970. The studies had to be written in English. We used three primary means to identify and locate these studies: a) we consulted the study lists of other systematic and narrative reviews of the adult corrections research literature—there have been a number of recent reviews on particular topics; b) we examined the citations in the individual studies; and c) we conducted independent literature searches of research databases using search engines such as Google, Proquest, Ebsco, ERIC, and SAGE. As we describe, the most important inclusion criterion in our study was that an evaluation have a control or comparison group. Therefore, after first identifying all possible studies using these search methods, we attempted to determine whether the study was an outcome evaluation that had a comparison group. If a study met these criteria, we then secured a paper copy of the study for our review.

2. Peer-Reviewed and Other Studies. We examined all program evaluation studies we could locate with these search procedures. Many of these studies were published in peer-reviewed academic journals, while many others were from government reports obtained from the agencies themselves. It is important to include non-peer-reviewed studies, because it has been suggested that peer-reviewed publications may be biased to show positive program effects. Therefore, our meta-analysis included all available studies regardless of published source.

3. Control and Comparison Group Studies. We only included studies in our analysis if they had a control or comparison group. That is, we did not include studies with a single-group, pre-post research design. This choice was made because we believe that it is only through rigorous comparison group studies that average treatment effects can be reliably estimated.

4. Exclusion of Studies of Program Completers Only. We did not include a comparison study in our meta-analytic review if the treatment group was made up solely of program completers. We adopted this rule because we believe there are too many significant unobserved self-selection factors that distinguish a program completer from a program dropout, and that these unobserved factors are likely to significantly bias estimated treatment effects. Some comparison group studies of program completers, however, contain information on program dropouts in addition to a comparison group. In these situations, we included the study if sufficient information was provided to allow us to reconstruct an intent-to-treat group that included both completers and non-completers, or if the demonstrated rate of program non-completion was very small (e.g., under 10 percent). In these cases, the study still needed to meet the other inclusion requirements listed here.

5. Random Assignment and Quasi-Experiments. Random assignment studies were preferred for inclusion in our review, but we also included non-randomly assigned control groups. We included quasi-experimental studies only if sufficient information was provided to demonstrate comparability between the treatment and comparison groups on important pre-existing conditions such as age, gender, and prior criminal history. Of the 291 individual studies in our review, about 20 percent were effects estimated from well-implemented random assignment studies.

6. Enough Information to Calculate an Effect Size. Following the statistical procedures in Lipsey and Wilson (2001), a study had to provide the necessary information to calculate an effect size. If the necessary information was not provided, the study was not included in our review.

7. Mean-Difference Effect Sizes. For this study we coded mean-difference effect sizes following the procedures in Lipsey and Wilson (2001). For dichotomous crime measures, we used the arcsine transformation to approximate the mean difference effect size, again following Lipsey and Wilson. We chose to use the mean-difference effect size rather than the odds ratio effect size because we frequently coded both dichotomous and continuous outcomes (odds ratio effect sizes could also have been used with appropriate transformations).

8. Unit of Analysis. Our unit of analysis for this study was an independent test of a treatment in a particular site. Some studies reported outcome evaluation information for multiple sites; we included each site as an independent observation if a unique and independent comparison group was also used at each site.

9. Multivariate Results Preferred. Some studies presented two types of analyses: raw outcomes that were not adjusted for covariates such as age, gender, and criminal history; and those that had been adjusted with multivariate statistical methods. In these situations, we coded the multivariate outcomes.

10. Broadest Measure of Criminal Activity. Some studies presented several types of crime-related outcomes. For example, studies frequently measured one or more of the following outcomes: total arrests, total convictions, felony arrests, misdemeanor arrests, violent arrests, and so on. In these situations, we coded the broadest crime outcome measure. Thus, most of the crime outcome measures that we coded in this analysis were total arrests and total convictions.

11. Averaging Effect Sizes for Arrests and Convictions. When a study reported both total arrests and total convictions, we calculated an effect size for each measure and then took a simple average of the two effect sizes.

12. Dichotomous Measures Preferred Over Continuous Measures. Some studies included two types of measures for the same outcome: a dichotomous (yes/no) outcome and a continuous (mean number) measure. In these situations, we coded an effect size for the dichotomous measure. Our rationale for this choice is that in small or relatively small sample studies, continuous measures of crime outcomes can be unduly influenced by a small number of outliers, while dichotomous measures can avoid this problem. Of course, if a study only presented a continuous measure, then we coded the continuous measure.

13. Longest Follow-Up Times. When a study presented outcomes with varying follow-up periods, we generally coded the effect size for the longest follow-up period. The reason for this is that our intention for this analysis is to compute the long-run benefits and costs of different programs. The longest follow-up period allows us to gain the most insight into the long-run effect of these programs on criminality. Occasionally, we did not use the longest follow-up period if it was clear that a longer reported follow-up period adversely affected the attrition rate of the treatment and comparison group samples.

14. Measures of New Criminal Activity. Whenever possible, we excluded outcome measures that did not report on new criminal activity. For example, we avoided coding measures of technical violations of probation or parole. We do not think that technical violations are unimportant, but our purpose in this meta-analysis is to ascertain whether these programs affect new criminal activity.

15. Some Special Coding Rules for Effect Sizes. Most studies in our review had sufficient information to code exact mean-difference effect sizes. Some studies, however, reported some, but not all, of the information required. The rules we followed for these situations are these (a short computational sketch follows at the end of this appendix):

   a. Two-Tail P-Values. Some studies only reported p-values for significance testing of program outcomes. When we had to rely on these results, if the study reported a one-tail p-value, we converted it to a two-tail test.

   b. Declaration of Significance by Category. Some studies reported results of statistical significance tests in terms of categories of p-values. Examples include: p<=.01, p<=.05, or "non-significant at the p=.05 level." We calculated effect sizes for these categories by using the highest p-value in the category. Thus, if a study reported significance at "p<=.05," we calculated the effect size at p=.05. This is the most conservative strategy. If the study simply stated a result was "non-significant," we computed the effect size assuming a p-value of .50 (i.e., p=.50).
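To illustrate rule 15, the sketch below shows one common way to recover an approximate effect size when a study reports only a p-value and the group sizes. It assumes the normal-approximation conversion described in Lipsey and Wilson (2001); the function name and example figures are ours, and the Institute's actual coding rules may differ in detail.

```python
# Hypothetical sketch: approximate effect size from a reported two-tail p-value.
from statistics import NormalDist

def effect_size_from_p(p_two_tail: float, n_treat: int, n_comp: int,
                       favors_treatment: bool = True) -> float:
    """Convert a reported two-tail p-value into an approximate
    standardized mean-difference effect size (normal approximation)."""
    # Rule 15b: a bare "non-significant" result is coded at p = .50.
    p = min(max(p_two_tail, 1e-12), 0.9999999)
    z = NormalDist().inv_cdf(1.0 - p / 2.0)                 # two-tail p -> |z|
    es = z * ((n_treat + n_comp) / (n_treat * n_comp)) ** 0.5
    # A negative effect size denotes less crime for the treatment group,
    # matching the sign convention in Exhibits 1 and 2.
    return -es if favors_treatment else es

# Example: a study reports "significant at p <= .05" (coded at p = .05)
# with 150 treatment and 150 comparison subjects.
print(round(effect_size_from_p(0.05, 150, 150), 3))          # about -0.226
```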

Appendix 2: Procedures for Calculating Effect Sizes

Effect sizes measure the degree to which a program has been shown to change an outcome for program participants relative to a comparison group. There are several methods used by meta-analysts to calculate effect sizes, as described in Lipsey and Wilson (2001). In this study, we use statistical procedures to calculate the mean difference effect sizes of programs. We did not use the odds-ratio effect size because many of the outcomes measured in this study are continuously measured. Thus, the mean difference effect size was a natural choice.

Many of the outcomes we record, however, are measured as dichotomies. For these yes/no outcomes, Lipsey and Wilson (2001) show that the mean difference effect size calculation can be approximated using the arcsine transformation of the difference between proportions.14

(A1)    $ES_{m(p)} = 2 \arcsin\sqrt{P_e} - 2 \arcsin\sqrt{P_c}$

In this formula, ESm(p) is the estimated effect size for the difference between proportions from the research information; Pe is the proportion of the experimental or treatment group that had the outcome, such as re-arrest; and Pc is the proportion of the control or comparison group that had the outcome.
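As a minimal illustration of equation (A1), the short Python sketch below computes the arcsine-approximated effect size from two recidivism proportions; the function name and example values are ours, not the Institute's.

```python
import math

def es_arcsine(p_treat: float, p_comp: float) -> float:
    """Equation (A1): arcsine-approximated mean-difference effect size
    for a dichotomous outcome (e.g., the share of each group re-arrested)."""
    return 2 * math.asin(math.sqrt(p_treat)) - 2 * math.asin(math.sqrt(p_comp))

# Example: 45 percent of the treatment group and 49 percent of the
# comparison group recidivate; the negative sign indicates less crime.
print(round(es_arcsine(0.45, 0.49), 3))   # -> -0.082
```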
A second effect size calculation involves continuous data where the differences are in the means of an outcome. When an evaluation reports this type of information, we use the standard mean difference effect size statistic.15

14 Lipsey and Wilson, Practical meta-analysis, Table B10, formula (22).
15 Ibid., Table B10, formula (1).

(A2)    $ES_m = \dfrac{M_e - M_c}{\sqrt{\dfrac{SD_e^2 + SD_c^2}{2}}}$

In this formula, ESm is the estimated effect size for the difference between means from the research information; Me is the mean number of an outcome for the experimental group; Mc is the mean number of an outcome for the control group; SDe is the standard deviation of the mean number for the experimental group; and SDc is the standard deviation of the mean number for the control group.

Often, research studies report the mean values needed to compute ESm in (A2), but they fail to report the standard deviations. Sometimes, however, the research will report information about statistical tests or confidence intervals that can then allow the pooled standard deviation to be estimated. These procedures are also described in Lipsey and Wilson (2001).

Adjusting Effect Sizes for Small Sample Sizes

Since some studies have very small sample sizes, we follow the recommendation of many meta-analysts and adjust for this. Small sample sizes have been shown to upwardly bias effect sizes, especially when samples are less than 20. Following Hedges (1981),16 Lipsey and Wilson (2001)17 report the "Hedges correction factor," which we use to adjust all mean difference effect sizes (N is the total sample size of the combined treatment and comparison groups):

(A3)    $ES'_m = \left[1 - \dfrac{3}{4N - 9}\right] \times ES_m \text{ (or } ES_{m(p)}\text{)}$

Computing Weighted Average Effect Sizes, Confidence Intervals, and Homogeneity Tests

Once effect sizes are calculated for each program effect, the individual measures are summed to produce a weighted average effect size for a program area. We calculate the inverse variance weight for each program effect, and these weights are used to compute the average. These calculations involve three steps. First, the standard error, SEm, of each mean effect size is computed with:18

(A4)    $SE_m = \sqrt{\dfrac{n_e + n_c}{n_e n_c} + \dfrac{(ES'_m)^2}{2(n_e + n_c)}}$

In equation (A4), ne and nc are the number of participants in the experimental and control groups and ES'm is from equation (A3).

Next, the inverse variance weight wm is computed for each mean effect size with:19

(A5)    $w_m = \dfrac{1}{SE_m^2}$

The weighted mean effect size for a group of studies in program area i is then computed with:20

(A6)    $\overline{ES} = \dfrac{\sum \left(w_{mi} \, ES'_{mi}\right)}{\sum w_{mi}}$

Confidence intervals around this mean are then computed by first calculating the standard error of the mean with:21

(A7)    $SE_{\overline{ES}} = \sqrt{\dfrac{1}{\sum w_{mi}}}$

Next, the lower, ESL, and upper limits, ESU, of the confidence interval are computed with:22

(A8)    $ES_L = \overline{ES} - z_{(1-\alpha)} \left(SE_{\overline{ES}}\right)$

(A9)    $ES_U = \overline{ES} + z_{(1-\alpha)} \left(SE_{\overline{ES}}\right)$

In equations (A8) and (A9), z(1-α) is the critical value for the z-distribution (1.96 for α = .05).

The test for homogeneity, which provides a measure of the dispersion of the effect sizes around their mean, is given by:23

(A10)    $Q_i = \left(\sum w_i ES_i^2\right) - \dfrac{\left(\sum w_i ES_i\right)^2}{\sum w_i}$

The Q-test is distributed as a chi-square with k-1 degrees of freedom (where k is the number of effect sizes).

Computing Random Effects Weighted Average Effect Sizes and Confidence Intervals

When the p-value on the Q-test indicates significance at values of p less than or equal to .05, a random effects model is performed to calculate the weighted average effect size. This is accomplished by first calculating the random effects variance component, v:24

(A11)    $v = \dfrac{Q_i - (k - 1)}{\sum w_i - \left(\sum w_i^2 \,/\, \sum w_i\right)}$

This random variance factor is then added to the variance of each effect size and then all inverse variance weights are recomputed, as are the other meta-analytic test statistics.

16 L. V. Hedges (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6: 107-128.
17 Lipsey and Wilson, Practical meta-analysis, 49, formula 3.22.
18 Ibid., 49, equation 3.23.
19 Ibid., 49, equation 3.24.
20 Ibid., 114.
21 Ibid., 114.
22 Ibid., 114.
23 Ibid., 116.
24 Ibid., 134.
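To make the sequence of calculations concrete, the sketch below strings equations (A2) through (A11) together for a few hypothetical studies. The function names, the truncation of the variance component at zero, and the example inputs are our own assumptions; the report's actual computations follow Lipsey and Wilson (2001).

```python
import math
from typing import List, Tuple

def es_mean_difference(m_e, m_c, sd_e, sd_c):
    """(A2) Standardized mean-difference effect size for continuous outcomes."""
    pooled_sd = math.sqrt((sd_e ** 2 + sd_c ** 2) / 2.0)
    return (m_e - m_c) / pooled_sd

def hedges_correction(es, n_total):
    """(A3) Small-sample (Hedges) correction applied to each effect size."""
    return (1.0 - 3.0 / (4.0 * n_total - 9.0)) * es

def standard_error(es_adj, n_e, n_c):
    """(A4) Standard error of a corrected effect size."""
    return math.sqrt((n_e + n_c) / (n_e * n_c) + es_adj ** 2 / (2.0 * (n_e + n_c)))

def pool(studies: List[Tuple[float, int, int]], z_crit: float = 1.96):
    """Fixed-effects pooling (A5-A9), homogeneity Q (A10), and the random
    effects variance component (A11) for (effect_size, n_treat, n_comp) tuples."""
    es_adj, weights = [], []
    for es, n_e, n_c in studies:
        e = hedges_correction(es, n_e + n_c)
        w = 1.0 / standard_error(e, n_e, n_c) ** 2          # (A5) inverse variance weight
        es_adj.append(e)
        weights.append(w)
    sum_w = sum(weights)
    mean_es = sum(w * e for w, e in zip(weights, es_adj)) / sum_w        # (A6)
    se_mean = math.sqrt(1.0 / sum_w)                                     # (A7)
    ci = (mean_es - z_crit * se_mean, mean_es + z_crit * se_mean)        # (A8), (A9)
    q = sum(w * e ** 2 for w, e in zip(weights, es_adj)) - \
        sum(w * e for w, e in zip(weights, es_adj)) ** 2 / sum_w         # (A10)
    k = len(studies)
    # (A11) Random effects variance component; when Q is significant, it is
    # added to each study's variance and the weights are recomputed.
    v = max(0.0, (q - (k - 1)) / (sum_w - sum(w ** 2 for w in weights) / sum_w))
    return mean_es, ci, q, v

# Example with three hypothetical studies (a negative value means less crime).
print(pool([(-0.20, 120, 115), (-0.05, 300, 310), (-0.12, 80, 82)]))
```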

Appendix 3: Institute Adjustments to Effect Sizes
for Methodological Quality, Outcome Measure
Relevance, and Researcher Involvement

In Exhibit 2 we show the results of our meta-analyses calculated with the standard meta-analytic formulas described in Appendix 2. In the last column in Exhibit 2, however, we list "Adjusted Effect Sizes" that we actually use in our benefit-cost analysis of each of the programs we review. These adjusted effect sizes, which are derived from the unadjusted results, are always smaller than or equal to the unadjusted effect sizes we report in the other columns in Exhibit 2.

In Appendix 3, we describe our rationale for making these downward adjustments. In particular, we make three types of adjustments that we believe are necessary to better estimate the results that we think each program is likely to actually achieve in real-world settings. We make adjustments for: a) the methodological quality of each of the studies we include in the meta-analyses; b) the relevance or quality of the outcome measure that individual studies use; and c) the degree to which the researcher(s) who conducted a study were invested in the program's design and implementation.

3a. Methodological Quality. Not all research is of equal quality, and this, we believe, greatly influences the confidence that can be placed in the results from a study. Some studies are well designed and implemented, and the results can be viewed as accurate representations of whether the program itself worked. Other studies are not designed as well, and less confidence can be placed in any reported differences. In particular, studies of inferior research design cannot completely control for sample selection bias or other unobserved threats to the validity of reported research results. This does not mean that results from these studies are of no value, but it does mean that less confidence can be placed in any cause-and-effect conclusions drawn from the results.

To account for the differences in the quality of research designs, we use a 5-point scale as a way to adjust the reported results. The scale is based closely on the 5-point scale developed by researchers at the University of Maryland.25 On this 5-point scale, a rating of "5" reflects an evaluation in which the most confidence can be placed. As the evaluation ranking gets lower, less confidence can be placed in any reported differences (or lack of differences) between the program and comparison or control groups. On the 5-point scale, as interpreted by the Institute, each study is rated with the following numerical ratings.

•  A "5" is assigned to an evaluation with well-implemented random assignment of subjects to a treatment group and a control group that does not receive the treatment/program. A good random assignment study should also indicate how well the random assignment actually occurred by reporting values for pre-existing characteristics for the program and control groups.

•  A "4" is assigned to a study that employs a rigorous quasi-experimental research design with a program and matched comparison group, controlling with statistical methods for self-selection bias that might otherwise influence outcomes. These quasi-experimental methods may include estimates made with a convincing instrumental variables modeling approach, or a Heckman approach to modeling self-selection.26 A level 4 study may also be used to "downgrade" an experimental random assignment design that had problems in implementation, perhaps with significant attrition rates.

•  A "3" indicates a non-experimental evaluation where the program and comparison groups were reasonably well matched on pre-existing differences in key variables. There must be evidence presented in the evaluation that indicates few, if any, significant differences were observed in these salient pre-existing variables. Alternatively, if an evaluation employs sound multivariate statistical techniques (e.g., logistic regression) to control for pre-existing differences, and if the analysis is successfully completed, then a study with some differences in pre-existing variables can qualify as a level 3.

•  A "2" involves a study with a program and matched comparison group where the two groups lack comparability on pre-existing variables and no attempt was made to control for these differences in the study.

•  A "1" involves a study where no comparison group is utilized. Instead, the relationship between a program and an outcome, i.e., recidivism, is analyzed before and after the program.

We do not use the results from program evaluations rated as a "1" on this scale, because they do not include a comparison group and we believe that there is no context to judge program effectiveness. We also regard evaluations with a rating of "2" as highly problematic and, as a result, we do not consider their findings in the calculations of effect. In this study, we only consider evaluations that rate at least a 3 on this 5-point scale.

25 L. W. Sherman, D. Gottfredson, D. MacKenzie, J. Eck, P. Reuter, and S. Bushway (1998). Preventing crime: What works, what doesn't, what's promising. Prepared for the National Institute of Justice. Department of Criminology and Criminal Justice, University of Maryland. Chapter 2.
An explicit adjustment factor is assigned to the results of individual effect sizes based on the Institute's judgment concerning research design quality. We believe this adjustment is critical and is the only practical way to combine the results of a high quality study (i.e., a level 5 study) with those of lesser design quality. The specific adjustments made for these studies depend on the topic area being considered. In some areas, such as criminal justice program evaluations, there is strong evidence that random assignment studies (i.e., level 5 studies) have, on average, smaller effect sizes than weaker-designed studies.27 Thus, for the typical criminal justice evaluation, we use the following "default" adjustments to account for studies of different research design quality:
•  A level 5 study carries a factor of 1.0 (that is, there is no discounting of the study's evaluation outcomes).

•  A level 4 study carries a factor of .75 (effect sizes discounted by 25 percent).

•  A level 3 study carries a factor of .50 (effect sizes discounted by 50 percent).

•  We do not include level 2 and level 1 studies in our analyses.
These factors are subjective to a degree; they are based
on the Institute’s general impressions of the confidence
that can be placed in the predictive power of criminal
justice studies of different quality.
The effect of the adjustment is to multiply the effect size
for any study, ES'm, in equation (A3) by the appropriate
research design factor. For example, if a study has an
effect size of -.20 and it is deemed a level 4 study, then
the -.20 effect size would be multiplied by .75 to produce
a -.15 adjusted effect size for use in the benefit-cost
analysis.
3b. Adjusting Effect Sizes for Relevance or Quality of the
Outcome Measure. As noted in Appendix 1, our focus in
this analysis is whether adult corrections programs reduce
new criminal activity. We prefer measures such as arrests or
convictions and avoid measures such as technical violations
of parole or probation, since these may or may not be related
to the commission of new crimes. In addition, we require that
all studies have at least a six-month follow-up period. For studies with a follow-up period of more than six but less than 12 months, and for studies that reported only weak measures of new criminal activity, we reduced effect sizes by 25 percent; that is, the effect size for any study with a short follow-up or a weak outcome measure is multiplied by .75.

3c. Adjusting Effect Sizes for Research Involvement in
the Program’s Design and Implementation. The purpose
of the Institute’s work is to identify and evaluate programs
that can make cost-beneficial improvements to Washington’s
actual service delivery system. There is some evidence that
programs that are closely controlled by researchers or
program developers have better results than those that
operate in “real world” administrative structures.28 In our own evaluation of a real-world implementation of a research-based juvenile justice program in Washington, we found that the actual results were considerably lower than the results obtained when the intervention was conducted by the originators of the program.29 Therefore, we make an adjustment to effect sizes ESm to reflect this distinction. As a parameter for all studies deemed not to be “real world” trials, the Institute discounts ES'm by .5, although this can be modified on a study-by-study basis.
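Taken together, sections 3a through 3c amount to a chain of multiplicative discounts applied to each study’s effect size. As a rough sketch only, with hypothetical parameter names rather than the Institute’s code, the combined adjustment could look like this:

```python
# Illustrative sketch of the combined adjustments in 3a-3c (assumed names,
# not WSIPP's code). The three discounts are applied multiplicatively.

def combined_adjusted_effect_size(es: float,
                                  design_level: int,
                                  short_followup_or_weak_measure: bool,
                                  real_world_trial: bool) -> float:
    # 3a: research design quality (level 1 and 2 studies excluded upstream)
    design_factors = {5: 1.00, 4: 0.75, 3: 0.50}
    es *= design_factors[design_level]
    # 3b: 25 percent discount for 6-12 month follow-ups or weak outcome measures
    if short_followup_or_weak_measure:
        es *= 0.75
    # 3c: 50 percent discount when the program was run by researchers or its
    # developers rather than as a routine "real world" implementation
    if not real_world_trial:
        es *= 0.5
    return es

# Example: a level 3 study with a nine-month follow-up, delivered by the
# program's developers: -.20 x .50 x .75 x .50 = -.0375
print(round(combined_adjusted_effect_size(-0.20, 3, True, False), 4))  # -0.0375
```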

Appendix 4: Meta-Analytic Results—Estimated
Effect Sizes and Citations to Studies Used in the
Analyses
Exhibit 2 provides technical meta-analytic results for the
effect sizes computed for these groupings of programs,
including the results of the adjustments described above.
Exhibit 3 lists the citations for all the studies used in the
meta-analyses, arranged by program area.

26 For a discussion of these methods, see W. Rhodes, B. Pelissier, G. Gaes, W. Saylor, S. Camp, and S. Wallace (2001). Alternative solutions to the problem of selection bias in an analysis of federal residential drug treatment programs. Evaluation Review, 25(3): 331-369.

27 M. W. Lipsey (2003). Those confounded moderators in meta-analysis: Good, bad, and ugly. The Annals of the American Academy of Political and Social Science, 587(1): 69-81. Lipsey found that, for juvenile delinquency evaluations, random assignment studies produced effect sizes only 56 percent as large as nonrandom assignment studies.

28 Ibid. Lipsey found that, for juvenile delinquency evaluations, programs in routine practice (i.e., “real world” programs) produced effect sizes only 61 percent as large as research/demonstration projects. See also: A. Petrosino & H. Soydan (2005). The impact of program developers as evaluators on criminal recidivism: Results from meta-analyses of experimental and quasi-experimental research. Journal of Experimental Criminology, 1(4): 435-450.

29 R. Barnoski (2004). Outcome evaluation of Washington State's research-based programs for juvenile offenders. Olympia: Washington State Institute for Public Policy, available at <http://www.wsipp.wa.gov/rptfiles/04-01-1201.pdf>.

Exhibit 2
Estimated Effect Sizes on Crime Outcomes
(A Negative Effect Size Indicates the Program Achieves Less Crime)
Programs listed in italics require, in our judgment, additional research before it can be concluded that they do or do not reduce recidivism.

Column key: number of studies included in the review (total number of subjects in the treatment groups in the studies in parentheses); fixed effects model weighted mean effect size (ES and p-value); homogeneity test (p-value); random effects model weighted mean effect size (ES and p-value); adjusted effect size used in the benefit-cost analysis (estimated effect after downward adjustments for the methodological quality of the evidence, outcome measurement relevance, and researcher involvement). Meta-analytic results are shown before applying Institute adjustments, except for the final column.

Adult Offenders

Program | Studies (subjects) | Fixed ES | p | Homogeneity p | Random ES | p | Adjusted ES

Programs for Drug-Involved Offenders
Adult drug courts | 56 (18957) | -.160 | .000 | .000 | -.183 | .000 | -.094
In-prison therapeutic communities with community aftercare | 6 (1989) | -.152 | .000 | .735 | na | na | -.077
In-prison therapeutic communities without community aftercare | 7 (1582) | -.119 | .001 | .079 | na | na | -.059
Cognitive-behavioral therapy in prison | 8 (3788) | -.130 | .000 | .905 | na | na | -.077
Case management in the community | 12 (2572) | -.046 | .114 | .000 | -.039 | .480 | .000
Drug treatment in the community | 5 (54334) | -.137 | .000 | .000 | -.221 | .007 | -.109
Drug treatment in jail | 9 (1436) | -.110 | .008 | .025 | -.106 | .094 | -.052

Programs for Mentally Ill and Co-Occurring Offenders
Jail diversion (pre & post booking programs) | 11 (1243) | .060 | .141 | .682 | na | na | .000
Therapeutic community programs | 2 (145) | -.361 | .004 | .542 | na | na | -.230

Treatment Programs for General Offenders
Cognitive-behavioral treatment for the general population | 25 (6546) | -.147 | .000 | .000 | -.164 | .000 | -.081
Faith-based programs | 5 (630) | -.015 | .767 | .043 | -.028 | .728 | .000

Programs for Domestic Violence Offenders
Education/cognitive-behavioral treatment | 9 (1254) | -.025 | .523 | .120 | na | na | .000
Domestic violence courts | 2 (327) | -.086 | .309 | .009 | -.013 | .956 | .000

Programs for Sex Offenders
Psychotherapy for sex offenders | 3 (313) | .134 | .179 | .038 | .027 | .892 | .000
Cognitive-behavioral treatment in prison | 5 (894) | -.144 | .005 | .173 | na | na | -.087
Cognitive-behavioral treatment in the community | 6 (359) | -.391 | .000 | .438 | na | na | -.195
Cognitive-behavioral treatment in prison (sex offense outcomes) | 4 (705) | -.119 | .027 | .080 | na | na | -.069
Cognitive-behavioral treatment in the community (sex offense outcomes) | 5 (262) | -.357 | .001 | .846 | na | na | -.177
Intensive supervision of sex offenders in the community | 4 (392) | .207 | .003 | .000 | .202 | .359 | .000
Behavioral therapy for sex offenders | 2 (130) | -.190 | .126 | .635 | na | na | .000
Mixed treatment of sex offenders in the community | 2 (724) | -.176 | .001 | .015 | -.184 | .169 | .000
Circles of Support & Accountability (faith-based supervision of sex offenders) | 1 (60) | -.388 | .035 | na | na | na | -.193
Medical treatment of sex offenders | 1 (99) | -.372 | .060 | na | na | na | -.185

Intermediate Sanctions
Intensive supervision: surveillance-oriented approaches | 24 (2699) | -.033 | .244 | .146 | na | na | .000
Intensive supervision: treatment-oriented approaches | 10 (2156) | -.287 | .000 | .000 | -.291 | .041 | -.190
Regular supervision compared to no supervision | 1 (22016) | -.010 | .591 | na | na | na | .000
Day fines (compared to standard probation) | 1 (191) | -.084 | .411 | na | na | na | .000
Adult boot camps | 22 (5910) | -.030 | .103 | .000 | -.017 | .632 | .000
Electronic monitoring | 12 (2175) | .025 | .411 | .025 | .015 | .765 | .000
Restorative justice programs for lower risk adult offenders | 6 (783) | -.077 | .130 | .013 | -.125 | .165 | .000

Work and Education Programs for General Offenders
Correctional industries programs in prison | 4 (7178) | -.119 | .000 | .174 | na | na | -.077
Basic adult education programs in prison | 7 (2399) | -.094 | .001 | .006 | -.114 | .034 | -.050
Employment training & job assistance programs in the community | 16 (9217) | -.047 | .003 | .017 | -.061 | .021 | -.047
Work release programs from prison | 4 (621) | -.122 | .045 | .285 | na | na | -.055
Vocational education in prison | 3 (1950) | -.189 | .000 | .868 | na | na | -.124

Notes to the Table:
Appendices 1, 2, and 3 describe the meta-analytic methods and decision criteria used to produce these estimates. Briefly, to be included in this review: 1) a study had to be published
in English between 1970 and 2005; 2) the study could be published in any format—peer-reviewed journals, government reports, or other unpublished results; 3) the study had to have
a randomly-assigned or demonstrably well-matched comparison group; 4) the study had to have intent-to-treat groups that included both completers and program dropouts, or
sufficient information that the combined effects could be tallied; 5) the study had to provide sufficient information to code effect sizes; and 6) the study had to have at least a six-month
follow-up period and include a measure of criminal recidivism as an outcome.

Exhibit 3

Citations to the Studies Used in the Meta-Analyses
(Some studies contributed independent effect sizes from more than one location)

Program Grouping / Study

Adult Boot Camps
Austin, J., Jones, M., & Bolyard, M. (1993). Assessing the impact of a county operated boot camp: Evaluation of the Los Angeles County
regimented inmate diversion program. San Francisco: National Council on Crime and Delinquency.
Burns, J. C., & Vito, G. F. (1995). An impact analysis of the Alabama boot camp program. Federal Probation, 59(1): 63-67.
Camp, D. A., & Sandhu, H. S. (1995). Evaluation of female offender regimented treatment program (FORT). Journal of the Oklahoma
Criminal Justice Research Consortium, 2: 50-77.
Colorado Department of Corrections. (1993). Colorado regimented inmate training program: A legislative report.
Farrington, D. P., Ditchfield, J., Hancock, G., Howard, P., Jolliffe, D., Livingston, M. S., & Painter, K. (2002). Evaluation of two intensive
regimes for young offenders. Home Office Research Study 239. London, UK: Home Office
Gransky, L. A. & Jones, R. J. (1995). Evaluation of the post-release status of substance abuse program participants: The impact
incarceration program at Dixon Springs and the Gateway substance abuse program at Dwight Correctional Center. Chicago: Illinois
Criminal Justice Authority Report.
Harer, M. D., & Klein-Saffran, J. (1996). Lewisburg ICC evaluation. Washington DC: Bureau of Prisons, Office of Research and
Evaluation, memo.
Jones, M., & Ross, D. L. (1997). Is less better? Boot camp, regular probation and rearrest in North Carolina. American Journal of
Criminal Justice, 21(2): 147-161.
Kempinen, C. A., & Kurlychek, M. C. (2003). An outcome evaluation of Pennsylvania's boot camp: Does rehabilitative programming
within a disciplinary setting reduce recidivism? Crime and Delinquency, 49(4): 581-602.
MacKenzie, D. L. & Souryal, C. (1994). Multisite evaluation of shock incarceration: Executive summary. Washington, DC: U.S.
Department of Justice/NIJ.
Smith, R. P. (1998). Evaluation of the work ethic camp. Olympia: Washington State Department of Corrections.
Stinchcomb, J. B., & Terry, W. C. (2001). Predicting the likelihood of rearrest among shock incarceration graduates: Moving beyond
another nail in the boot camp coffin. Crime and Delinquency, 47(2): 221-242.
Wright, D. T., & Mays, G. L. (1998). Correctional boot camps, attitudes, and recidivism: The Oklahoma experience. Journal of Offender
Rehabilitation, 28(1/2): 71-87.
Adult Drug Courts
Barnoski, R., & Aos, S. (2003). Washington State's drug courts for adult defendants: Outcome evaluation and cost-benefit analysis
(Document No. 03-03-1201). Olympia: Washington State Institute for Public Policy.
Bavon, A. (2001). The effect of the Tarrant County drug court project on recidivism. Evaluation and Program Planning, 24: 13–24.
Bell, M. M. (1998). King County drug court evaluation: Final report. Seattle, WA: M. M. Bell, Inc.
Breckenridge, J. F., Winfree, Jr., L. T., Maupin, J. R., & Clason, D. L. (2000). Drunk drivers, DWI ‘drug court’ treatment, and recidivism:
Who fails? Justice Research and Policy, 2(1): 87-105.
Brewster, M. P. (2001). An evaluation of the Chester County (PA) drug court program. Journal of Drug Issues, 31(1): 177-206.
Carey, S. M., & Finigan, M. W. (2004). A detailed cost-analysis in a mature drug court setting: A cost-benefit evaluation of the Multnomah
County drug court. Journal of Contemporary Criminal Justice, 20(3): 315-338.
Craddock, A. (2002). North Carolina drug treatment court evaluation: Final report. Raleigh: North Carolina Court System.
Crumpton, D., Brekhus, J., Weller, J., & Finigan, M. (2003). Cost analysis of Baltimore City, Maryland drug treatment court. Portland, OR:
NPC Research, Inc.
Deschenes, E. P., Cresswell, L., Emami, V., Moreno, K., Klein, Z., & Condon, C. (2001). Success of drug courts: Process and outcome
evaluations in Orange County, California, final report. Submitted to the Superior Court of Orange County, CA.
Ericson, R., Welter, S., & Johnson, T. L. (1999). Evaluation of the Hennepin County drug court. Minneapolis: Minnesota Citizens Council
on Crime and Justice.
Spokane County Drug Court. (1999). Evaluation: Spokane County drug court program. Spokane, WA: Spokane County Drug Court.
Fielding, J. E., Tye, G., Ogawa, P. L., Imam, I. J., & Long, A. M. (2002). Los Angeles County drug court programs: Initial results. Journal
of Substance Abuse Treatment, 23(3): 217-224.
Finigan, M. W. (1998). An outcome program evaluation of the Multnomah County S.T.O.P. drug diversion program. Portland, OR: NPC
Research, Inc.
Godley, M. D., Dennis, M. L., Funk, R., Siekmann, M., & Weisheit, R. (1998). An evaluation of the Madison County assessment and
treatment alternative court. Chicago: Illinois Criminal Justice Information Authority.
Goldkamp, J. S. & Weiland, D. (1993). Assessing the impact of Dade County's felony drug court. Final report. Philadelphia: Crime and
Justice Research Institute.
Goldkamp, J. S., Weiland, D., & Moore, J. (2001). The Philadelphia treatment court, its development and impact: The second phase
(1998-2000). Philadelphia: Crime and Justice Research Institute.
Goldkamp, J. S., White, M. D., & Robinson, J. B. (2001). Do drug courts work? Getting inside the drug court black box. Journal of Drug
Issues, 31(1): 27-72.
Gottfredson, D. C., Najaka, S. S., & Kearley, B. (2002 November). A randomized study of the Baltimore City drug treatment court:
Results from the three-year follow-up. Paper presented at the annual meeting of the American Society of Criminology, Chicago.
Gottfredson, D. C., Coblentz, K., & Harmon, M. A. (1997). A short-term outcome evaluation of the Baltimore City drug treatment court
program. Perspectives, Winter: 33–38.
Granfield, R., Eby, C., & Brewster, T. (1998). An examination of the Denver drug court: The impact of a treatment-oriented drug-offender
system. Law & Policy, 20: 183-202.
Harrell, A., Roman, J., & Sack, E. (2001). Drug court services for female offenders, 1996–1999: Evaluation of the Brooklyn treatment
court. Washington, DC: Urban Institute.
Johnson, G. D., Formichella, C. M., & Bowers D. J. (1998). Do drug courts work? An outcome evaluation of a promising program. Journal
of Applied Sociology, 15(1): 44-62.
Latessa, E. J., Shaffer, D. K., & Lowenkamp C. (2002). Outcome evaluation of Ohio’s drug court efforts: Final report. Cincinnati: Center
for Criminal Justice Research, Division of Criminal Justice, University of Cincinnati.
Listwan, S. J., & Latessa, E. J. (2003). The Kootenai and Ada County drug courts: Outcome evaluation findings, final report. Cincinnati:
Center for Criminal Justice Research, University of Cincinnati.
Listwan, S. J., Shaffer, D. K., & Latessa, E. J. (2001). The Akron municipal drug court: Outcome evaluation findings. Cincinnati: Center
for Criminal Justice Research, University of Cincinnati.
Listwan, S. J., Sundt, J. L., Holsinger, A. M., & Latessa, E. J. (2003). The effect of drug court programming on recidivism: The Cincinnati
experience. Crime and Delinquency, 49(3): 389-411.
Listwan. S. J., Shaffer, D. K., & Latessa, E. J. (2001). The Erie County drug court: Outcome evaluation findings. Cincinnati: Center for
Criminal Justice Research, University of Cincinnati.
Logan, T., Hoyt, W., & Leukefeld, C. (2001). Kentucky drug court outcome evaluation: Behaviors, costs, and avoided costs to society.
Lexington: Center on Drug and Alcohol Research, University of Kentucky.

Adult Drug Courts, continued

Martin, T. J., Spohn, C. C., Piper, R. K., & Frenzel-Davis, E. (2001). Phase III Douglas County drug court evaluation: Final report.
Washington, DC: Institute for Social and Economic Development.
Martinez, A. I., & Eisenberg, M. (2003). Initial process and outcome evaluation of drug courts in Texas. Austin: Criminal Justice Policy
Council.
McNeece, C. A. & Byers, J. B. (1995). Hillsborough County drug court: Two-year (1995) follow-up study. Tallahassee: Institute for
Health and Human Services Research, School of Social Work, Florida State University.
Miethe, T. D., Lu, H., & Reese, E. (2000). Reintegrative shaming and recidivism risks in drug court: Explanations for some unexpected
findings. Crime and Delinquency, 46(4): 522-541.
Peters, R. H. & Murrin, M. R. (2000). Effectiveness of treatment-based drug courts in reducing criminal recidivism. Criminal Justice and
Behavior, 27(1): 72-96.
Rempel, M., Fox-Kralstein, D., Cissner, A., Cohen, R., Labriola, M., Farole, D., Bader, A., & Magnani, M. (2003). The New York State
adult drug court evaluation: Policies, participants and impacts. New York, NY: Center for Court Innovation.
Shanahan, M., Lancsar, E., Haas, M., Lind, B., Weatherburn, D., & Chen, S. (2004). Cost-effectiveness analysis of the New South
Wales adult drug court program. Evaluation Review, 28(1): 3-27.
Spohn, C., Piper, R. K., Martin, T., & Frenzel, E. D. (2001). Drug courts and recidivism: The results of an evaluation using two
comparison groups and multiple indicators of recidivism. Journal of Drug Issues, 31(1): 149-176.
Stageberg, P., Wilson, B., & Moore, R. G. (2001). Final report on the Polk County adult drug court. Iowa Department of Human Rights,
Division of Criminal and Juvenile Justice Planning.
Tjaden, C. D., Diana, A., Feldman, D., Dietrich, W., & Jackson, K. (2002). Denver drug court: Second year report, outcome evaluation.
Vail, CO: Toucan Research and Computer Solutions.
Truitt, L., Rhodes, W. M., Seeherman, A. M., Carrigan, K., & Finn, P. (2000). Phase I: Case studies and impact evaluations of
Escambia County, Florida and Jackson County, Missouri drug courts. Cambridge, MA: Abt Associates. Some results also reported in
Belenko, S. (2001). Research on drug courts: A critical review, 2001 update. New York: The National Center on Addiction and
Substance Abuse at Columbia University.
Turner, S., Greenwood, P., Fain, T., & Deschenes, E. (1999). Perceptions of drug court: How offenders view ease of program
completion, strengths and weaknesses, and the impact on their lives. National Drug Court Institute Review, 2(1): 61-86.
Utah Substance Abuse and Anti-Violence Coordinating Council. (2001). Salt Lake County drug court outcome evaluation. Salt Lake
City: Utah Substance Abuse and Anti-Violence Coordinating Council.
Vito, G. F., & Tewksbury, R. A. (1998). The impact of treatment: The Jefferson County (Kentucky) drug court program. Federal
Probation, 62(2): 46–51.
Wolfe E., Guydish J., & Termondt J. (2002). A drug court outcome evaluation comparing arrests in a two year follow-up period. Journal
of Drug Issues, 32(4): 1155-1171.
Basic Adult Education Programs in Prison
Drake, E. (2006). Correctional education and its impacts on post-prison employment patterns and recidivism. Draft report. Olympia:
Washington State Institute for Public Policy and Washington State Department of Corrections.
Harer, M. D. (1995). Prison education program participation and recidivism: A test of the normalization hypotheses. Washington, DC:
Federal Bureau of Prisons, Office of Research and Evaluation.
Mitchell, O. (2002). Statistical analysis of the three state CEA data. University of Maryland. Unpublished study.
Piehl, A. M. (1994). Learning while doing time. Kennedy School Working Paper #R94-25. Cambridge, MA: John F. Kennedy School of
Government, Harvard University.
Walsh, A. (1985). An evaluation of the effects of adult basic education on rearrest rates among probationers. Journal of Offender
Counseling, Services, and Rehabilitation, 9(4): 69-76.
Behavioral Treatment for Sex Offenders
Rice, M. E., Quinsey, V. L., & Harris, G. T. (1991). Sexual recidivism among child molesters released from a maximum security
psychiatric institution. Journal of Consulting and Clinical Psychology, 59: 381-386.
Davidson, P. R. (1984 March). Behavioral treatment for incarcerated sex offenders: Post-release outcome. Paper presented at 1984
Conference on Sex Offender Assessment and Treatment, Kingston, Ontario, Canada.
Case Management in the Community for Drug Involved Offenders
Anglin, M. D., Longshore, D., & Turner, S. (1999). Treatment alternatives to street crime: An evaluation of five programs. Criminal
Justice and Behavior, 26(2): 168-195.


California Department of Corrections. (1996). Parolee partnership program: A parole outcome evaluation. Sacramento: California
Department of Corrections.
Hanlon, T. E., Nurco, D. N., Bateman, R. W., & O'Grady, K. E. (1999). The relative effects of three approaches to the parole
supervision of narcotic addicts and cocaine abusers. The Prison Journal, 79(2): 163-181.
Longshore, D., Turner, S., & Fain. T. (2005) Effects of case management on parolee misconduct. Criminal Justice and Behavior, 32(2):
205-222.
Owens, S., Klebe, K., Arens, S., Durham, R., Hughes, J., Moor, C., O'Keefe, M., Phillips, J., Sarno, J., & Stommel, J. (1997). The
effectiveness of Colorado's TASC programs. Journal of Offender Rehabilitation, 26: 161-176.
Rhodes, W., & Gross, M. (1997). Case management reduces drug use and criminality among drug-involved arrestees: An experimental
study of an HIV prevention intervention. Final report to the National Institute of Justice/National Institute on Drug Abuse. Cambridge,
MA: Abt Associates Inc.
Circles of Support and Accountability (faith-based supervision of sex offenders)
Wilson, R. J., Picheca, J. E., & Prinzo, M. (2005). Circles of support & accountability: An evaluation of the pilot project in South Central
Ontario. Draft report to Correctional Service of Canada, R-168, e-mailed to M. Miller Oct 20, 2005.
Cognitive-Behavioral Therapy for General Population
Armstrong, T. (2003). The effect of moral reconation therapy on the recidivism of youthful offenders. Criminal Justice and Behavior,
30(6): 668-687.
Burnett, W. (1997). Treating post-incarcerated offenders with moral reconation therapy: A one-year recidivism study. Cognitive
Behavioral Treatment Review, 6(3/4): 2.
Culver, H. E. (1993). Intentional skill development as an intervention tool. (Doctoral dissertation. University of Massachusetts, 1993,
UMI No. 9329590).
Falshaw, L., Friendship, C., Travers, R., & Nugent, F. (2004). Searching for ‘what works': HM Prison Service accredited cognitive skills
programmes. British Journal of Forensic Practice, 6(2): 3-13.
Friendship, C., Blud, L., Erikson, M., Travers, R., Thornton, D. (2003). Cognitive-behavioural treatment for imprisoned offenders: An
evaluation of HM Prison Service's cognitive skills programmes. Legal and Criminological Psychology, 8: 103-114.
Golden, L. (2002). Evaluation of the efficacy of a cognitive behavioral program for offenders on probation: Thinking for a change.
Dallas: University of Texas Southwestern Medical Center at Dallas. Retrieved on December 22, 2005 from
http://www.nicic.org/pubs/2002/018190.pdf.
Grandberry, G. (1998). Moral reconation therapy evaluation, final report. Olympia: Washington State Department of Corrections.
Henning, K. R., & Frueh, B. C. (1996). Cognitive-behavioral treatment of incarcerated offenders: An evaluation of the Vermont
Department of Corrections' cognitive self-change program. Criminal Justice and Behavior, 23(4): 523-541.

Cognitive-Behavioral Therapy for General Population, continued

Hubbard, D. J., & Latessa, E. J. (2004). Evaluation of cognitive-behavioral programs for offenders: A look at outcome and responsivity
in five treatment programs, final report. Cincinnati: Division of Criminal Justice, University of Cincinnati.
Johnson, G. & Hunter, R. M. (1995). Evaluation of the specialized drug offender program. In R. R Ross & R. D. Ross (Eds.), Thinking
straight: The reasoning and rehabilitation program for delinquency prevention and offender rehabilitation (pp. 214-234). Ottawa,
Canada: Air Training and Publications.
Larson, K. A. (1989). Problem-solving training and parole adjustment in high-risk young adult offenders. The Yearbook of Correctional
Education (1989):279-299.
Little, G. L., Robinson, K. D., & Burnette, K. D. (1993). Cognitive behavioral treatment of felony drug offenders: A five-year recidivism
report. Psychological Reports, 73: 1089-1090.
Little, G. L., Robinson, K. D., & Burnette, K. D. (1993). 5-year recidivism results on MRT-treated DWI offenders released. Cognitive
Behavioral Treatment Review, 2(4): 2.
Little, G. L., Robinson, K. D., Burnette, K. D., & Swan, E. S. (1998). Nine-year reincarceration study on MRT-treated felony offenders:
Treated offenders show significantly lower reincarceration. Cognitive Behavioral Treatment Review, 7(1): 2-3.
Ortmann, R. (2000). The effectiveness of a social therapy in prison: A randomized design. Crime and Delinquency, 46(2): 214-232.
Porporino, F. J. & Robinson, D. (1995). An evaluation of the reasoning and rehabilitation program with Canadian federal offenders. In
R. R Ross & R. D. Ross (Eds.), Thinking straight: The reasoning and rehabilitation program for delinquency prevention and offender
rehabilitation (pp. 214-234). Ottawa: Air Training and Publications.
Raynor, P. & Vanstone, M. (1996). Reasoning and rehabilitation in Britain: The results of the straight thinking on probation (STOP)
programme. International Journal of Offender Therapy and Comparative Criminology, 40(4): 272-284.
Robinson, D. (1995). The impact of cognitive skills training on post-release recidivism among Canadian federal offenders. Ottawa,
Ontario: Correctional Research and Development, Correctional Service Canada.
Ross, R. R., Fabiano, E. A., & Ewles, C. D. (1988). Reasoning and rehabilitation. International Journal of Offender Therapy and
Comparative Criminology, 32: 29-36.
Van Voorhis, P., Spruance, L. M., Ritchey, P. N., Listwan, S. J., & Seabrook, R. (2004). The Georgia cognitive skills experiment: A
replication of reasoning and rehabilitation. Criminal Justice and Behavior, 31(3): 282-305.
Van Voorhis, P., Spruance, L. M., Ritchey, P. N, Johnson-Listwan, S., Seabrook, R., & Pealer, J. (2002). The Georgia cognitive skills
experiment outcome evaluation phase II. Cincinnati, OH: University of Cincinnati, Center for Criminal Justice Research. Retrieved
December 22, 2005, from http://www.uc.ed/criminaljustice/ProjectReports/Georgia_Phase_II_final.report.pdf


Wilkinson, J. (2005). Evaluating evidence for the effectiveness of the reasoning and rehabilitation programme. The Howard Journal of
Criminal Justice, 44(1): 70-85.
Yessine, A. K., & Kroner, D. G. (2004). Altering antisocial attitudes among federal male offenders on release: A preliminary analysis of
the counter-point community program (Research Report No. R-152). Ottawa, Ontario: Correctional Research and Development,
Correctional Service Canada.
Cognitive-Behavioral Therapy in Prison for Drug Involved Offenders
Aos, S., Phipps, P., & Barnoski, R. (2004). Washington's drug offender sentencing alternative: An evaluation of benefits and costs.
Olympia: Washington State Institute for Public Policy.
Daley M., Love C. T., Shepard D. S., Petersen C. B., White K. L., & Hall, F. B. (2004). Cost-effectiveness of Connecticut's in-prison
substance abuse treatment. Journal of Offender Rehabilitation, 39(3): 69-92.
Hall, E. A., Prendergast, M. L., Wellisch, J., Patten, M., & Cao, Y. (2004). Treating drug-abusing women prisoners: An outcomes
evaluation of the forever free program. The Prison Journal, 84(1): 81-105.
Hanson, G. (2000). Pine Lodge intensive inpatient treatment program. Olympia: Washington State Department of Corrections.


Pelissier, B., Rhodes, W., Saylor, W., Gaes, G., Camp, S. D., Vanyur, S. D., & Wallace, S. (2000). TRIAD drug treatment evaluation
project: Final report of three-year outcomes, Part 1. Washington, DC: Federal Bureau of Prisons, Office of Research and Evaluation.
Retrieved December 28, 2005, from http://www.bop.gov/news/PDFs/TRIAD/TRIAD_pref.pdf
Porporino, F. J., Robinson, D., Millson, B., & Weekes, J. R. (2002). An outcome evaluation of prison-based treatment programming for
substance users. Substance Use & Misuse, 37(8-10): 1047-1077.
Washington State Department of Corrections. (1998). Substance abuse treatment program evaluation of outcomes and management
report. Olympia: Washington State Department of Corrections, Division of Management and Budget, Planning and Research Section
Wexler, H. K., Falkin, G. P., Lipton, D. S., & Rosenblum, A. B. (1992). Outcome evaluation of a prison therapeutic community for
substance abuse treatment. In C. G. Leukefeld & F. M. Tims (Eds.), Drug abuse treatment in prisons and jails. NIDA research
Monograph 118, Rockville, MD: NIDA (pp. 156-174).
Cognitive-Behavioral Treatment in Prison for Sex Offenders
Bakker, L., Hudson, S., Wales, D., & Riley, D. (1999). …And there was light: An evaluation of the Kia Marama treatment programme for
New Zealand sex offenders against children. Unpublished report.
Looman, J., Abracen, J., & Nicholaichuk, T. P. (2000). Recidivism among treated sexual offenders and matched controls: Data from the
Regional Treatment Centre (Ontario). Journal of Interpersonal Violence, 15(3): 279-290.
Marques, J. K. (1999). How to answer the question, does sex offender treatment work? Journal of Interpersonal Violence, 14(4): 437-451.
Robinson, D. (1995). The impact of cognitive skills training on post-release recidivism among Canadian federal offenders. Research
Report No. R-41. Ottawa, Ontario: Correctional Research and Development, Correctional Service Canada.
Song, L. & Lieb, R. (1995). Washington state sex offenders: Overview of recidivism studies. Olympia: Washington State Institute for
Public Policy.
Cognitive-Behavioral Treatment in the Community for Sex Offenders
Allam, J. (1999). Sex offender re-conviction: Treated vs. untreated offenders. West Midlands Probation Service Sex Offender
Treatment Programme.
Baird, C., Wagner, D., Decomo, B., & Aleman, T. (1994). Evaluation of the effectiveness of supervision and community rehabilitation
programs in Oregon. Oakland, CA: National Council on Crime and Delinquency.
Marshall, W. L., Eccles, A., & Barbaree, H.E. (1991). The treatment of exhibitionists: A focus on sexual deviance versus cognitive and
relationship features. Behaviour Research and Therapy, 29(2): 129-135.
McGrath, R. J., Hoke, S. E., & Vojtisek, J. E. (1998). Cognitive-behavioral treatment of sex offenders: A treatment comparison and
long-term follow-up study. Criminal Justice and Behavior, 25: 203-225.
Procter, E. (1996). A five-year outcome evaluation of a community-based treatment programme for convicted sexual offenders run by
the probations service. The Journal of Sexual Aggression, 2(1): 3-16.
Correctional Industries Programs in Prison
Drake, E. (2003). Class I impacts: Work during incarceration and its effects on post-prison employment patterns and recidivism.
Olympia: Washington State Department of Corrections.
Saylor, W. G., & Gaes, G. G. (1996). PREP: A study of "rehabilitating" inmates through industrial work participation, and vocational and
apprenticeship training. Washington, DC: U.S. Federal Bureau of Prisons.
Domestic Violence Courts
Newmark, L., Rempel, M., Diffily, K., & Kane, K. M. (2001). Specialized felony domestic violence courts: Lessons on implementations and
impacts from the Kings County experience. Washington DC: Urban Institute.

Domestic Violence Courts, continued

Grover, A. R., MacDonald, J. M., Alpert, G. P., Geary, I. A., Jr. (2003). The Lexington County domestic violence courts: A partnership
and evaluation. Columbia, SC: University of South Carolina. (National Institute of Justice Grant 2000-WT-VX-0015).
Drug Treatment in Jail
Dugan, J. R. & Everett, R. S. (1998). An experimental test of chemical dependency therapy for jail inmates. International Journal of
Offender Therapy and Comparative Criminology, 42(4): 360-368.
Knight, K., Simpson, D. D., & Hiller, M. L. (2003). Outcome assessment of correctional treatment (OACT). Fort Worth: Texas Christian
University, Institute of Behavioral Research. (National Institute of Justice Grant 99-RT-VX-KO27). Retrieved December 27, 2005, from
http://www.ncjrs.org/pdffiles1/nij/grants/199368.pdf
Peters, R. H., Kearns, W. D., Murrin, M. R., Dolente, A. S., & May, R. L. (1993). Examining the effectiveness of in-jail substance abuse
treatment. Journal of Offender Rehabilitation, 19: 1-39.
Taxman, F. S. & Spinner, D. L. (1997). Jail addiction services (JAS) demonstration project in Montgomery County, Maryland: Jail and
community based substance abuse treatment program model. College Park: University of Maryland.
Tunis, S., Austin, J., Morris, M., Hardyman, P. & Bolyard, M. (1996). Evaluation of drug treatment in local corrections. Washington DC:
National Institute of Justice.
Drug Treatment in the Community
Baird, C., Wagner, D., Decomo, B., & Aleman, T. (1994). Evaluation of the effectiveness of supervision and community rehabilitation
programs in Oregon. Oakland, CA: National Council on Crime and Delinquency.
California Department of Corrections. (1997). Los Angeles prison parole network: An evaluation report. Sacramento: State of California.
Hepburn, J. R. (2005). Recidivism among drug offenders following exposure to treatment. Criminal Justice Policy Review, 16(2): 237-259.


Lattimore, P. K., Krebs, C. P., Koetse, W., Lindquist C., & Cowell A. J. 2005. Predicting the effect of substance abuse treatment on
probationer recidivism. Journal of Experimental Criminology, 1(2): 159-189.
Education/Cognitive-Behavioral Treatment for Domestic Violence Offenders
Chen, H., Bersani, C., Myers, S. C., & Denton, R. (1989). Evaluating the effectiveness of a court sponsored abuser treatment program.
Journal of Family Violence, 4(4): 309-322.
Davis, R. C., Taylor, B. G., & Maxwell, C. D. (2000). Does batterer treatment reduce violence? A randomized experiment in Brooklyn.
New York, NY: Victim Services Research. Retrieved December 27, 2005, from http://www.ncjrs.gov/pdffiles1/nij/grants/180772.pdf
Dunford, F. W. (2000). The San Diego Navy experiment: An assessment of interventions for men who assault their wives. Journal of
Consulting and Clinical Psychology, 68(3): 468-476.
Feder, L., & Forde, D. R. (2000). A test of the efficacy of court-mandated counseling for domestic violence offenders: The Broward
experiment. Memphis, TN: University of Memphis. (National Institute of Justice Grant 96-WT-NX-0008).
Gordon, J. A. & Moriarty, L. J. (2003). The effects of domestic violence batterer treatment on domestic violence recidivism: The
Chesterfield County experience. Criminal Justice and Behavior, 30(1): 118-134.
Harrell, A. (1991). Evaluation of court-ordered treatment for domestic violence offenders. Washington, DC: Urban Institute.


Labriola, M., Rempel, M., & Davis, R.C. (2005). Testing the effectiveness of batterer programs and judicial monitoring. Results from a
randomized trial at the Bronx misdemeanor domestic violence court. New York, NY: Center for Court Innovation. (National Institute of
Justice Grant 2001-WT-BX-0506). Draft sent to M. Miller by M. Rempel.
Electronic Monitoring
Baird, C., Wagner, D., Decomo, B., & Aleman, T. (1994). Evaluation of the effectiveness of supervision and community rehabilitation
programs in Oregon. Oakland, CA: National Council on Crime and Delinquency.
Bonta, J., Wallace-Capretta, S., & Rooney, J. (2000). Can electronic monitoring make a difference? An evaluation of three Canadian
programs. Crime and Delinquency, 46(1): 61-75.
Dodgson, K., Goodwin, P., Howard, P., Llewellyn-Thomas, S., Mortimer, E., Russell, N., & Weiner, M. (2001). Electronic monitoring of
released prisoners: An evaluation of the home detention curfew scheme. Home Office Research Study 222. London: Home Office.
Finn, M. A. & Muirhead-Steves, S. (2002). The effectiveness of electronic monitoring with violent male parolees. Justice Quarterly,
19(2): 293-312.
Jolin, A. & Stipak, B. (1992). Drug treatment and electronically monitored home confinement: An evaluation of a community-based
sentencing option. Crime and Delinquency, 38: 158-170.
Jones, M., & Ross, D. L. (1997). Electronic house arrest and boot camp in North Carolina: Comparing recidivism. Criminal Justice
Policy Review, 8(4): 383-404.
Klein-Saffran, J. (1993). Electronic monitoring versus halfway houses: A study of federal offenders. (Doctoral dissertation. University
Of California, Berkeley, 1993, UMI No. 9327445).
Petersilia, J., Turner, S., & Deschenes, E. P. (1992). Intensive supervision programs for drug offenders. In J. M. Byrne, A. J. Lurigio, &
J. Petersilia (Eds.), Smart sentencing: The emergence of intermediate sanctions (pp. 18-37). Newbury Park, CA: Sage.
Petersilia, J. & Turner, S. (1990). Intensive supervision for high-risk probationers: Findings from three California experiments. Santa
Monica, CA: RAND.
Sugg, D., Moore, L., & Howard, P. (2001). Electronic monitoring and offending behaviour—reconviction results for the second year of
trials of curfew orders. Home Office Research Findings 141. London: Home Office.
Employment Training and Job Assistance Programs in the Community
Anderson, D. B. & Schumacker, R. E. (1986). Assessment of job training programs. Journal of Offender Counseling, Services, &
Rehabilitation, (10): 41-49.
Beck, J. (1981). Employment, community treatment center placement and recidivism: A study of released federal offenders. Federal
Probation, (45): 3-8.
Beck, L. (1979). An evaluation of federal community treatment centers. Federal Probation, (43): 36-40.
Berk, R. A., Lenihan, K. J., & Rossi, P. H. (1980). Crime and poverty: Some experimental evidence from ex-offenders. American
Sociological Review, (45): 766-786.
Bloom, H., Orr, L. O., Cave, G., Bell, S. H., Doolittle, F., & Lin, W. (1994). The national JTPA study. Overview: Impacts, benefits and
costs of Title II-A. Cambridge, MA: Abt Associates Inc.
Cave, G., Bos, H., Doolittle, F., & Toussaint, C. (1993). Jobstart: Final report on a program for school dropouts. New York, NY:
Manpower Demonstration and Research Corporation.
Mallar, C. D., & Thornton, C. (1978). Transitional aid for released prisoners: Evidence from the life experiment. The Journal of Human
Resources, XIII(2): 208-236.
Menon, R., Blakely, C., Carmichael, D., & Snow, D. (1995). Making a dent in recidivism rates: Impact of employment on minority ex-offenders. In G. E. Thomas (Ed.), Race and ethnicity in America: Meeting the challenge in the 21st century (pp. 279-293). Washington, DC:
Taylor and Francis. See also, Finn, P. (1998). Texas' Project RIO (re-integration of offenders). Washington, DC: U.S. Department of
Justice.
Milkman, R. H. (1985). Employment services for ex-offenders field test--detailed research results. McLean, VA: Lazar Institute.
Rossman, S., Sridharan, S., Gouvis, C., Buck, J., & Morley, E. (1999). Impact of the opportunity to succeed program for substance-abusing felons: Comprehensive final report. Washington, DC: Urban Institute.
Schochet, P. Z., Burghardt, J., & Glazerman, S. (2001). National job corps study: The impacts of job corps on participants' employment
and related outcomes. Princeton, NJ: Mathematica Policy Research, Inc. (U.S. Department of Labor, Employment and Training
Administration contract No. K-4279-3-00-80-30).

Employment Training and Job Assistance Programs in the Community, continued

Uggen, C. (2000). Work as a turning point in the life course of criminals: A duration model of age, employment, and recidivism.
American Sociological Review, 67(4): 529–546.


Faith-Based Programs for General Offenders
Burnside, J., Adler, J., Loucks, N., & Rose, G. (2001). Kainos community in prisons: Report of an evaluation. RDS OLR 11/01.
Presented to Research Development and Statistics Directorate, Home Office, HM Prison Service England and Wales and Kainos
Community. Retrieved December 27, 2005, from http://www.homeoffice.gov.uk/rds/pdfs/kainos_finalrep.pdf
Johnson, B.R. (2004). Religious programs and recidivism among former inmates in prison fellowship programs: A long-term follow-up
study. Justice Quarterly, 21(2): 329-354.
O'Connor, T., Su, Y., Ryan, P., Parikh, C., & Alexander, E. (1997). Detroit transition of prisoners: Final evaluation report. Center for
Social Research, MD
Trusty, B., & Eisenberg, M. (2003). Initial process and outcome evaluation of the innerchange freedom initiative: The faith-based prison
program in TDCJ. Austin, TX: Criminal Justice Policy Council. Retrieved December 27, 2005, from
http://www.cjpc.state.tx.us/reports/adltrehab/IFIInitiative.pdf
Wilson, L.C., Wilson, C., Drummond, S. R., & Kelso, K. (2005). Promising effects on the reduction of criminal recidivism: An Evaluation
of the Detroit transition of prisoner's faith based initiative. Draft report emailed to Marna Miller by Joe Williams.
In-Prison Therapeutic Communities With Community Aftercare for Drug Involved Offenders
Field, G. (1985). The cornerstone program: A client outcome study. Federal Probation, 49: 50-55.

Inciardi, J. A., Martin S. S., & Butzin, C. A. (2004). Five-year outcomes of therapeutic community treatment of drug-involved offenders
after release from prison. Crime and Delinquency, 50(1): 88-107.
Knight, K., Simpson, D. D., & Hiller, M. L. (1999). Three-year reincarceration outcomes for in-prison therapeutic community treatment in
Texas. The Prison Journal, 79(3): 337-351.
Prendergast, M. L., Hall, E. A., Wexler, H. K., Melnick, G., & Cao, Y. (2004). Amity prison-based therapeutic community: 5-year
outcomes. The Prison Journal, 84(1): 36-60.
Swartz, J. A., Lurigo, A. J., & Slomka, S. A. (1996). The impact of IMPACT: An assessment of the effectiveness of a jail-based
treatment program. Crime and Delinquency, 42(4): 553-573.
Wexler, H. K., Falkin, G. H., Lipton, D. S., & Rosenblum, A. B. (1992). Outcome evaluation of a prison therapeutic community for
substance abuse treatment. Criminal Justice and Behavior, 17(1): 71-92.
In-Prison Therapeutic Communities Without Community Aftercare for Drug Involved Offenders
Belenko, S., Foltz, C., Lang, M. A., & Sun, H. (2004). Recidivism among high-risk drug felons: A longitudinal analysis following
residential treatment. Journal of Offender Rehabilitation, 40(1/2): 105-132.

Gransky, L. A. & Jones, R. J. (1995). Evaluation of the post-release status of substance abuse program participants: The impact
incarceration program at Dixon Springs and the Gateway substance abuse program at Dwight Correctional Center. Chicago: Illinois
Criminal Justice Authority Report.
Klebe, K. J., & O'Keefe, M. (2004). Outcome evaluation of the crossroads to freedom house and peer I therapeutic communities.
Colorado Springs: University of Colorado. (National Institute of Justice Grant 99-RT-VX-K021).
Mosher, C., & Phillips, D. (2002). Program evaluation of the pine lodge pre-release residential therapeutic community for women
offenders in Washington State, final report. Pullman, WA: Washington State University. (National Institute of Justice Grant 99-RT-VX-K001).
Oregon Department of Corrections. (1996). Evaluation of the powder river and turning point alcohol and drug treatment programs.
Salem, OR: Oregon Department of Corrections.
Welsh, W. N. (2003). Evaluation of prison-based therapeutic community drug treatment programs in Pennsylvania. Philadelphia, PA:
Pennsylvania Commission on Crime and Delinquency.
Intensive Supervision of Sex Offenders in the Community
Stalans, L. J., Seng, M., Yarnold, P., Lavery, T., & Swartz, J. (2001). Process and initial evaluation of the Cook County adult probation
department's sex offender program: Final and summary report for the period of June, 1997 to June, 2000. Chicago: Illinois Criminal
Justice Information Authority. Retrieved on December 28, 2005 from
http://www.icjia.state.il.us/public/pdf/researchreports/An%20Implementation_Project%20in%20Cook%20County.pdf
Stalans, L.J., Seng, M., & Yarnold, P.R. (2002). Long-term impact evaluation of specialized sex offender probation programs in Lake,
DuPage and Winnebago Counties. Chicago: Illinois Criminal Justice Information Authority. Retrieved on December 28, 2005, from
http://www.icjia.state.il.us/public/pdf/ResearchReports/Long-termDuPageWinnebago.pdf
Intensive Supervision: Surveillance-Oriented Approaches
Bagdon, W. & Ryan, J. E. (1993). Intensive supervision of offenders on prerelease furlough: An evaluation of the Vermont experience.
Forum on Corrections Research, 5(2). Retrieved on December 28, 2005, from http://www.cscscc.gc.ca/text/pblct/forum/e052/e052j_e.shtml
Brown, K. (1999). Intensive supervision probation: The effects of supervision philosophy on intensive probationer recidivism. (Doctoral
dissertation, University of Cincinnati, 1993). Retrieved on December 28, 2005, from
http://www.uc.edu/criminaljustice/graduate/Dissertations/KBrown.PDF
Byrne, J. M. & Kelly, L. (1989). Restructuring probation as an intermediate sanction: An evaluation of the Massachusetts intensive
probation supervision program, Executive summary. Final Report to the National Institute of Justice, Research Program on the
Punishment and Control of Offenders. Washington, DC: National Institute of Justice.
Deschenes, E. P., Turner, S., & Petersilia, J. (1995). A dual experiment in intensive community supervision: Minnesota's prison
diversion and enhanced supervised release programs. Prison Journal, 75(3): 330-357.
Erwin, B. S., & Bennett, L. A. (1987). New dimensions in probation: Georgia's experience with intensive probation supervision.
Research in Brief. Washington, DC: National Institute of Justice.
Fulton, B., Stichman, A., Latessa, E., & Lawrence, T. (1998). Evaluating the prototypical ISP, Iowa correctional services second judicial
district. Final Report. Cincinnati: Division of Criminal Justice, University of Cincinnati.
Johnson, G. & Hunter, R. M. (1995). Evaluation of the specialized drug offender program. In R. R Ross & R. D. Ross (Eds.), Thinking
straight: The reasoning and rehabilitation program for delinquency prevention and offender rehabilitation (pp. 214-234). Ottawa,
Canada: Air Training and Publications.
Lichtman, C. M. & Smock, S. M. (1981). The effects of social services on probationer recidivism: A field experiment. Journal of
Research in Crime and Delinquency, 18: 81-100.
Pearson, F. S. (1988). Evaluation of New Jersey's intensive supervision program. Crime and Delinquency, 34(4): 437-448.
Petersilia, J., Turner, S., & Deschenes, E. P. (1992). Intensive supervision programs for drug offenders. In J. M. Byrne, A. J. Lurigio, &
J. Petersilia (Eds.), Smart sentencing: The emergence of intermediate sanctions (pp. 18-37). Newbury Park, CA: Sage.
Petersilia, J. & Turner, S. (1990). Intensive supervision for high-risk probationers: Findings from three California experiments. Santa
Monica, CA: RAND.
Smith, L. G. & Akers, R. L. (1993). A comparison of recidivism of Florida's community control and prison: A five-year survival analysis.
Journal of Research in Crime and Delinquency, 30(3): 267-292.
Stichman, A., Fulton, B., Latessa, E., & Lawrence, T. (1998). Evaluating the prototypical ISP, Hartford intensive supervision unit Connecticut
office of adult probation administrative office of the courts. Final Report. Cincinnati: Division of Criminal Justice, University of Cincinnati.
Turner, S., & Petersilia, J. (1992). Focusing on high-risk parolees: Experiment to reduce commitments to the Texas Department of
Corrections. Journal of Research in Crime and Delinquency, 29(1): 34-61.

Intensive Supervision: Treatment-Oriented Approaches

Deschenes, E. P., Turner, S., & Petersilia, J. (1995). A dual experiment in intensive community supervision: Minnesota's prison
diversion and enhanced supervised release programs. Prison Journal, 75(3): 330-357.
Hanley, D. (2002). Risk differentiation and intensive supervision: A meaningful union? (Doctoral dissertation, University of Cincinnati,
2002, UMI No. 3062606).
Harrell, A., Mitchell, O., Hirst, A., Marlow, D., & Merrill, J. C. (2002). Breaking the cycle of drugs and crime: Findings from the
Birmingham BTC demonstration. Criminology and Public Policy, 1(2): 189-216.
Harrell, A., Mitchell, O., Merrill, J. C., & Marlowe, D. B. (2003). Evaluation of breaking the cycle. Washington, DC: Urban Institute.
(National Institute of Justice Grant 97-IJ-CX-0013).
Harrell, A., Roman, J., Bhati, A., & Parthasarathy, B. (2003). The impact evaluation of the Maryland break the cycle initiative.
Washington, DC: Urban Institute.
Paparozzi, M. A. (1994). A comparison of the effectiveness of an intensive parole supervision program with traditional parole
supervision. (Doctoral Dissertation. Rutgers the State University of New Jersey – New Brunswick, 1994, UMI No. 9431121).
Petersilia, J. & Turner, S. (1990). Intensive supervision for high-risk probationers: Findings from three California experiments. Santa
Monica, CA: RAND.
Jail Diversion: Pre- and Post-Booking Programs for MICA Offenders
Broner, N., Lattimore, P. K., Cowell, A. J., & Schlenger, W. E. (2004). Effects of diversion on adults with co-occurring mental illness with
substance use: Outcomes from a national multi-site study. Behavioral Sciences and the Law, (22): 519-541


Christy, A., Poythress, N. G., Boothroyd, R. A., Petrila, J., Mehra, S. (2005). Evaluating the efficiency and community safety goals of
the Broward County Mental Health Court. Behavioral Sciences and the Law, 23(2):227-243.
Cosden, M., Ellens, J., Schnell, J. & Yamini-Diouf, J. (2004). Evaluation of the Santa Barbara County mental health treatment court with
intensive case management. Santa Barbara: University of California.
Steadman, H. J., Cocozza, J. J., & Veysey, B. M. (1999). Comparing outcomes for diverted and nondiverted jail detainees with mental
illnesses. Law and Human Behavior, 23(6): 615-627.
Medical Treatment of Sex Offenders
Wille, R., & Beier, K. M. (1989). Castration in Germany. Annals of Sex Research, 2: 103-133.
Psychotherapy for Sex Offenders
Hanson, R. K., Steffy, R. A., & Gauthier, R. (1993). Long-term recidivism of child molesters. Journal of Consulting and Clinical
Psychology, 61: 646-652.
Nutbrown, V., and Stasiak, E. (1987). A retrospective analysis of O.C.I. cost effectiveness 1977-1981. (Ontario Correctional Institute
Research Monograph No. 2). Brampton, Ontario, Canada: Ontario Ministry of Correctional Services Ontario Correctional Institute.
Romero, J. J. & Williams, L. M. (1983). Group psychotherapy and intensive probation supervision with sex offenders: A comparative
study. Federal Probation, 47: 36-42.
Regular Supervision Compared to No Supervision
Solomon, A. L., Kachnowski, V., & Bhati, A. (2005). Does parole work? Analyzing the impact of postprison supervision on rearrest
outcomes. Washington, DC: Urban Institute.
Restorative Justice Programs for Lower Risk Adult Offenders
Bonta, J., Wallace-Capretta, S., & Rooney, J. (2000). A quasi-experimental evaluation of an intensive rehabilitation supervision
program. Criminal Justice and Behavior, 27(3): 312-329.
Dignan, J. (1990). Repairing the damage: An evaluation of an experimental adult reparation scheme in Kettering, Northamptonshire.
Sheffield, UK: Centre for Criminological and Legal Research, Faculty of Law, University of Sheffield.
Crime and Justice Research Centre Victoria University of Wellington, & Triggs, S. (2005). New Zealand court-referred restorative
justice pilot: Evaluation. Wellington, New Zealand: Ministry of Justice. Retrieved on December 27, 2005, from
http://www.justice.govt.nz/pubs/reports/2005/nz-court-referred-restorative-justice-pilot-evaluation
Paulin, J., Kingi, V., Lash, B. (2005). The Rotorua second chance community-managed restorative justice programme: An evaluation.
Wellington, New Zealand: Ministry of Justice. Retrieved on December 27, 2005, from
http://www.justice.govt.nz/pubs/reports/2005/rotorua-second-chance-community-managed-restorative-justice/index.html
Paulin, J., Kingi, V., Lash, B. (2005). The Wanganui community-managed restorative justice programme: An evaluation. Wellington,
New Zealand: Ministry of Justice. Retrieved on December 27, 2005, from http://www.justice.govt.nz/pubs/reports/2005/wanganuicommunity-managed-restorative-justice
Rugge, T., Bonta, J., & Wallace-Capretta, S. (2005). Evaluation of the collaborative justice project: A restorative justice program for
serious crime. Ottawa, Ontario, Canada: Public Safety and Emergency Preparedness Canada.
Therapeutic Community Programs for MICA Offenders
Sacks, S., Sacks, J. Y., McKendrick, K., Banks, S., & Stommel, J. (2004). Modified TC for MICA offenders: Crime outcomes. Behavioral
Sciences and the Law, 22(4): 477-501.
Van Stelle, K. R., & Moberg, D. P. (2004). Outcome data for MICA clients after participation in an institutional therapeutic community.
Journal of Offender Rehabilitation, 39(1): 37-62.
Vocational Education in Prison
Callan, V. & Gardner, J. (2005). Vocational education and training provision and recidivism in Queensland correctional institutions.
Queensland, Australia: National Center for Vocational Education Research (NCVER).
Lattimore, P. K., Witte, A. D., & Baker, J. R. (1990). Experimental assessment of the effect of vocational training on youthful property
offenders. Evaluation Review, 14(2): 115-133.
Saylor, W. G., & Gaes, G. G. (1996). PREP: A study of "rehabilitating" inmates through industrial work participation, and vocational and
apprenticeship training. Washington, DC: U.S. Federal Bureau of Prisons.
Work Release Programs From Prison
Jeffrey, R. & Woolpert, S. (1974). Work furlough as an alternative to incarceration. The Journal of Criminology, 65(3): 405-415.
LeClair, D. P. & Guarino-Ghezzi, S. (1991). Does incapacitation guarantee public safety? Lessons from the Massachusetts furlough
and prerelease program. Justice Quarterly, (8)1: 9-36
Turner, S. M. & Petersilia, J. (1996). Work release in Washington: Effects on recidivism and corrections costs. Prison Journal, 76(2): 138-164.
Waldo, G. & Chiricos, T. (1977). Work release and recidivism: An empirical evaluation of a social policy. Evaluation Quarterly, 1(1): 87-108.
Correctional Industries Programs in Prison, continued
Maguire, K. E., Flanagan, T. J., & Thornberry, T. P. (1988). Prison labor and recidivism. Journal of Quantitative Criminology, 4(1): 3-18.
Smith, C. J., Bechtel, J., Patricky, A., & Wilson-Gentry, L. (2005). Correctional industries preparing inmates for re-entry: Recidivism &
post-release employment. Final draft report. Email from author.
Day Fines (Compared to Standard Probation)
Turner, S. & Greene, J. (1999). The FARE probation experiment: Implementation and outcomes of day fines for felony offenders in
Maricopa County. The Justice System Journal, 21(1): 1-21.

Document No. 06-01-1201
Washington State Institute for Public Policy

The Washington State Legislature created the Washington State Institute for Public Policy in 1983. A Board of Directors—representing the
legislature, the governor, and public universities—governs the Institute and guides the development of all activities. The Institute’s mission is to carry
out practical research, at legislative direction, on issues of importance to Washington State.

 

 
