
This Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools can be used to document program operations and outcomes and to build evidence for the efficacy of peer respites. It is intended for use by peer respite program staff, managers, and administrators.

In a world of limited resources, conducting evaluation can be a challenge. We created this guide in response to frequent requests for practical, low-cost or no-cost tools that programs can use to evaluate themselves.

This toolkit includes recommendations on best practices in self-evaluation and data monitoring, based on techniques used by other peer respites and in the world of program evaluation. The Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools provides basic, practical guidance on developing a logic model, identifying outcomes, selecting measures/indicators, collecting and analyzing data, and reporting findings.

Key terms used throughout:

  • Evaluation: A systematic and objective assessment of an ongoing or completed project, program, or policy. Evaluations are undertaken to (a) improve the performance of existing interventions or policies, (b) assess their effects and impacts, and (c) inform decisions about future programming. Evaluations are formal analytical endeavors involving systematic collection and analysis of qualitative and quantitative information.
  • Self-Evaluation: An evaluation by those who are entrusted with the design and implementation of a project or program.
  • Data Monitoring: The performance and analysis of routine measurements to detect changes in status. Monitoring is used to inform managers about the progress of an ongoing intervention or program, and to detect problems that may be addressed through corrective actions.
  • Logic Model: A logic model, often a visual representation, provides a road map showing the sequence of related events connecting the need for a planned program with the program's desired outcomes and results.
  • Outcome: A result or effect that is caused by or attributable to the program.
  • Indicator: A quantitative or qualitative variable that provides a reliable means to measure a particular phenomenon or attribute.
  • Data: Information collected in the process of evaluation. Data gathered during an evaluation are manipulated and analyzed to yield findings that serve as the basis for conclusions and recommendations.
  • Findings: Factual statements about a project or program that are based on empirical evidence. Findings include statements and visual representations of the data, but not interpretations, judgments, or conclusions about what the findings mean or imply.


  • Introduction
  • Why Self-Evaluate? Knowledge is power!

    Most funders require some kind of data collection and reporting. However, there are many more reasons to collect and report data about peer respites. Evaluations provide information about a program’s impact and potential. Gathering information about program impact can help peer respite leadership demonstrate that their programs are really making a difference in people’s lives.

    Sharing information with the community can be a powerful way to educate the public about peer respites and encourage community buy-in. This information also helps the community make informed decisions about the program.

    The information you gather in a self-evaluation can also be used for quality improvement purposes: Understanding what works well and what doesn’t is a first step in ensuring the peer respite is reaching its goals and objectives.

    Finally, by documenting the impact of your peer respite, you have a chance to contribute to the evidence base – research and results showing that peer respites have a positive impact on people’s lives and on the communities in which they operate. As peer respites continue to expand throughout the country, there is an increasing need to demonstrate their impact. Information that shows the effectiveness of peer respites can help ensure that these programs receive ongoing funding. It also helps make the case for opening new peer respites.


  • What’s New in this Version of the Toolkit

    In 2014, Live & Learn and Human Services Research Institute, with support from the National Empowerment Center, published the Toolkit for Evaluating Peer Respites. Through our consulting and research since then, we found that programs, governments, and advocates would benefit from a revision to the Toolkit. Specifically, this updated version focuses on concrete, actionable recommendations on “best practices” in self-evaluation (or other low-cost/low-resource approaches).

    Whereas the 2014 Toolkit explored a variety of options for formal and informal evaluation of peer respites, this version is focused on establishing a shared framework for self-evaluation that can be used by peer respite staff on an ongoing basis without extensive hands-on involvement of researchers. We advocate for a shared framework because consistency in measurement across peer respites helps build stronger evidence for their real-world effectiveness!


  • Key Considerations for Evaluation: Dos, Don’ts, and Ethics

    This toolkit is not meant to be a comprehensive how-to guide for evaluation. Rather, it is meant to provide a quick overview of essential information for conducting simple evaluations of peer respites. This section outlines a few basic pointers to keep in mind as you put your evaluation together.

    Further along in this toolkit we discuss Ethical Considerations for evaluation in depth. Although ethics is not a formal “step” in the evaluation process, we strongly encourage you to review these considerations – and keep ethics in mind – at every stage of evaluation.

  • Step 1: Planning and Preparation

    The kind of data you collect – and how you collect and analyze it – depends on what you want to know about your peer respite.

  • Laying Out Goals

    The very first step is to clearly state your program’s goals. By doing so, you define what the program is meant to do and how it could be improved. Peer respites’ goals are wide-ranging and include fostering recovery and empowerment, promoting community participation and togetherness, and supporting guests to make their own choices.

    Some goals are related to outcomes (such as improving guests’ lives), and others might be related to program activities (such as providing high-quality support or reaching underrepresented groups).

    Below is a set of core goals that are common to peer respites around the country:

    • Provide recovery-oriented services
    • Offer high-quality peer support
    • Create a safe and welcoming environment
    • Ensure the people who use the respite are representative of the community in terms of race, ethnicity, culture, age, gender identity, sexual orientation, etc.
    • Connect people with useful resources after they leave the respite
    • Promote stronger grassroots advocacy and a more recovery-oriented mental health system
    • Enhance self-sufficiency, engagement in self-advocacy, activation, social connectedness, physical and mental health, and quality of life
    • Reduce or avoid use of psychiatric emergency services and inpatient hospitalization


  • Acknowledgements

    The Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools was created by Laysha Ostrow, PhD of Live & Learn, Inc. and Bevin Croft, PhD of Human Services Research Institute.

    Contact: Laysha Ostrow or Bevin Croft

    We would like to thank everyone who contributed to the creation of this guidebook, including Carina Smith of Live & Learn, Inc. and Tori Morrison of California Polytechnic State University, San Luis Obispo, and the reviewers:

    • Faith Boersma, State of Wisconsin Department of Health Services
    • Michael Lane, Consumers Self Help Center
    • Sera Davidow, Western Massachusetts Recovery Learning Community
    • Adrian Camp, 2nd Story Peer Respite
    • Elizabeth Siantz, University of California, San Diego
    • Yana Jacobs, Foundation for Excellence in Mental Health Care
    • Danny van Leeuwen, Health Hats
    • Ben Cichocki, Human Services Research Institute
    • Virginia Mulkern, Human Services Research Institute

    This project was funded in part under a contract with the National Empowerment Center.