This Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools can be used to document program operations and outcomes and to build evidence for the efficacy of peer respites. It is intended for use by peer respite program staff, managers, and administrators.

In a world of limited resources, conducting evaluation can be a challenge. We created this guide in response to frequent requests for practical, low-cost or no-cost tools that can be used by programs to evaluate themselves.

This toolkit includes recommendations on best practices in self-evaluation and data monitoring based on techniques used by other peer respites and in the world of program evaluation. The Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools provides basic, practical guidance on developing a logic model, identifying outcomes, selecting measures/indicators, collecting and analyzing data, and reporting findings.

Key terms used throughout this guide:

  • Evaluation: A systematic and objective assessment of an ongoing or completed project, program, or policy. Evaluations are undertaken to (a) improve the performance of existing interventions or policies, (b) assess their effects and impacts, and (c) inform decisions about future programming. Evaluations are formal analytical endeavors involving systematic collection and analysis of qualitative and quantitative information.
  • Self-Evaluation: An evaluation by those who are entrusted with the design and implementation of a project or program.
  • Data Monitoring: The performance and analysis of routine measurements to detect changes in status. Monitoring is used to inform managers about the progress of an ongoing intervention or program, and to detect problems that may be addressed through corrective actions.
  • Logic Model: A logic model, often a visual representation, provides a road map showing the sequence of related events connecting the need for a planned program with the program's desired outcomes and results.
  • Outcome: A result or effect that is caused by or attributable to the program.
  • Indicator: A quantitative or qualitative variable that provides a reliable means to measure a particular phenomenon or attribute.
  • Data: Information collected in the process of evaluation. Data gathered during an evaluation are manipulated and analyzed to yield findings that serve as the basis for conclusions and recommendations.
  • Findings: Factual statements about a project or program that are based on empirical evidence. Findings include statements and visual representations of the data, but not interpretations, judgments, or conclusions about what the findings mean or imply.



  • Introduction
  • Why Self-Evaluate? Knowledge is power!

Most funders require some kind of data collection and reporting. However, there are many more reasons to collect and report data about peer respites. Evaluations provide information about a program’s impact and potential. Gathering information about program impact can help peer respite leadership demonstrate that their programs are really making a difference in people’s lives.

Sharing information with the community can be a powerful way to educate the public about peer respites and encourage community buy-in. This information also helps the community make informed decisions about the program.

    The information you gather in a self-evaluation can also be used for quality improvement purposes: Understanding what works well and what doesn’t is a first step in ensuring the peer respite is reaching its goals and objectives.

Finally, by documenting the impact of your peer respite, you have a chance to contribute to the evidence base – research and results that show peer respites have a positive impact on people’s lives and on the communities in which they operate. As peer respites continue to expand throughout the country, there is an increasing need to demonstrate their impact. Information that shows the effectiveness of peer respites can help ensure that programs like these receive ongoing funding. This information also helps make the case for opening new peer respites.


  • What’s New in this Version of the Toolkit

    In 2014, Live & Learn and Human Services Research Institute, with support from the National Empowerment Center, published the Toolkit for Evaluating Peer Respites. Through our consulting and research since then, we found that programs, governments, and advocates would benefit from a revision to the Toolkit. Specifically, this updated version focuses on concrete, actionable recommendations on “best practices” in self-evaluation (or other low-cost/low-resource approaches).

    Whereas the 2014 Toolkit explored a variety of options for formal and informal evaluation of peer respites, this version is focused on establishing a shared framework for self-evaluation that can be used by peer respite staff on an ongoing basis without extensive hands-on involvement of researchers. We advocate for a shared framework because consistency in measurement across peer respites helps build stronger evidence for their real-world effectiveness!



For free access to the full version of the Guidebook for Peer Respite Self-Evaluation, sign up here: