This Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools can be used to document program operations and outcomes and to build evidence for the efficacy of peer respites. It is intended for use by peer respite program staff, managers, and administrators.

In a world of limited resources, conducting evaluation can be a challenge. We created this guide in response to frequent requests for practical, low-cost or no-cost tools that programs can use to evaluate themselves.

This toolkit includes recommendations on best practices in self-evaluation and data monitoring based on techniques used by other peer respites and in the world of program evaluation. The Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools provides basic, practical guidance on developing a logic model, identifying outcomes, selecting measures/indicators, collecting and analyzing data, and reporting findings.

Key terms used throughout this guide:

• Evaluation: A systematic and objective assessment of an ongoing or completed project, program, or policy. Evaluations are undertaken to (a) improve the performance of existing interventions or policies, (b) assess their effects and impacts, and (c) inform decisions about future programming. Evaluations are formal analytical endeavors involving systematic collection and analysis of qualitative and quantitative information.
• Self-evaluation: An evaluation by those who are entrusted with the design and implementation of a project or program.
• Data monitoring: The performance and analysis of routine measurements to detect changes in status. Monitoring is used to inform managers about the progress of an ongoing intervention or program, and to detect problems that may be able to be addressed through corrective actions.
• Logic model: A logic model, often a visual representation, provides a road map showing the sequence of related events connecting the need for a planned program with the program's desired outcomes and results.
• Outcome: A result or effect that is caused by or attributable to the program.
• Indicator: A quantitative or qualitative variable that provides a reliable means to measure a particular phenomenon or attribute.
• Data: Information collected in the process of evaluation. Data gathered during an evaluation are manipulated and analyzed to yield findings that serve as the basis for conclusions and recommendations.
• Findings: Factual statements about a project or program that are based on empirical evidence. Findings include statements and visual representations of the data, but not interpretations, judgments, or conclusions about what the findings mean or imply.



  • Introduction
  • Why Self-Evaluate? Knowledge is power!

    Most funders require some kind of data collection and reporting. However, there are many more reasons to collect and report data about peer respites. Evaluations provide information about a program’s impact and potential. Gathering information about program impact can help peer respite leadership demonstrate that their programs are really making a difference in people’s lives.

    Sharing information with the community can be a powerful way to educate the public about peer respites and encourage community buy-in. This information also helps the community make informed decisions about the program.

    The information you gather in a self-evaluation can also be used for quality improvement purposes: Understanding what works well and what doesn’t is a first step in ensuring the peer respite is reaching its goals and objectives.

    Finally, by documenting the impact of your peer respite, you have a chance to contribute to the evidence base – research and results that show peer respites have a positive impact on people’s lives and on the communities in which they operate. As peer respites continue to expand throughout the country, there is an increasing need to demonstrate their impact. Information that shows the effectiveness of peer respites can help ensure that programs like this receive ongoing funding. This information also helps to make a case for opening new peer respites.


  • What’s New in this Version of the Toolkit

    In 2014, Live & Learn and Human Services Research Institute, with support from the National Empowerment Center, published the Toolkit for Evaluating Peer Respites. Through our consulting and research since then, we found that programs, governments, and advocates would benefit from a revision to the Toolkit. Specifically, this updated version focuses on concrete, actionable recommendations on “best practices” in self-evaluation (or other low-cost/low-resource approaches).

    Whereas the 2014 Toolkit explored a variety of options for formal and informal evaluation of peer respites, this version is focused on establishing a shared framework for self-evaluation that can be used by peer respite staff on an ongoing basis without extensive hands-on involvement of researchers. We advocate for a shared framework because consistency in measurement across peer respites helps build stronger evidence for their real-world effectiveness!



  • Key Considerations for Evaluation: Dos, Don’ts, and Ethics

    This toolkit is not meant to be a comprehensive how-to guide for evaluation. Rather, it is meant to provide a quick overview of essential information for conducting simple evaluations of peer respites. This section outlines a few basic pointers to keep in mind as you put your evaluation together.

    Further along in this toolkit we discuss Ethical Considerations for evaluation in depth. Although ethics review is not a formal “step” in the evaluation process, we strongly encourage you to review these considerations – and keep ethics in mind – at every stage of evaluation.


  • Step 1: Planning and Preparation

    The kind of data you collect – and how you collect and analyze it – depends on what you want to know about your peer respite.

  • Laying Out Goals

    The very first step is to clearly state your program’s goals. By doing so, you define what the program is meant to be doing and how it could be improved. Peer respites’ goals are wide-ranging and include fostering recovery and empowering the guest, promoting community participation and togetherness, and supporting guests to make choices.

    Some goals are related to outcomes (such as improving guests’ lives), and others might be related to program activities (such as providing high-quality support or reaching underrepresented groups).

    Below are a set of core goals that are common to peer respites around the country:

    • Provide recovery-oriented services
    • Offer high-quality peer support
    • Create a safe and welcoming environment
    • Ensure the people who use the respite are representative of the community in terms of race, ethnicity, culture, age, gender identity, sexual orientation, etc.
    • Connect people with useful resources after leaving respite
    • Promote stronger grassroots advocacy and a more recovery-oriented mental health system
    • Enhance self-sufficiency, engagement in self-advocacy, activation, social connectedness, physical and mental health, and quality of life
    • Reduce or avoid use of psychiatric emergency services and inpatient hospitalization


  • Creating Your Logic Model

    A logic model lays the groundwork for any evaluation. It should spell out your program’s goals, the anticipated outcomes (or desired changes) tied to those goals, and the resources and activities needed to meet them.

    The logic model will help you decide what you want to measure. Below is a suggested logic model for peer respites. We started filling it in to give you an idea of some sample content. You can print it out and fill in your own ideas by downloading the Sample Logic Model in Word here.

    As in the sample below, we suggest measuring outcomes at a variety of levels – to capture all the different players who may be affected by your program: the guest, the staff, the program itself, the mental health system, or the community.


    Here are some things to think about when you're working on your logic model:

    • If we get it right, what will this program look like 10 or 20 years from now?
    • What are the outcomes for which we want to be held accountable?
    • What will change as a result of the peer respite, and how will it change?
    • What activities must we undertake to achieve measurable results?

    Blank Logic Model
    Example Logic Model



  • Step 2: Gathering Data

    This section walks you through key activities involved in identifying data collection tools and methods and collecting the data.

  • Timing and Measures

    Some instruments that have been used or are recommended for use by peer respites include guest surveys as well as instruments that examine processes of peer support and changes in service use.

  • When to Collect Information from Guests

    Typically, program evaluators conduct surveys of people participating in a program (in this case peer respite guests) at three or more time points:

    • When the person first enters the respite (the baseline survey). This survey is meant to show how the person is doing before they receive any support. The baseline survey usually takes place at the peer respite itself, but it is very important that it be given before or at the very start of a person’s stay; otherwise, it might reflect the person’s experiences at the respite, which could lead you to underestimate the program’s impact.
    • Just before or after the person leaves the respite (the exit survey). This survey is meant to show how the person is doing directly after receiving support at the peer respite. It is also a good time to ask questions about the person’s experience at the respite. Like the baseline survey, the exit survey usually takes place at the peer respite, but it could also take place in the community within 48 hours of the person leaving.
    • After some time has passed since the person was at the respite – two weeks, one month, or three months after (the follow-up survey). This survey is meant to measure the longer-term impact of the peer respite program on the person’s well-being and service needs. Typically, the follow-up survey is given in the community rather than at the peer respite facility, but it may also take place over the phone. (See the section on Maximizing Response Rates, below, for some important considerations for planning and conducting follow-up surveys.)
    In summary:

    • Baseline survey: within 24 hours of when the person enters the peer respite. Ask about demographics, guest outcome measures, and service utilization.
    • Exit survey: just before the person leaves the peer respite or within 48 hours of leaving. Ask about guest outcome measures, program experience, peer support, and guest contact information.
    • Follow-up survey: two weeks, one month, or three months after the person leaves the peer respite. Ask about guest outcome measures and service utilization.


  • Guest Demographics and Identifiers

    No matter which instruments you choose to use, you will always want to be sure to collect basic demographic information, including race/ethnicity, age, and gender. Typically, you only need to ask a person these questions once, usually during the first interview.

    You will also need to create a unique identifying number for each guest so that you can compare their responses before and after using the peer respite. By using a number that is unique to your evaluation, you can protect guest confidentiality. You will also want to keep track of what kind of survey it is (baseline, exit, or follow-up).
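    As an illustration, here is a minimal Python sketch of one way to assign unique guest IDs and label surveys. The ID format, file layout, and names are hypothetical, not part of any required scheme.

    ```python
    # Name-to-ID linkage: kept in a separate, password-protected file that
    # only key evaluation staff can open; surveys never carry names.
    linkage = [
        {"guest_id": "PR-0001", "name": "(kept in the protected file only)"},
        {"guest_id": "PR-0002", "name": "(kept in the protected file only)"},
    ]

    def next_guest_id(rows):
        """Assign the next sequential ID, e.g. 'PR-0003' (hypothetical format)."""
        return f"PR-{len(rows) + 1:04d}"

    new_id = next_guest_id(linkage)

    # Each survey record carries only the ID and the survey type.
    survey_record = {"guest_id": new_id, "survey_type": "baseline"}  # or "exit", "follow-up"
    print(survey_record)  # {'guest_id': 'PR-0003', 'survey_type': 'baseline'}
    ```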

    Here is a simple form you can use to gather demographic information and identifiers.


  • Guest Outcomes and Program Experience
    Recovery and wellbeing are highly individual. However, many widely used survey measures can reflect important outcomes such as quality of life, housing stability, and the development of social relationships and natural supports. A peer respite’s focus is explicitly non-clinical, but there may still be measurable improvements in clinical domains such as mental health-related functioning (and your funder may require that you measure these things).

    Here are some suggested data collection instruments that are frequently used in mental health services research. You can download each by clicking on the instrument’s name.

    • Short Form Health Survey: a set of widely used health status measures for routine assessment of care outcomes. Available in over 170 languages; 36 questions. More information: RAND 36-Item Short Form Health Survey.
    • Empowerment Scale: a scale measuring perceptions of choice related to events, assurance of living conditions, and likelihood of good or bad things happening. Available in English, Swedish, Japanese, Dutch, and Portuguese; 20 questions. More information: NARIC Making Decisions Empowerment Scale.
    • Sense of Community Index (SCI): measures sense of community across four elements: membership, influence, meeting needs, and shared emotional connection. Available in English, French, Spanish, and Portuguese; 12 questions. More information: Sense of Community Measures.
    • Sense of Community Index-Disability: a version of the SCI specifically adapted for people labeled with mental disorders. English only; 11 questions. More information: Sense of Community Measure for People with Mental Illness.
    • World Health Organization Quality of Life (WHOQOL)-BREF: a widely used cross-cultural quality of life measure addressing physical and psychological health, social relationships, and environment. Available in over 20 languages; 26 questions. More information: World Health Organization Quality of Life (WHOQOL)-BREF.

    If you would like psychometric information on any of these measures, please click here. This information may be useful if you are writing a grant application or when reporting results.

    Many peer respites have developed their own survey measures, which may be useful to you and promote consistency in measurement. Here is a list with downloadable links:

    • 2nd Story Anonymous Guest Feedback Survey
    • Afiya Peer Respite Impact Survey
    • Georgia Peer Support and Wellness Center Feedback Form
    • Rose House Survey
    • Wisconsin Peer Run Respite Arrival Survey (the Departure and Follow-Up Surveys are located in the Resources Section)

    You should be sure to use the same instruments at baseline, exit, and follow-up so that you can document changes over time.


  • Peer Support

    Measuring the process of peer support is important in demonstrating that peer respites are fundamentally different from other crisis services. It can also help you learn more about how peer support is being provided in your respite, which in turn can help you describe respite activities and areas of peer support strength that can be touted to funders and other stakeholders. Just as importantly, it can help you understand areas where staff training might be needed.

    Information about the process of peer support can also help you interpret the outcomes you are seeing in your evaluation; for example, you may find that the degree of change for guests is related to the type and amount of peer support they received.

    Many peer respites nationwide use Intentional Peer Support (IPS), a trauma-informed model that emphasizes holistic wellness and personal growth within the context of healing relationships. The IPS Core Competencies measure can be used to measure how guests experience peer support in your respite, even if the staff are not trained in IPS.

    Download the IPS Core Competencies Measure here.


  • Service Utilization & Cost

    Public systems place a strong emphasis on cost. Policymakers frequently want to see cost savings from new programs (or at least costs that are on par with existing programs) to justify the investment. Peer respites may save money by preventing expensive psychiatric emergency service and inpatient use. Funders are always interested in program costs – and especially cost savings – particularly if they want to replicate the program in other areas.

    One way to understand your peer respite’s impact on cost is to understand whether people use it instead of other services, like emergency rooms or inpatient hospitalization. If you notice that people are using fewer inpatient and emergency services after the respite compared to before, you can make a case that the peer respite may be cost-effective. Here’s a basic set of questions you might use to ask guests about the services they used before and after their respite stay (at baseline and follow-up).

    Because of the complex processes that determine whether (or why) an individual uses the peer respite versus other acute or emergency services, it is not always accurate to compare the cost of a peer respite day to the cost of a hospital day in a budget or billing statement. There are other factors to consider as well. For example, people may use peer respites differently than they use other crisis services, and they may use a combination of peer respite services and other inpatient or emergency services depending on their situation.

    Cost and cost-effectiveness analysis is a specialized type of research, and you may need access to confidential or sensitive data from your local public health system. If you want to demonstrate the relationship between costs and outcomes, it is advisable to consult with an expert in these methods.


  • Considerations for Collecting the Data

    This section covers different considerations for collecting your data. Many of these considerations involve trade-offs between ease of collection and data quality. Your decisions should ultimately be made based on the resources you are able to commit to the evaluation (time and funding). The goal is as strong an evaluation as possible given the available resources.

  • Open-ended vs. Closed-ended Questions

    Open-ended questions allow guests more freedom in answering, which can help you uncover or identify information and topics you might not have thought about. But answers to open-ended questions can be more difficult to analyze, because of the amount of time it can take to interpret and organize the responses.

    Closed-ended questions are easier to analyze in large quantities. But they limit the type and amount of information that guests can provide as the information must fit into the predetermined response categories.

    We discuss how to work with both open-ended and closed-ended data in the next section.


  • Self-administered Questionnaires vs. Using an Interviewer

    A self-administered questionnaire is a survey instrument that has been designed specifically to be completed by a guest without an interviewer reading the questions and marking their responses. Questionnaires can be printed, or they can be administered on a computer or website like SurveyMonkey. If questionnaires are collected on paper, the guest may have the choice to return the completed questionnaire by mail (usually with a postage-paid envelope) or deposit it at a secure location at the peer respite (such as a locked box).

    Guests may be more likely to report sensitive or personal information in a self-administered format than in an in-person meeting. Therefore, this might be a preferred method for data collection with guests to protect anonymity. During in-person meetings, some guests may want to present themselves in the best possible light and make a good impression on the interviewer by being “agreeable.” Self-administered questionnaires, therefore, might result in more honest feedback, particularly if the interviewers are staff members.

    There are also drawbacks to self-administered questionnaires. They require higher levels of literacy, and web surveys require internet access. Response rates may also be lower without an interviewer. In-person surveys may be more appropriate if you want to ask guests more complicated questions, collect open-ended and in-depth data about guest experiences, or ensure more complete responses.


  • Face-to-face vs. Telephone

    Face-to-face surveys require only that the guest and interviewer speak the same language and have basic verbal and listening skills; no literacy is required. A personable and conscientious interviewer can increase response rates, maintain motivation during longer questionnaires, follow up on responses, clarify questions, and help guests remember their experiences.

    Although telephone surveys place demands on a person’s listening skills and require access to a telephone, they are less resource-intensive for peer respites. Because you do not have to travel around the community to meet with people, you may be able to reach more former guests, particularly for follow-up surveys after guests have left the peer respite.


  • Maximizing Response Rates

    It is important that as many guests as possible participate in the evaluation. The more guests who participate, the more confident you can be that the results reflect all guests’ experiences, not just the experiences of those surveyed. The number of guests who participate in a survey divided by the total number of guests who use the respite is called the response rate. For example, if 30 of the 50 guests who stayed at the respite complete a survey, the response rate is 30 ÷ 50, or 60%.

    For baseline and exit surveys, it will be easiest to ask guests to participate while they are at the peer respite. However, there may be instances where it is not appropriate for a guest to participate while at the respite, so you may want to arrange for them to take a survey before or after their stay.

    Once a person has left the respite, it can be a challenge to get back in touch with them to complete a follow-up survey. A simple form that collects information about how best to keep in touch is essential if you want to conduct follow-up surveys. You can download a sample Peer Respite Guest Contact Form here.

    The longer the period between when the guest leaves the respite and the follow-up survey, the more difficult it might be to connect. If you are only able to survey a small number of former guests, the information you collect may not be useful, because the group you were able to follow up with may be different from the people who did not stay in touch. In addition, conducting follow-up surveys can take a significant amount of staff time.

    Below are some tips for increasing participation in follow-up surveys:



  • Peer Interviewers

    If you have the resources, you may want to consider hiring paid or volunteer interviewers who have lived experience of the mental health system – peer interviewers – to survey guests. The shared lived experience can help guests feel more comfortable participating in the survey and providing answers. This could lead to a higher proportion of guests who respond to the survey – that is, a higher response rate.

    Peer interviewers should be focused on mutuality, connection and respect. They can meet with guests regularly to educate them about the study, explain and ensure informed consent, and administer surveys. The job requires training and orientation to the project, coordination with peer respite staff, and regular contact with anyone else working on the evaluation.

    In addition to lived experience with mental health issues or services, peer interviewers should also have a combination of experience and training or education in health services research or a related field. They should also have the interpersonal skills needed to establish and maintain effective working relationships with diverse groups of people. The job description can require that peer interviewers have knowledge of research methods, data collection, and program evaluation. However, you will also want to offer a basic overview of these topics in the orientation training. You can also provide peer interviewers ongoing support to ensure they understand how their activities fit within the overall research process.

    To reduce bias in data collection, a peer interviewer role should be separate from a peer supporter role. If peer interviewers were to hold dual roles as peer supporters, it could create a conflict of interest that could lead to peer interviewers intentionally or unintentionally introducing bias into the research process. For example, guests might not feel comfortable reporting negative outcomes or dissatisfaction with the program to individuals they had worked with as peer support staff.

    If you’re interested in knowing more about peer interviewers, here is an article describing how the peer interviewer process worked at one peer respite.


  • Survey Incentives

    You do not necessarily have to provide financial incentives for guests to participate in data collection, and you may not have the resources to do so. Luckily, many people are attracted to research and want to participate for other reasons, including wanting to contribute to science and to see programs improve, succeed, and spread.

    However, payments to guests – referred to as "incentives" or "stipends" – are frequently used to encourage survey participation. These incentives improve the likelihood that a person will participate, and they express appreciation for the guests’ time and attention. There are several ways to pay incentives:

    1. Guests are paid cash after they complete an interview or survey.
    2. Guests are given a gift card after they complete an interview or survey.
    3. Guests are entered into a “lottery” where some randomly receive a payment for their participation (either cash or a gift card).
    4. Guests are given either cash or a gift card before they complete an interview or survey to motivate participation (more frequently used with mail or web data collection).

    In a basic peer respite evaluation that involves a brief guest survey, these stipends can range between $5 and $20, depending on the time it takes to complete the survey, whether the guest traveled to participate, and the type of survey being conducted (sometimes guests are offered larger incentives for follow-up surveys to promote participation).

    There may be concerns that these types of payments could be coercive – in a sense, strong-arming guests to participate – especially for those who have limited financial resources. However, financial incentives demonstrate respect by recognizing guests’ contributions to science and their time and effort.



  • Step 3: Working with the Data

    How you work with the data depends on the types of data you collected and how you collected them. This section covers some basic information on how to work with closed-ended and open-ended data. It also describes some resources that might help you analyze the data you collect.

    Here are some examples of how peer respites work with data using some of the goals and measures we discuss in this guide:

    • Goal: offer high-quality peer support. Data source: IPS Core Competencies measure. Analytic approach: compare average scores over time.
    • Goal: ensure the people who use the respite are representative of the community in terms of race, ethnicity, culture, age, gender identity, sexual orientation, etc. Data source: guest demographics. Analytic approach: compare guest demographics to the demographics of the population in your area (find demographics for your area at https://www.census.gov/data.html) or of the target population (for example, people served by the county or state mental health agency).
    • Goal: enhance self-sufficiency, engagement in self-advocacy, social connectedness, physical and mental health, and quality of life. Data source: guest survey. Analytic approach: compare overall survey scores and responses to individual items at baseline, exit, and follow-up.
    • Goal: reduce or avoid use of psychiatric emergency services and inpatient hospitalization. Data source: guest survey and local service utilization data (if available). Analytic approach: compare guest responses related to psychiatric emergency services and inpatient hospitalization at baseline and follow-up; you may also work with your local mental health authority to examine rates of service use at hospitals and other facilities before and after peer respite use.
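    For the representativeness goal above, a small Python sketch (using the pandas library) shows what the comparison might look like. All counts and benchmark percentages here are invented for illustration.

    ```python
    import pandas as pd

    # Hypothetical guest counts by race/ethnicity from your demographics form.
    guests = pd.Series({"Black": 18, "White": 40, "Latino": 12, "Asian": 5, "Other": 5})

    # Hypothetical area benchmarks (proportions), e.g. drawn from census.gov data.
    area = pd.Series({"Black": 0.20, "White": 0.55, "Latino": 0.15, "Asian": 0.07, "Other": 0.03})

    comparison = pd.DataFrame({
        "guest_share": (guests / guests.sum()).round(2),  # share of respite guests
        "area_share": area,                               # share of local population
    })
    print(comparison)  # large gaps suggest groups the respite may be under-reaching
    ```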


  • Entering Data

    After you have collected survey data, you will need to put it into a format that can be analyzed. For closed-ended data, this means converting survey answers into numbers. Most scales – including the scales we recommend in this toolkit – include this information. For example, the Sense of Community Index instructs you to code answers as follows: True=1, False=0. Scales ranging from 1 to 5 (or 1 to some other number) are also common; these are called Likert scales.

    The numbers can be entered into a simple spreadsheet to create a database that you can then use to analyze the data. Microsoft Excel is commonly used for this purpose, or you could use a free program such as Google Sheets or OpenOffice.

    Typically, databases have the name of each survey question in columns along the top row, and each survey response is entered as a row. We created an example template for the Sense of Community Index here.
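    Here is a minimal sketch of that layout in Python with the pandas library. The guest IDs are hypothetical, and only the first three of the Sense of Community Index’s 12 items are shown for brevity.

    ```python
    import pandas as pd

    # One row per survey response; one column per question.
    # True/False answers are already coded as 1/0 (True=1, False=0).
    data = pd.DataFrame([
        {"guest_id": "PR-0001", "survey_type": "baseline", "q1": 1, "q2": 0, "q3": 1},
        {"guest_id": "PR-0001", "survey_type": "exit",     "q1": 1, "q2": 1, "q3": 1},
        {"guest_id": "PR-0002", "survey_type": "baseline", "q1": 0, "q2": 0, "q3": 1},
    ])

    # Save in a spreadsheet-friendly format for later analysis.
    data.to_csv("sci_responses.csv", index=False)
    ```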

    Open-ended data can also be entered into a spreadsheet in the same format, with wider columns to make space for larger amounts of text. Alternatively, you could type survey responses into a text document (Microsoft Word, Google Docs, or OpenOffice) and organize them there. When you type written survey responses, enter them exactly as they are written so you preserve the person’s intended meaning. If you can’t read a person’s writing, you can indicate that in the document using brackets or notes.


  • Analyzing Closed-Ended Data

    Once you have entered closed-ended survey responses into a spreadsheet, you can use the basic spreadsheet functions to analyze the data.

    Scoring Data
    Most surveys – including the surveys we recommend in this toolkit – come with scoring information that is a first step in analyzing data. Typically this involves adding up the responses to create a total score. This score is then used to compare responses between guests or for the same guest over time. You may also be able to add up responses to particular questions to generate a sub-scale that tells you about a particular aspect of what you are trying to measure. For example, the Sense of Community Index generates a total score (the sum of the answers to questions 1 to 12) as well as four sub-scales: Membership, Influence, Reinforcement of Needs, and Shared Emotional Connection. These sub-scales are the sums of subsets of questions and can tell you about particular aspects of community connectedness.
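    A short Python sketch of this kind of scoring, again using pandas. The item-to-subscale groupings below are placeholders only; use the groupings from the scale’s official scoring instructions.

    ```python
    import numpy as np
    import pandas as pd

    items = [f"q{i}" for i in range(1, 13)]

    # Hypothetical coded responses (1=True, 0=False) for three surveys.
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(0, 2, size=(3, 12)), columns=items)

    # Total score: the sum of questions 1 to 12.
    df["total"] = df[items].sum(axis=1)

    # Placeholder sub-scale groupings -- substitute the official item lists.
    subscales = {
        "membership": ["q1", "q2", "q3"],
        "influence": ["q4", "q5", "q6"],
        "reinforcement_of_needs": ["q7", "q8", "q9"],
        "shared_emotional_connection": ["q10", "q11", "q12"],
    }
    for name, cols in subscales.items():
        df[name] = df[cols].sum(axis=1)

    print(df[["total"] + list(subscales)])
    ```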

    Generating Summary Statistics
    For surveys that involve scales and sub-scales, you will want to create summary statistics such as the minimum, maximum, and mean (average). You can also compare responses from the same guest to see if scores change after staying at the peer respite.

    You can use spreadsheets to calculate summary statistics. There are many tutorials and resources available online that provide step-by-step guidance and tools, depending on the type of spreadsheet you are using.
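    As a sketch of those calculations in Python (the scores below are invented for illustration):

    ```python
    import pandas as pd

    # Hypothetical total scores for the same three guests at baseline and follow-up.
    scores = pd.DataFrame({
        "guest_id": ["PR-0001", "PR-0002", "PR-0003"],
        "baseline_total": [5, 7, 4],
        "followup_total": [8, 9, 6],
    })

    # Minimum, maximum, and mean at each time point.
    print(scores[["baseline_total", "followup_total"]].agg(["min", "max", "mean"]))

    # Change for each guest: positive values indicate higher scores after the stay.
    scores["change"] = scores["followup_total"] - scores["baseline_total"]
    print(scores[["guest_id", "change"]])
    ```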

    If you are using survey software (like SurveyMonkey), you can also create simple summaries within the web browser, or download your data in a spreadsheet format and work with it yourself.



  • Analyzing Open-Ended Data

    Open-ended survey questions can be very useful for flagging issues or concepts that you might not have covered in your closed-ended survey questions. They are an opportunity for guests to provide a range of feedback and information through the survey process. Open-ended responses can help you identify new and different ways of thinking about topics. For these same reasons, working with open-ended data can be challenging.

    One way to organize open-ended responses is to sort them into themes – or common threads among different responses. How you determine your themes depends on the original question and what you want to do with the data. Take the following scenario as an example:

    You want to know what aspects of the peer respite are important to guests, and you have a survey question that reads: “What did you like best about the peer respite?”

    You received ten responses:

    1. Getting to know Shari [another guest]
    2. Meeting new friends
    3. Meditation group
    4. Having dinner with staff and guests each night
    5. Vegetarian options
    6. Taking a break
    7. Restful
    8. WRAP group
    9. Meditating with Tyler [staff person who leads the meditation group]
    10. The groups

    You might divide the responses into the following themes:

    • Connecting with others (1, 2, 4)
    • Groups (3, 8, 9, 10)
    • Meditation (9, 3)
    • Food (4, 5)
    • Rest/taking a break (6, 7)

    Note that some responses were included in multiple themes. For example, “Having dinner with staff and guests each night” (response 4) was grouped into two themes: “Connecting with others” and “Food”. You may also create an “Other” theme for responses that are difficult to group with others.

    Once you have created a list of themes, you can count them to see how many times an issue or concept came up. This gives you a general sense of how many guests responded to a question in a particular way.
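    For example, a short Python sketch that tallies the themes from the example above:

    ```python
    from collections import Counter

    # The ten example responses mapped to themes (numbers match the list above;
    # some responses fall into more than one theme).
    themes_by_response = {
        1: ["Connecting with others"],
        2: ["Connecting with others"],
        3: ["Groups", "Meditation"],
        4: ["Connecting with others", "Food"],
        5: ["Food"],
        6: ["Rest/taking a break"],
        7: ["Rest/taking a break"],
        8: ["Groups"],
        9: ["Groups", "Meditation"],
        10: ["Groups"],
    }

    counts = Counter(t for themes in themes_by_response.values() for t in themes)
    for theme, n in counts.most_common():
        print(f"{theme}: {n}")
    ```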

    Although counts can be useful to see how most people feel about the peer respite, it may be that only one guest responded to a question in a way that you feel is particularly important. These responses could be pulled out – or even quoted – to highlight one person’s unique experience. If you use quotes, however, please be sure that the person cannot be identified by their response.


  • Outsourcing Your Analysis

    Oftentimes, students at local colleges or universities are required to conduct data analysis as part of their coursework or for a thesis or practicum. For students interested in program evaluation, analyzing your data could be a worthwhile project. Before you hand over any program data, however, it is essential that the data contain no information that could be used to identify a person. It is also important that the student have training in human subjects protection. You will also want to carefully review the student’s work for accuracy before reporting the results.



  • Step 4: Reporting Evaluation Data

    Many programs collect and report data for a simple reason: the funder requires it. This is common for most peer respites. You might also be conducting an evaluation or monitoring data to keep the community informed, or to contribute to the evidence about peer respites (or any combination of these things). How you report your data will depend on why you’re collecting it in the first place.

    If your program’s funding is contingent on certain types of data reporting, it is important to respond to those requirements. However, data that gets reported to funders may or may not be the kind of information the community is interested in, or that you are interested in knowing about your program. And these data are often not shared with the public, so they don’t contribute to the evidence base if nobody knows the evaluation was conducted. One way to think about data reported to funders is that those basic efforts represent the “floor” that you can build upon to support additional evaluation goals. Even if results do not indicate that the program has been working the way that you hoped it would, the results can be used to support efforts to make it better.


  • Sharing Results with the Public

    Providing members of the community – including guests, their families and friends, staff, other mental health system stakeholders, elected officials, and the public in general – with information about the respite promotes openness and transparency and may lead to greater community buy-in for your peer respite.

    By publicly sharing your findings, you are contributing to the evidence base for peer respites. It may be important to you to publish papers in peer-reviewed journals, share press releases with the media, or report to a larger stakeholder base (local or national advocates). You might want to present your results at a local or national conference. These forums each have different requirements for the types of information you present, the level of detail in describing your results, and the format of presentation.

    When possible, consider involving key stakeholders in the reporting process. This may involve sharing preliminary results with program staff, guests, local advocates, or others who have an interest in the peer respite. These individuals can review your work and comment on whether your work makes sense. They may also be able to offer alternative interpretations of the results. They can help identify things you may have overlooked or lend insights to complex findings. Make sure to build in time for stakeholder review to ensure that you are characterizing the program and its impact appropriately.

    However you decide to disseminate your results, it is important to make them available in multiple formats so they are accessible to a variety of stakeholders – peers, advocates, providers, guests, and the general public. For example, if you create a technical report for a funder, you could also create a one-page summary or infographic that highlights the most important points in simple language to share with elected officials or the public. Producing materials in multiple formats increases the impact of your evaluation work by reaching different audiences in different ways and ensures that all who contributed to the evaluation process can see the results.



  • Ethical Considerations

    Although conducting self-evaluation of a peer respite means that community members have the power to decide what questions are explored and have control over collecting and reporting data, there are still several ethical issues to consider. Ultimately, you should use your judgment, and consult with trusted advisors when needed.

    It is very unlikely that you will need approval from an Institutional Review Board (IRB; a research ethics board) to conduct a self-evaluation. However, you may be interested in taking the free online NIH certificate in human subjects protection, which will teach you about research ethics (and also provide you an official certificate): https://phrp.nihtraining.com/users/login.php

    Below we highlight some common ethical issues to watch for in your self-evaluation:

    1. Be sensitive in your plans to conduct data collection

      Evaluations and data collection techniques must be sensitive to program and guest values and to the potential time and energy burden on guests and staff. The work of evaluation shouldn’t be greater than the value the program gets from it. Data collection may be experienced as intrusive or present an undue burden to respite guests and staff. Just as the peer staff at respites work to ensure that their practices reflect the program mission, you should ensure that any evaluation-related activities are in concordance with the ethos of mutuality and shared power.


    2. Plan to use your data and make it meaningful

      If data are to be collected, they should be as meaningful as possible – informing ongoing program and community needs and resources, and helping others start or sustain peer respites. There is a whole spectrum of perspectives on what is important and why. You do not have to be committed to traditional or formal evaluation to be able to report something that is meaningful. While it might seem relatively easy to collect any data you may want, it is important to keep data collection efficient and consistent with your analysis plan to reduce the burden on guests and to contain the costs of the project.


    3. Consider conflicts of interest between those working on the evaluation and other staff and guests

      Individuals working on the evaluation need to balance honoring the confidentiality of guests participating in the research with being part of an endeavor that is inclusive and community-focused by definition. For example, although peer interviewers may technically be employees of the same agency, they should not attend regular team meetings of the peer supporters. This can be challenging, as team meetings at peer respites are often by nature open to all members of the community, and some interviewers will be interested in attending those meetings due to personal relationships with people in the house (both staff members and guests).


    4. Follow standards for informed consent and lack of coercion

      Although you may not have to document formal informed consent from the people you are collecting data from (i.e., having an Institutional Review Board-approved form that guests sign to acknowledge their rights), you should be aware of what typically goes into the informed consent process. All informed consent in research involving human subjects must include descriptions of:

      • What the project is about
      • Why the individual is eligible for the study
      • What risks, benefits, and alternatives are associated with the research
      • What rights they have as research participants.

      At a minimum, interviewers and others conducting any evaluation project must make it clear to guests that participation is voluntary: guests have the right not to participate at all or to stop participating at any time, and deciding not to participate will not result in any loss of services or supports or hurt their relationship with your peer respite. You may also want to have a protocol in place in case a person becomes uncomfortable or upset as a result of participating in the evaluation.


    5. Always maintain confidentiality and privacy of evaluation data

      Some may feel that confidentiality and privacy could be violated depending on the kind of data collected. It is important to secure privacy no matter what information is documented, and to ensure that people are contributing data voluntarily. Throughout the toolkit we have discussed ways to maintain confidentiality. They are summarized here:

      • Make sure all staff working on the evaluation have completed human subjects research training. A basic, free, widely used online training can be accessed here: https://phrp.nihtraining.com/users/login.php
      • Use a unique “Guest ID” rather than names or initials on all surveys. These ID numbers can be linked with a person’s name in a separate, password-protected file that only key evaluation staff can access
      • If using peer interviewers, work to minimize conflict of interest and keep interviewer and staff roles separate
      • If staff are participating in data collection, do not discuss a guest’s participation in the evaluation or responses to survey questions with other staff
      • After data have been entered into a spreadsheet or organized into a file, check to make sure no identifying information has been retained. This is particularly important if you’re outsourcing your analysis to someone outside the program
      • When reporting results, take care that individual guests cannot be identified based on responses



  • Acknowledgements

    The Guidebook for Peer Respite Self-Evaluation: Practical Steps and Tools was created by Laysha Ostrow, PhD of Live & Learn, Inc. and Bevin Croft, PhD of Human Services Research Institute.

    Contact: Laysha at laysha@livelearninc.net or Bevin at bcroft@hsri.org

    We would like to thank everyone who contributed to the creation of this guidebook, including Carina Smith of Live & Learn, Inc. and Tori Morrison of California Polytechnic State University, San Luis Obispo, and the reviewers:

    • Faith Boersma, State of Wisconsin Department of Health Services
    • Michael Lane, Consumers Self Help Center
    • Sera Davidow, Western Massachusetts Recovery Learning Community
    • Adrian Camp, 2nd Story Peer Respite
    • Elizabeth Siantz, University of California, San Diego
    • Yana Jacobs, Foundation for Excellence in Mental Health Care
    • Danny van Leeuwen, Health Hats
    • Ben Cichocki, Human Services Research Institute
    • Virginia Mulkern, Human Services Research Institute

    This project was funded in part under a contract with the National Empowerment Center.