Program evaluation in social work: Reliability, validity, and sensitivity

Contents

  1. Introduction
  2. Reliability
  3. Validity
  4. Sensitivity
  5. Steps in the program evaluation framework
  6. Evaluating collective impact
  7. Planning a program evaluation

Introduction

It is critical to ensure that the instruments used in program evaluation (for example, tests and questionnaires) are as reliable, valid, and sensitive as possible.
According to Rossi et al. (2004), "a measure that is poorly chosen or poorly conceived can completely undermine the worth of an impact assessment by producing misleading estimates. Only if outcome measures are valid, reliable and appropriately sensitive can impact assessments be regarded as credible."

Reliability

The reliability of a measurement instrument is the "extent to which the measure produces the same results when used repeatedly to measure the same thing" (Rossi et al., 2004). The more reliable a measure, the greater its statistical power and the more credible its findings. An unreliable instrument can dilute and obscure a program's true effects, making it "appear less effective than it is" (Rossi et al., 2004). It is therefore critical to ensure that the measures used are as reliable as possible.
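
As a minimal illustration (not drawn from Rossi et al.), test-retest reliability is often estimated as the correlation between two administrations of the same instrument to the same people. The Python sketch below uses invented scores; all names and numbers are hypothetical.

    import numpy as np

    # Hypothetical scores from two administrations of the same
    # well-being questionnaire, given two weeks apart to the same clients.
    time_1 = np.array([22, 31, 18, 27, 35, 24, 29, 20, 33, 26])
    time_2 = np.array([24, 30, 17, 29, 34, 22, 30, 21, 31, 27])

    # Test-retest reliability: Pearson correlation between the two
    # administrations. Values near 1.0 indicate a stable (reliable) measure.
    reliability = np.corrcoef(time_1, time_2)[0, 1]
    print(f"Test-retest reliability: {reliability:.2f}")

By a commonly cited rule of thumb, a coefficient well below about 0.7 would suggest the instrument's scores fluctuate too much to detect program effects dependably.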

Validity

A measurement instrument's validity is "the extent to which it measures what it is intended to measure" (Rossi et al., 2004). Validity can be difficult to quantify: in general, an instrument is considered valid if the relevant stakeholders (for example, funders and program administrators) accept it as valid.
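
One common way to make this concrete is criterion validity: correlating a new instrument with an established measure that stakeholders already accept. The sketch below is a hypothetical illustration under that approach, not a procedure from Rossi et al.; all scores are invented.

    import numpy as np

    # Hypothetical data: scores from a new screening tool and, for the
    # same clients, scores from an established assessment that
    # stakeholders already accept as valid.
    new_tool = np.array([8, 14, 5, 19, 11, 16, 7, 13, 10, 18])
    established = np.array([9, 15, 4, 21, 10, 17, 8, 12, 11, 19])

    # Criterion (concurrent) validity: how strongly the new tool
    # tracks the accepted criterion measure.
    validity = np.corrcoef(new_tool, established)[0, 1]
    print(f"Criterion validity: {validity:.2f}")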

Sensitivity

The central question of an impact evaluation is whether the program is having an effect on the social problem it is intended to address; the measurement instrument must therefore be sensitive enough to detect these potential changes (Rossi et al., 2004). An instrument may be insensitive if it contains items measuring outcomes the program could not possibly affect, or if it was designed for individual use (for example, standardized psychological measures) rather than group use (Rossi et al., 2004). Such factors can introduce "noise" that masks any effect the program may have had.
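
To make the "noise" point concrete, the simulation below (invented numbers, not from Rossi et al.) gives a program a true effect of 0.5 standard deviations and shows how added measurement noise shrinks the observed standardized effect.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200

    # True outcomes: the program shifts scores by 0.5 standard deviations.
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.5, 1.0, n)

    def observed_effect(noise_sd):
        # Measurement error adds noise on top of the true scores, as when
        # an instrument includes items the program cannot affect.
        obs_c = control + rng.normal(0.0, noise_sd, n)
        obs_t = treated + rng.normal(0.0, noise_sd, n)
        pooled_sd = np.sqrt((obs_c.var(ddof=1) + obs_t.var(ddof=1)) / 2)
        return (obs_t.mean() - obs_c.mean()) / pooled_sd  # Cohen's d

    for noise in (0.0, 1.0, 2.0):
        print(f"noise sd = {noise:.1f} -> observed d = {observed_effect(noise):.2f}")

With no noise the observed effect sits near the true 0.5; as noise grows, the same program looks weaker, which is exactly the "appear less effective than it is" problem noted above.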

Only evaluations built on reliable, valid, and sensitive measures can be considered credible. Producing credible evaluations matters because their findings may have far-reaching consequences: a flawed evaluation that fails to show that a program is achieving its goal, when it is in fact bringing about positive change, could result in the program being denied funding.

Steps in the program evaluation framework

According to the Centers for Disease Control and Prevention (CDC), a complete program evaluation follows six steps: engage stakeholders; describe the program; focus the evaluation design; gather credible evidence; justify conclusions; and ensure use and share lessons learned. These steps can be arranged in a cycle to represent the ongoing nature of the evaluation process.

Evaluating collective impact

Though the processes described here suit most programs, highly complex, non-linear initiatives, such as those using the collective impact (CI) model, require a more dynamic evaluation approach. Collective impact is defined as "the commitment of a group of important actors from various sectors to a common agenda for addressing a specific social problem", and it usually progresses through three stages, each with a different recommended evaluation approach:
  • Early stages: CI participants are pondering possible strategies and formulating action plans. Uncertainty is prevalent.
    Developmental evaluation is recommended to help CI partners understand the context of the initiative and its evolution: "Developmental evaluation involves real time feedback about what is emerging in complex dynamic systems as innovators seek to bring about systems change."
  • Middle stage: CI partners put agreed-upon strategies into action. Some outcomes become more predictable.
    Recommended approach: formative evaluation to refine and improve progress, alongside continued developmental evaluation to explore new elements as they emerge: "careful monitoring of processes in order to respond to emergent properties and any unexpected outcomes."
  • Later stage: Activities are no longer in formation and have stabilized; experience now informs which activities are likely to be effective.
    Recommended approach: summative evaluation, which "uses both quantitative and qualitative methods in order to get a better understanding of what [the] project has achieved, and how or why this has occurred."

Planning a program evaluation

Planning a program evaluation has four parts: focusing the evaluation, collecting the information, using the information, and managing the evaluation. Planning entails thinking about the purpose of the evaluation, the questions it should answer, and what will be done with the information gathered. Consider the following critical questions (a simple way to record the answers is sketched after the list):
  • What am I going to evaluate? 
  • What is the purpose of this evaluation? 
  • Who will use this evaluation? 
  • How will they use it? 
  • What questions is this evaluation seeking to answer? 
  • What information do I need to answer the questions? 
  • When is the evaluation needed?
  • What resources do I need?
  • How will I collect the data I need? 
  • How will data be analyzed? 
  • What is my implementation timeline? 
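
As mentioned above, one lightweight way to capture the answers is a simple plan structure. The Python sketch below is a hypothetical template mirroring the questions; every key and value is an invented example, not a prescribed format.

    # A minimal, hypothetical evaluation-plan template mirroring the
    # planning questions above; every value is an invented example.
    evaluation_plan = {
        "evaluand": "After-school mentoring program",
        "purpose": "Assess whether mentoring improves school attendance",
        "users": ["Program funder", "Agency director"],
        "use": "Decide whether to renew funding for a second year",
        "questions": [
            "Did attendance improve for participating students?",
        ],
        "information_needed": ["Attendance records", "Student surveys"],
        "deadline": "End of the school year",
        "resources": ["Part-time evaluator", "Survey software"],
        "data_collection": "Administrative records plus pre/post surveys",
        "analysis": "Compare attendance before and after enrollment",
        "timeline": "Plan in September; collect October-May; report June",
    }

    # Print the plan as a simple checklist.
    for item, answer in evaluation_plan.items():
        print(f"{item}: {answer}")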
