Program evaluation in social work: Conducting an evaluation

 Contents

  1. Introduction
  2. Conducting an evaluation
  3. Assessing needs
  4. Assessing program theory
  5. Assessing implementation
  6. Assessing the impact (effectiveness)
  7. Assessing efficiency

Introduction

A systematic method for collecting, analysing, and using data to answer questions about projects, policies, and programmes, particularly about their effectiveness and efficiency, is known as programme evaluation. Stakeholders in both the public and private sectors frequently want to know if the programmes they fund, implement, vote for, receive, or object to are having the desired effect. While this is the primary focus of programme evaluation, other important factors to consider include how much the programme costs per participant, how it could be improved, whether the programme is worthwhile, whether there are better alternatives, whether there are unintended consequences, and whether the programme goals are appropriate and useful. Evaluators can assist in answering these questions, but the best way to do so is for evaluators and stakeholders to collaborate on the evaluation.

The evaluation process is often thought to be a relatively recent phenomenon, yet planned social evaluation has been documented as far back as 2200 BC. Evaluation became especially important in the United States in the 1960s, during the Great Society social programmes of the Kennedy and Johnson administrations: huge sums of money were poured into social programmes, but the results of these investments were largely unknown.

Both quantitative and qualitative social research methods can be used in programme evaluations. People who evaluate programmes have a variety of backgrounds, including sociology, psychology, economics, social work, and public policy. Some graduate schools offer specialised training in programme evaluation.

Conducting an evaluation

Program evaluation can take place at several stages during a program's lifespan. Each stage raises different questions for the evaluator and calls for different evaluation approaches. According to Rossi, Lipsey, and Freeman (2004), the following types of assessment may be appropriate at these various stages:
  • Assessment of the need for the program
  • Assessment of program design and logic/theory 
  • Assessment of how the program is being implemented (i.e., is it being implemented according to plan? Are the program's processes maximizing possible outcomes?) 
  • Assessment of the program's outcome or impact (i.e., what it has actually achieved) 
  • Assessment of the program's cost and efficiency 

Assessing needs

A needs assessment looks at the population that the programme is intended to serve to see if the need as envisioned in the programme exists in the population, if it is a problem, and, if so, how it should be addressed. This includes determining and diagnosing the actual problem that the programme is attempting to solve, as well as who or what is affected by the problem, how widespread the problem is, and what the problem's measurable effects are. For example, a programme evaluator for a housing programme aimed at reducing homelessness might want to know how many people are homeless in a given area and what their demographics are. Rossi, Lipsey, and Freeman (2004) warn against launching an intervention without first determining whether it is needed, as this could waste a lot of money if the need isn't there or is misconceived.

A programme evaluator's primary tasks unfold in several steps. First, construct a precise definition of the problem: evaluators must determine the problem or need. This is best accomplished by bringing together all possible stakeholders, such as the community affected by the potential problem, the agents and actors working to address and resolve the problem, and funders. Securing buy-in early in the process reduces the risk of later pushback, miscommunication, and incomplete data.

Second, determine the extent of the problem. After establishing its nature, evaluators must answer the questions of "where" and "how big": where the problem is located and how widespread it is. It is much easier to assert that a problem exists than to specify its location and extent. As Rossi, Lipsey, and Freeman (2004) note, identifying a few battered children may be enough evidence to persuade one that child abuse exists, but determining how many children are affected, and where the problem lies geographically and socially, requires knowledge of the characteristics of abused children and of perpetrators, and of the problem's distribution across the political authority in question.

This can be difficult given that child abuse is not a public behaviour, and estimates of the rates of private behaviour are usually impossible to obtain reliably due to factors such as unreported cases. To estimate incidence rates, evaluators would have to draw on data from multiple sources and employ a variety of methods. Two further questions then need to be answered: "how" and "what". To answer the "how" question, evaluators must determine how the need will be met: having identified the need and become acquainted with the community, they should conduct a performance analysis to establish whether the plan proposed in the programme will actually be able to meet the need. To answer the "what" question, evaluators must conduct a task analysis to determine the best way to perform, for example whether an organisation sets its own job performance standards or whether government regulations must be taken into account when performing the task.

Third, define and identify the intervention's target population, as well as the nature of that population's service needs. It's critical to understand the target population, which could include individuals, groups, or communities. The population is divided into three categories: population at risk, population in need, and population in demand.
  • Population at risk: people with a significant probability of developing the condition; e.g. the population at risk for birth control programs is women of child-bearing age.
  • Population in need: people with the condition that the program seeks to address; e.g. the population in need for a program that provides ARVs is people who are HIV positive.
  • Population in demand: the part of the population in need that acknowledges the need and is willing to take part in what the program has to offer; e.g. not all HIV-positive people will be willing to take ARVs.
Knowing what/who the target is will aid in establishing appropriate boundaries, allowing interventions to properly address the target population while remaining feasible to implement.

There are four steps in conducting a needs assessment:
  1. Perform a ‘gap’ analysis
    Evaluators need to compare the current situation to the desired or required situation. The difference, or gap, between the two situations helps identify the need and the purpose and aims of the program (a quantitative sketch follows this list).
  2. Identify priorities and importance
    In the first step above, evaluators will have identified a number of interventions that could potentially address the need, e.g. training and development or organization development. These must now be examined in light of their significance to the program's goals and constraints, considering factors such as cost-effectiveness (the program's budget and the cost/benefit ratio), executive pressure (whether top management expects a solution), and population (whether many key people are involved).
  3. Identify causes of performance problems and/or opportunities
    Once the needs have been prioritized, the next step is to identify specific problem areas within the need to be addressed, and to assess the skills of the people who will carry out the interventions.
  4.  Identify possible solutions and growth opportunities
    Compare the consequences of implementing each intervention with the consequences of not implementing it.
Needs analysis is hence a crucial step in evaluating programs, because the effectiveness of a program cannot be assessed unless we know what the problem was in the first place.
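
As a simple illustration of the ‘gap’ analysis in step 1, the difference between the current and desired situation can be quantified per indicator. The sketch below is a minimal Python example; the indicator names and values are hypothetical and for illustration only.

    # Minimal 'gap' analysis sketch: compare the current level of each
    # indicator with the desired (target) level.
    # All indicator names and values are hypothetical.
    current = {"households_housed_pct": 62.0, "shelter_use_pct": 95.0}
    desired = {"households_housed_pct": 85.0, "shelter_use_pct": 80.0}

    for indicator in current:
        gap = desired[indicator] - current[indicator]
        print(f"{indicator}: current={current[indicator]}, "
              f"target={desired[indicator]}, gap={gap:+.1f}")

A positive gap signals an unmet need, and the size of each gap is one input for the prioritisation in step 2.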

Assessing program theory

The programme theory, also known as a logic model, knowledge map, or impact pathway, is an implicit assumption built into the programme's design about how the programme's actions are supposed to achieve its goals. This 'logic model' is often assumed rather than stated explicitly by programme managers, so an evaluator will need to elicit from programme staff how the programme is supposed to achieve its goals and assess whether that logic is plausible. In an HIV prevention programme, for example, it may be assumed that educating people about HIV/AIDS transmission, risk, and safe sex practices will lead to safer sex. However, research in South Africa increasingly shows that, despite increased education and knowledge, people still do not always practise safe sex. As a result, the logic of a programme that relies on education to persuade people to use condoms may be flawed; this is why it is critical to read previous research in the field. Making this logic explicit can also reveal both positive and negative unintended or unexpected consequences of a programme. The programme theory drives the hypotheses to test in an impact evaluation. Developing a logic model can also help programme staff and stakeholders build a shared understanding of what the programme is supposed to do and how it is supposed to do it, which is often lacking (see Participatory impact pathways analysis). Of course, in trying to elicit the logic model, evaluators may discover that it is incompletely developed, internally contradictory, or (in the worst cases) essentially nonexistent. This significantly reduces the value of the evaluation, though it does not necessarily mean that the programme itself is ineffective.

Making a logic model is a great way to visualise key aspects of a programme, especially when preparing for an evaluation. An evaluator should construct the logic model with input from a variety of stakeholders. Logic models have five major components: resources or inputs, activities, outputs, short-term (and medium-term) outcomes, and long-term outcomes. Developing a logic model clarifies the problem, the resources and capacity currently being used to address it, and the measurable outcomes of the programme. Examining the various components of a programme in relation to the overall short- and long-term objectives reveals any potential misalignments. Creating an actual logic model is critical because it clarifies for all stakeholders the definition of the problem, the overarching goals, and the programme's capacity and outputs.
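
One lightweight way to make a logic model explicit is to record its five components as structured data that all stakeholders can review and amend. The sketch below is a minimal Python illustration; the entries, loosely following the housing example above, are hypothetical.

    # A logic model recorded as plain data, one entry per component.
    # All entries are hypothetical.
    logic_model = {
        "inputs": ["funding", "case workers", "housing stock"],
        "activities": ["intake interviews", "housing placements"],
        "outputs": ["households placed per month"],
        "short_term_outcomes": ["stable housing at 6 months"],
        "long_term_outcomes": ["reduced homelessness in the district"],
    }

    for component, items in logic_model.items():
        print(f"{component}: {', '.join(items)}")

Writing the model down in this form makes missing links between activities and outcomes easier to spot.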

Rossi, Lipsey & Freeman (2004) suggest four approaches and procedures that can be used to assess the program theory. These approaches are discussed below.
  • Assessment in relation to social needs
    This entails evaluating the programme theory by relating it to the needs of the program's intended target population. Even if the programme theory is well implemented, it will be rendered ineffective if it fails to address the needs of the target population.
  • Assessment of logic and plausibility
    This type of evaluation entails having a panel of experts examine the logic and plausibility of the assumptions and expectations built into the program's design. The review process is unstructured and open-ended so that specific programme design issues can be addressed. The following questions were suggested by Rutman (1980), Smith (1989), and Wholey (1994) to aid the review process.
    > Are the program goals and objectives well defined?
    > Are the program goals and objectives feasible?
    > Is the change process presumed in the program theory feasible?
    > Are the procedures for identifying members of the target population, delivering service to them, and sustaining that service through completion well defined and sufficient?
    > Are the constituent components, activities, and functions of the program well defined and sufficient?
    > Are the resources allocated to the program and its various activities adequate?
  • Assessment through comparison with research and practice
    This type of assessment entails gathering data from the research literature and from existing practice in order to test various components of the programme theory. The evaluator can then determine whether the theory is supported by research and by practical experience with similar concepts in other programmes.
  • Assessment via preliminary observation
    This method involves incorporating firsthand observations into the assessment process to ensure that the programme theory and the programme itself are in agreement. The observations can focus on the attainability of the outcomes, the circumstances of the target population, and the plausibility of the programme's activities and supporting resources.

These different forms of assessment of program theory can be conducted to ensure that the program theory is sound.

An additional technique for assessing a programme theory based on its structure was described by Wright and Wallis (2019). This method, known as integrative propositional analysis (IPA), builds on streams of research which found that theories with better structure (in addition to meaning and data) were more likely to work as expected. The first step in IPA is to identify the propositions (cause-and-effect statements) and create a visual diagram of them. The researcher then counts the concepts and their causal relationships (the circles and arrows in the diagram) to determine the breadth and depth of knowledge reflected in the theory's structure. Breadth is the number of concepts; this rests on the idea that real-world programmes have many interconnected parts, so a theory showing a larger number of concepts demonstrates a greater breadth of programme understanding. Depth is the percentage of concepts that are the result of more than one other concept; this rests on the notion that things in real-world programmes have multiple causes. A concept that results from more than one other concept is therefore better understood, and a theory with a higher percentage of better-understood concepts indicates that the programme as a whole is better understood.
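
The breadth and depth measures that IPA describes can be computed mechanically once the theory is written down as cause-and-effect links. The sketch below is a minimal Python illustration; the example edges are a hypothetical simplification of the HIV education theory discussed earlier.

    # IPA structure metrics, following Wright and Wallis (2019):
    # breadth = number of concepts in the theory; depth = share of
    # concepts caused by more than one other concept.
    # Each edge is a (cause, effect) proposition; the edges are hypothetical.
    edges = [
        ("education", "knowledge"),
        ("knowledge", "condom_use"),
        ("peer_norms", "condom_use"),
        ("condom_use", "hiv_incidence"),
    ]

    concepts = {concept for edge in edges for concept in edge}
    breadth = len(concepts)

    # Count incoming causal links per concept.
    in_degree = {concept: 0 for concept in concepts}
    for cause, effect in edges:
        in_degree[effect] += 1

    multi_caused = sum(1 for c in concepts if in_degree[c] > 1)
    depth = multi_caused / breadth

    print(f"breadth = {breadth} concepts")
    print(f"depth = {depth:.0%} of concepts have more than one cause")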

Assessing implementation

Process analysis evaluates how the programme is implemented rather than the theory of what the programme is supposed to do. This assessment determines whether the components identified as critical to the program's success are being implemented: whether target populations are reached, whether people receive the services they expect, and whether staff are properly trained. Process evaluation is a continuous activity that uses repeated measures to determine whether a programme is being implemented effectively; it is widely used in public health research, for example. Implementation is a particularly serious issue because many innovations, particularly in areas like education and public policy, consist of relatively complex chains of action, many elements of which rely on the correct implementation of earlier elements and will fail if that prior implementation was done incorrectly. Gene V. Glass and others demonstrated this conclusively during the 1980s. Because incorrect or ineffective implementation produces the same neutral or negative outcomes as the correct implementation of a poor innovation, it is critical that evaluation research assess the implementation process itself; otherwise a good innovative idea could be mischaracterized as ineffective when it was simply never implemented as intended.
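
Because process evaluation relies on repeated measures, one common summary is an implementation fidelity score: the share of planned components actually observed as delivered at each measurement point. The sketch below is a minimal Python illustration; the components and observations are hypothetical.

    # Repeated fidelity measurement: each visit records which planned
    # components were delivered. Components and data are hypothetical.
    planned = ["staff_trained", "target_group_reached", "service_delivered"]
    visits = [
        {"staff_trained": True, "target_group_reached": False,
         "service_delivered": True},
        {"staff_trained": True, "target_group_reached": True,
         "service_delivered": True},
    ]

    for number, visit in enumerate(visits, start=1):
        delivered = sum(visit[component] for component in planned)
        print(f"visit {number}: fidelity = {delivered / len(planned):.0%}")

A fidelity score that stays low flags an implementation failure before any outcome data are interpreted.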

Assessing the impact (effectiveness)

The impact evaluation determines the causal effects of the program: it seeks to measure whether the program has achieved its intended outcomes.

Program outcomes 

The state of the target population or the social conditions that a programme is expected to improve is referred to as an outcome. Programme outcomes are observed characteristics of the target population or of social conditions, not of the programme itself. Consequently, observing an outcome does not by itself mean that the programme's targets have actually changed, or that the programme caused them to change in any way.

Two kinds of outcome are distinguished, outcome level and outcome change, along with the related notion of program effect (illustrated in the sketch after this list):
  • Outcome level refers to the status of an outcome at some point in time. 
  • Outcome change refers to the difference between outcome levels at different points in time. 
  • Program effect refers to that portion of an outcome change that can be attributed uniquely to a program as opposed to the influence of some other factor.
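
The distinction between the three terms can be made concrete with a simple difference-in-differences calculation: outcome change is the before-after difference in the programme group, and the program effect subtracts the change a comparison group experienced anyway. The sketch below is a minimal Python illustration; all figures are hypothetical.

    # Outcome level, outcome change, and program effect, using
    # hypothetical employment rates (%) for a job programme.
    program_before, program_after = 40.0, 55.0                 # outcome levels
    comparison_before, comparison_after = 41.0, 46.0

    outcome_change = program_after - program_before            # 15.0
    background_change = comparison_after - comparison_before   # 5.0
    program_effect = outcome_change - background_change        # 10.0

    print(f"outcome level (after): {program_after}")
    print(f"outcome change: {outcome_change}")
    print(f"program effect: {program_effect}")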

Measuring program outcomes

Outcome measurement is the process of representing the circumstances that define the outcome using observable indicators that change systematically as those circumstances change or differ. A systematic way to assess the extent to which a programme has achieved its intended outcomes is outcome measurement. Mouton (2009) defines measuring a program's impact as "demonstrating or estimating the accumulated differentiated proximate and emergent effect, some of which may be unintended and thus unforeseen."

Outcome measurement serves to determine whether a programme is effective and to deepen understanding of how it works. The most important reason for making the effort is to learn how the work affects the people being served. With the data gathered, it can be determined which activities should be continued and expanded and which should be changed in order to improve the program's effectiveness.

This may entail employing advanced statistical techniques to assess the program's impact and determine a causal relationship between the programme and various outcomes.
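
As one example of such a technique, a two-sample t-test can check whether outcomes differ between participants and a comparison group; attributing the difference to the programme additionally requires a credible design, such as random assignment. The sketch below is minimal, uses the scipy library, and relies on hypothetical scores.

    # Two-sample t-test on a hypothetical outcome score for programme
    # participants versus a comparison group.
    from scipy import stats

    participants = [68, 72, 75, 70, 74, 71, 69, 73]
    comparison = [64, 66, 65, 69, 63, 67, 66, 64]

    t_stat, p_value = stats.ttest_ind(participants, comparison)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")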

Assessing efficiency

Finally, a cost-benefit or cost-efficiency analysis assesses a program's efficiency. Evaluators lay out the program's benefits and costs for comparison; an efficient programme has a lower cost-to-benefit ratio. Two types of efficiency are distinguished: static efficiency concerns achieving the program's goals at the lowest possible cost, while dynamic efficiency concerns continuous improvement over time.
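
The comparison of costs and benefits reduces to simple arithmetic once both are expressed in money: cost per participant, and the ratio of total cost to monetised benefits (lower is more efficient). The sketch below is a minimal Python illustration; all figures are hypothetical.

    # Cost-efficiency arithmetic with hypothetical figures.
    total_cost = 500_000.00          # total programme cost
    participants_served = 1_000
    monetised_benefits = 750_000.00  # estimated value of outcomes

    cost_per_participant = total_cost / participants_served
    cost_benefit_ratio = total_cost / monetised_benefits

    print(f"cost per participant: {cost_per_participant:.2f}")
    print(f"cost-benefit ratio: {cost_benefit_ratio:.2f}")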
