Copyright © 2013 by Jackson de Carvalho. 136031-DECA ISBN: Softcover 978-1-4836-4052-5 Hardcover 978-1-4836-4053-2 Ebook 978-1-4836-4054-9
All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the copyright owner.
This is a work of fiction. Names, characters, places and incidents either are the product of the author’s imagination or are used fictitiously, and any resemblance to any actual persons, living or dead, events, or locales is entirely coincidental.
Rev. date: 07/02/2013
To order additional copies of this book, contact: Xlibris Corporation 1-888-795-4274 www.Xlibris.com
[email protected]
PROGRAM LOGIC FOR THE TWENTY FIRST CENTURY
A DEFINITIVE GUIDE
It is easier to arrive when you know where you’re going and how to get there.
JACKSON DE CARVALHO, PH.D.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
BRIEF OVERVIEW OF CHAPTERS
PREFACE
CHAPTER I
Figure 1: United Way Generalized Model for Program Development
Definition of Key Terms
CHAPTER II
A Brief History of Logic Model Program Framework
A Brief Historical Overview of Program Theory
Figure 2: Chen and Rossi’s Generalized Model for Program Development and Evaluation
Figure 3: Early logic model diagram
Table 1: Theory Approach Logic Model
Outcomes Approach Logic Model
Table 2: Outcomes Approach Logic Model
Activities Approach Logic Model
Table 3: Activities Approach Logic Model
Advantages of Using Logic Models
CHAPTER III
Stakeholder Involvement
Broad Definition of Stakeholders
Narrow Definitions of Stakeholders
Primary and Secondary Stakeholders
History of the Stakeholder Concept
Typical Key Stakeholders of a Program
Figure 4: Who are the Stakeholders
Table 4: Identifying Key Stakeholders
Table 5: What Matters to Stakeholders
The Firm’s Mission Statement and Stakeholders’ Interests
CHAPTER IV
Evaluation
Benefits of program evaluation
Brief Historical Overview of Evaluation
Evaluation types
Planning Evaluation
Formative Evaluation
Summative evaluations
Predictive Evaluation
Figure 5: Predictive Evaluation Points
Table 6: A Taxonomy of Major Evaluation Types
Figure 6: The CAI Design Model (Hannafin and Peck Design Model)
Figure 7: The Dick and Carey System Approach Model
Table 7: Pre-service and in-service teachers’ responses to hearing the word evaluation
Using Structural Equation Modeling for Program Evaluation
Structural Equation Evaluation Questions
Establishment of Evaluation Plan
Table 8: Evaluation Plan
Figure 8: The Evaluation System Approach Model
Table 9: Checklist for Feasibility and Quality of Evaluation Plan
CHAPTER V
Gathering Archival Data and Organizing Information
Program modeling decisions
Table 10: Checklist for Evaluation Plan
Welcome and Introductions
Stakeholder’s role clarification
Brief Background on program models
Group boundaries and expectations
Begin Developing the Program Logic Model
1. PROGRAM
2. SITUATION AND RATIONALE
Table 11: Competitors and Their Strengths.
3. INPUTS
Figure 9: Organizational Chart
Table 12: Program Costs.
Table 13: Program Revenue.
Table 14: Program Expenses.
Table 15: Readiness For Program Development and Implementation.
4. ACTIVITIES
5. OUTPUTS
6. OUTCOMES
Table 16: Readiness For Program Design
7. REFERENCES
Appendix A
Figure 10: Logic Model (example)
Example of a Logic Model & Narrative
Appendix B
Table 17: Work Plan Matrix.
Table 18: Complete Work Plan Matrix
Appendix C
Client Story
ACKNOWLEDGEMENTS
Although this book represents the culmination of twenty years of program development experience, it could not have been completed without the expertise of many other professionals. I cannot individually thank all those who helped in the preparation of this publication; that list would be long, and they already know our deep gratitude. I am extremely grateful for the many hours and the great diversity of ideas contributed by these individuals. The author would like to both acknowledge and thank the many program designers, teachers, and evaluators who assisted in the development of this publication by sharing their experiences, expertise, and resource materials.
BRIEF OVERVIEW OF CHAPTERS
CHAPTER I - explains the logic model as a program framework, describes the historical framework of logic models, and makes suggestions about how and when to use it. In addition, this chapter defines a few key terms and concepts used throughout the document.
CHAPTER II - describes a brief history of the logic model program framework while exposing the reader to some of the first logic models based on theoretical frameworks, which paved the way to data-driven program development. Additionally, this chapter includes a brief historical overview of program theory for the purpose of elucidating the role of theories in the development of programs.
CHAPTER III - focuses on the clarification of the stakeholder concept including a discussion on the history, categories and identification of key stakeholders for a program. This chapter also includes worksheets to facilitate the identification and recruitment of stakeholders.
CHAPTER IV - reviews and describes the program evaluation process within the parameters of the logic model. There is an exercise at the end of this chapter that organizations can use to assess the extent to which they have incorporated the logic model guiding principles and elements into their programs.
CHAPTER V - describes a step-by-step program development process following the logic model guiding principles and elements. Diagrams of actual projects illustrate the integration of logic model components at each stage of project development and demonstrate the link between key elements of a program-integrated approach and project actions.
PREFACE
The increasing scarcity of global resources is forcing organizations to demonstrate greater effectiveness and efficiency in their program activities by conducting outcome-oriented evaluation of projects. The prevailing thought among most program evaluators is that designing projects within a logical framework that targets the inclusion of outcome measures facilitates the evaluation process and helps to ensure ultimate success. The logic model framework accomplishes this by requiring program designers to include evaluable measures that allow decision makers to notice potential problems sooner. As stated in the Kellogg Foundation (2004): “As you implement your program, outcome measures enhance program success by assessing your progress from the beginning and all along the way. That makes it possible to notice problems early on. The elements (Outputs, Outcomes, and Impact) that comprise your intended results give you an outline of what is most important to monitor and gauge to determine the effectiveness of your program. You can correct and revise based on your interpretation of the collected data” (p. 16).
Subsequently, this guide was developed to provide practical assistance to organizations engaging in this process. In the pages of this guide, we hope to give staff of nonprofits and community members alike sufficient orientation to the underlying principles of “logic modeling” to use this tool to enhance their program planning, implementation, and dissemination activities.
This workbook was developed for the purpose of elucidating what is fast becoming the most popular framework for program design and evaluation and to serve as a comprehensive guide to the do-it-yourself program development approach. It describes the steps necessary for you to create logic models and an evaluation matrix for your own programs. This process may take anywhere from a few hours to several days, depending on the complexity of the program being developed. At any rate, we hope this workbook represents a simple and practical tutorial and that you may use it in the way that works best for you:
As a stand-alone guide in the design of logic models for program and grant development
As a basic resource for program evaluation
As a supplement to a logic model training
Comments on this workbook and suggestions for strengthening it are welcome, and should be addressed to Jackson de Carvalho, Ph.D.:
[email protected]
CHAPTER I
A logic model is a commonly used tool, often expressed in the form of diagrams or visual schematics, that has the power to communicate the goals of a project clearly and comprehensibly in a single framework or matrix. According to Greenfield, Williams and Eiseman (2006), “A logic model typically offers a simplified visual representation of the path of a program’s operations, starting with inputs and then progressing to the program’s activities, its outputs, its customers, and its intended outcomes. The model may also link the program’s operations, what the program actually does either alone or with others to fulfill its mission, to its strategy, which we define as the goals, management objectives, and performance measures that support the program’s mission. Operations include resources, actors, and events, whereas strategy speaks of intentions” (p. 21).
Sometimes, logic models are described as a logical framework, theory of change, or program matrix, but the purpose is usually the same: to graphically depict policies, programs, projects, or even the sum total of an organization’s intended results. The power of program modeling is highlighted by its ability to incorporate all the needs and views of the actors involved in the project and the idiosyncrasies of its environment. Additionally, a program framework serves as a foundation for program planning, implementation, evaluation, adjustment, and replication of the model that best fits a specific data set or context.
Furthermore, logic models elucidate the main features of a project, beginning with the design and identification (what is the problem?), the definition (what should we do?), the planning (how do we do it?), and the execution and monitoring (are we doing well?), through to the evaluation (what have we achieved?). In other words, a logic model is a systematic way to express understanding of the contextual relationships between the resources available to operate a program, the activities it plans to carry out, and the changes or expected results. Programs designed in alignment with the logic model describe systematic and consistent linkages
between factors, programmatic inputs, outputs and outcomes as shown in the following diagram:
Figure 1: United Way Generalized Model for Program Development
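The chain the diagram depicts can also be sketched as a simple data structure, which some readers may find easier to manipulate than a picture. The following is a minimal illustration in Python; the program name and every entry are hypothetical examples, not prescribed content:

# A minimal sketch of a logic model as a data structure (hypothetical entries).
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    program: str
    inputs: List[str] = field(default_factory=list)      # resources invested
    activities: List[str] = field(default_factory=list)  # what the program does
    outputs: List[str] = field(default_factory=list)     # direct products
    outcomes: List[str] = field(default_factory=list)    # intended changes

    def chain(self) -> str:
        # Render the left-to-right sequence that links investments to results.
        parts = [
            "INPUTS: " + "; ".join(self.inputs),
            "ACTIVITIES: " + "; ".join(self.activities),
            "OUTPUTS: " + "; ".join(self.outputs),
            "OUTCOMES: " + "; ".join(self.outcomes),
        ]
        return " -> ".join(parts)

model = LogicModel(
    program="Parent Education Program",  # hypothetical program
    inputs=["two staff members", "grant funding", "meeting space"],
    activities=["parent workshops", "home visits"],
    outputs=["12 workshops held", "150 parents reached"],
    outcomes=["parents report improved knowledge of child development"],
)
print(model.chain())

Reading the printed chain from left to right mirrors reading a logic model diagram from inputs to outcomes.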
It is noteworthy that logic models can come in all shapes and sizes: boxes with connecting lines that are read from left to right (or top to bottom); circular loops with arrows going in or out; or other visual metaphors and devices constructed to look like virtually any kind of schematic. Often, these different hypothesized diagrams are shown in the form of links in a chain of reasoning describing the components of a process in relationship to the desired outcome (Trochim, 2006). An extensive review of the relevant literature showed that a 2004 publication from the W.K. Kellogg Foundation entitled Logic Model Development Guide is one of the most comprehensive and clear sources of information related to program development. The Kellogg Foundation (2004) guide is essentially an examination of logic models useful to corporations, with a particular focus on models utilized by foundations.
The Kellogg Logic Model Development Guide suggests that, before beginning the actual design of a program using the logic model approach, it is best to first develop an outcome statement and subsequently allow it to drive the whole program design process. The rationale for first identifying the desired results of a program is further elucidated by the following quotation: “‘Do the outcomes first’ is sage advice. Most logic models lack specific short- and long-term outcomes that predict what will be achieved several years down the road. Specifying program milestones as you design the program builds in ways to gather the data required and allows you to periodically assess the program’s progress toward the goals you identify” (Kellogg Foundation, 2004, p. 16). Furthermore, it makes a considerable difference if the outcomes are developed before planning the activities, as this clarifies the steps in the process toward the program’s outcome. In other words, plan backward, implement forward (Schray, 2006).
Definition of Key Terms
Most of the definitions selected for inclusion here are taken from Scriven (1991). Some of the definitions incorporate Scriven’s comments and additional thoughts
he may have chosen to include. The page number indicating the beginning of the discussion of each word or phrase is given in parentheses immediately following the word. Any comments or parentheses he has used are included as he made them. Words are highlighted, italicized, bolded, or capitalized as he presented them.
Logic - The relationship between elements and between an element and the whole in a set of objects, individuals, principles, or events.
Model - A schematic description of a system, theory, or phenomenon that accounts for its known or inferred properties and may be used for further study of its characteristics.
Activities – The tools, processes, events and actions that are undertaken to achieve the intended outcomes of the project, e.g. parent workshops, logic model training, and dental screening.
Assumptions/Guiding principles/Values – The values and biases that influence the focus of the work and/or the way in which the work is done, e.g. parents are a child’s first and best teacher, diversity should be celebrated.
Baseline - Data about the condition or performance prior to the intervention (e.g., statewide percentage of low birth weight; number of vandalism reports; standardized test scores; school dropout rate; number of accidents involving teen drivers; recidivism rate; water quality).
Benchmarks - Performance data used either as a baseline against which to compare future performance or as a marker to assess progress towards a goal (e.g., periodic behavior checklist ratings; amount of debt; academic grades).
Contextual factors – Factors external to the project which may affect its effectiveness, and the intended outputs and outcomes, e.g. state budget, local policies, and skill base of workforce.
Data sources – The resource from which the indicators will be tracked, e.g. parent surveys, databases, standardized tests.
Goals – A goal is a statement of what the program intends to accomplish and must be operationalized by specific, time-framed objectives, often stated as overt behaviors. Goals are strength-based and usually require the efforts of more than one program or agency, e.g. all children are healthy, seniors live independently, and communities are supportive and self-sufficient.
Indicators – Quantifiable proxies of the intended changes, which are sometimes referred to as measures, e.g. number/percentage of children with health insurance, percentage of participants who demonstrate improved knowledge.
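To make the interplay among baseline, benchmark, and indicator concrete, the following is a minimal sketch in Python; every name and figure in it is a hypothetical illustration:

# Computing an indicator and comparing it to a baseline and a benchmark.
# All data are hypothetical.
participants = [
    {"name": "A", "has_insurance": True},
    {"name": "B", "has_insurance": False},
    {"name": "C", "has_insurance": True},
    {"name": "D", "has_insurance": True},
]

baseline_pct = 50.0   # hypothetical pre-program percentage of insured children
benchmark_pct = 70.0  # hypothetical marker used to assess progress

# Indicator: percentage of children with health insurance after the program.
insured = sum(1 for p in participants if p["has_insurance"])
indicator_pct = 100.0 * insured / len(participants)

print(f"Indicator: {indicator_pct:.1f}% "
      f"(baseline {baseline_pct:.0f}%, benchmark {benchmark_pct:.0f}%)")
print("On track toward benchmark" if indicator_pct >= benchmark_pct
      else "Below benchmark")

Here the indicator (75 percent) exceeds the benchmark, so the program would be judged on track for this measure.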
Inputs – Human, financial, organizational, and community resources available to direct toward doing the work, e.g. staffing, in-kind volunteers and revenues.
Objectives – Measurable steps demonstrating progress towards expected change/outcomes.
Outputs – Direct products of program activities, including types, levels, and targets of services, e.g. number of trainings, number of people reached through workshops, client caseload, community members attending an event.
Outcomes – Specific changes resulting from the project. They can be at the individual, organizational, community and system level, e.g. X community is safe, parents have knowledge of and access to community resources. Types of outcomes:
Short Term (1-3 years)
Interim (4-6 years)
Long Term Impact (7-10 years)
Problems/Issues – The specific reason why a particular effort was undertaken. Tends to be need based, e.g. increasing risky behavior among adolescents, poor birth outcomes, access to health care.
Strategies – Groups of activities which reflect a pattern of action and behavior, e.g. parent education and training, community engagement, and public education.
Stage Setting - Orientation, expectations and background information
Visioning - Start logic modeling process with looking at the program’s intended results
Scanning - Continue logic modeling process with looking at the program’s inputs and outputs
Modeling - Putting together the work of the previous two stages into a logic model
Verifying - Testing the model that has been developed
Wrapping up - Developing an action plan for using the logic model
Evaluation - The systematic collection and analysis of information to determine the worth of a curriculum, program or activity.
Evaluation Anxiety - Anxiety provoked by the prospect, imagined possibility, or occurrence of an evaluation.
General Positive Bias - There is a strong General Positive Bias (GPB) across all evaluation fields—a tendency to turn in more favorable results than are justified. GPB is pervasive in program evaluation mainly because of role conflict. The evaluator is a staff member, a contractor, or a consultant, and in that role knows that his or her own chance of future employment or contracts usually depends on or is enhanced by giving a favorable report on the program. GPB can only be controlled by methods explicitly aimed at it; for example, by developing and enforcing strict standards for evaluation, by regular use of meta-evaluation, by explicitly rewarding justified criticism, by taking action against supervisors who exhibit or tolerate GPB, by improving professional training and raising the consciousness of professionals in other ways, and by setting up independent evaluation units.
Logic of Evaluation - The key function of evaluative inference is moving validly to evaluative conclusions from factual (and of course definitional) premises; so the key task of the logic of evaluation is to show how this can be justified. Doing this is a task that was and still is thought to be impossible by most logicians and scientists—social scientists in particular (Trochim, 1985; TenBrink, 1974).
Psychology of Evaluation - A little-explored domain which naturally divides into four parts—the psychology of the evaluator, the evaluee, the client, and the audiences for the evaluation. … Evaluation is a risky business—for the evaluator as well as the evaluee—and the causes of this are largely psychological. Evaluation threatens us where we live by raising the possibility of criticism of ourselves—or of our work, which we often see as an extension of ourselves—and, more mundanely, it may raise a threat to our job. Those possibilities are enough to raise anxiety in entirely sensible people. The immature or unbalanced individual or the pseudo-professional, on the other hand, reacts with an inappropriate level of anxiety, fear, hostility, and anger, often leading to incapacitating affect, unprofessional countermeasures, bizarre rationalizations like the doctrine of value-free science, or self-serving policies of incestuous evaluation. On the other side, of course, doing evaluation may represent an unhealthy lust for power rather than just the search for knowledge or the desire to provide a service to consumers and future consumers, service providers, citizens, and other legitimate audiences (Scriven, 1973).
CHAPTER II
A Brief History of Logic Model Program Framework
The literature does not reveal a clear chronological timeline of the development and application of logic models, although several program frameworks with different names began to emerge approximately fifty years ago. The logic model concept itself is probably not a new one, as there have always been proponents of using program models—whether or not referred to as logic models—in the planning and development of plans and programs (Bickman, 1987, 2000).
Furthermore, practitioners began using the Logical Framework Approach as early as the 1960s in an effort to improve program design, implementation, and evaluation while targeting international projects. The World Bank was one of the first institutions to espouse program evaluation and noted that “the logical framework is a systematic approach for conceptualizing programs and an analytic tool that has the ability to convey the idea of a complex project clearly in a very succinct manner.” The World Bank stated that logical frameworks helped project designers and stakeholders:
• Set proper objectives.
• Define indicators of success.
• Identify key activity clusters (project components).
• Define critical assumptions on which the program is based.
• Identify means of verifying project accomplishments.
• Define resources required for implementation (Operations Policy Department, 1996, p. 5).
The concept of the logical framework adopted the name “logic model” as it began to be used as a framework to direct the development and evaluation of health and social welfare programs in the 1980s. The increasing acceptance of logic models in the design and evaluation of programs was exponentially amplified in 1996, when the United Way of America published Measuring Program Outcomes: A Practical Approach. The book helped to elucidate the main components of logic models (inputs, activities, outputs, and outcomes) as the sequence of events that links program investments to results (McLaughlin & Jordan, 1999).
More recent models may include a research component column, although research findings are not used consistently or effectively. Logic models spread through the United Way network, and in spite of their limitations, they helped to create a program framework culture among United Way centers, members, and constituents. Currently, the majority of United Way subsidized programs are designed and evaluated within the parameters of logic models. Every year the United Way allocates billions of dollars to funded agencies that successfully report program outcomes based on logic models (Bennett & Rockwell, 1995).
Furthermore, the W. K. Kellogg Foundation reinforced the intrinsic worth of logic models as a viable framework for program development with a publication entitled the Logic Model Development Guide. In this publication they looked at logic models that were useful to for-profit and non-profit entities. The Kellogg Foundation supported the activities and outcomes approaches to logic models that appeared in the United Way of America publication. However, they further expanded the rationale aspect of the logic model to include problem or issue, community needs/assets, and desired results (outputs, outcomes, and impact) in the program planning model. When the real work of creating a logic model to frame the evaluation questions was completed, the logic model created by the Kellogg Foundation looked very much like the United Way of America model.
It is noteworthy that the designs of the World Bank, United Way, and Kellogg Foundation models do not focus on the connections between the program science, the program operations, and the expected outcomes. In light of the growing pressure toward evidence-based accountability, it is paramount for logic models to demonstrate an evidence-based connection between a program’s applied factors, systematically arranged, and their impact on outcomes (O’Sullivan, 2004).
A Brief Historical Overview of Program Theory
Theory-driven program development and evaluation first appeared in the professional literature around four decades ago. In the late 1960s, Suchman was one of the first to refer to “program theories” and introduced the idea of using a “chain of objectives” to look at a program’s outcomes. According to Suchman (1967), there were two main reasons for an unsuccessful program: either failure to implement the intended activities or failure of the activities to produce the intended outcomes. Suchman paved the way for the explicit insertion of program theories into the systematic design, implementation, and evaluation of projects. In fact, Bickman (2000) called Suchman the “pioneer” of program theory who started the ball rolling by suggesting the development and application of theories in the development and evaluation of projects. Many other scholars gave continuity to the work and ideas of Suchman in an effort to clarify and expand underlying assumptions or program theories to describe, explain, and predict outcomes.
Another early effort at making explicit the underlying assumptions (the program theory) in program design and outcome evaluations was made by Glaser and Strauss in their book on the discovery of “grounded theory” (1967). Moreover, in 1972, Weiss recommended using program theories to guide program evaluation processes. Weiss went on to explain that a scientific evaluation of a program designed within the parameters of a logic model could exponentially facilitate the identification of factors responsible for outcomes. The identification of such factors can greatly increase the ability to make adjustments to program goals/objectives and activities to maximize results (Weiss, 1972, 2000).
A few years later, Fitz-Gibbon and Morris (1975) introduced a groundbreaking description of a theory-based program framework while emphasizing the need to base project evaluation on explicit theory, which “attempts to explain how the program produces the desired effects” (p. 1). Wholey (1979) added to this revolutionary notion of inserting explicit and elucidating assumptions as an important component of program models by introducing the idea of evaluability assessment. Evaluability assessment is a process, prior to commencing an evaluation, in which a program’s goals, objectives, activities, and expected outputs are examined to determine their readiness for evaluation.
Furthermore, theory-based program design and evaluation became standardized in the nineteen-eighties, with prominent scholars advocating the merits of theoretical modeling and an evidence-based practice approach (Wholey, 1987). This strong advocacy for theoretical modeling generated the first conceptualization of a “program model,” which was developed in 1983 by Chen and Rossi, as shown below in Figure 2.
Figure 2: Chen and Rossi’s Generalized Model for Program Development and Evaluation
This model was obviously very complex, but it signified the first step in hypothesizing the elements of a program in diagram form. Since the 1990s, great strides have been made in the development of statistical analysis software (LISREL, EQS, Amos, Mplus) aiming to test hypothesized relationships among latent variables and their respective constructs in diagrammed models. Greater advancements, however, remain to be made in the area of scientifically constructed project models leading to appropriate goodness-of-fit indices that improve models on the basis of theoretical justification. Thus, the inclusion of a theoretical framework as the infrastructural basis in the development of logic models is paramount. When the theoretical framework of a program is well established and clarified, it becomes much easier for evaluators using sophisticated statistical analysis software (SPSS, Amos, LISREL, EQS, etc.) to re-specify the model, which may lead to adjustment and improvement of the model, allowing identification of the strengths and limitations of a program.
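As an illustration of how such software can test the hypothesized chain of a logic model, the following minimal sketch fits a simple path model (inputs driving activities, activities driving outcomes) on simulated data. It assumes the open-source Python package semopy, whose model syntax follows lavaan-style conventions; the variable names, coefficients, and data are all hypothetical:

# A minimal path-model sketch, assuming the semopy package is installed.
import numpy as np
import pandas as pd
import semopy

# Simulate hypothetical program data: inputs -> activities -> outcomes.
rng = np.random.default_rng(0)
n = 500
inputs = rng.normal(size=n)
activities = 0.6 * inputs + rng.normal(scale=0.8, size=n)
outcomes = 0.5 * activities + rng.normal(scale=0.8, size=n)
data = pd.DataFrame({"inputs": inputs,
                     "activities": activities,
                     "outcomes": outcomes})

# Path model mirroring the hypothesized logic model chain.
desc = """
activities ~ inputs
outcomes ~ activities
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # estimated path coefficients
print(semopy.calc_stats(model))  # goodness-of-fit indices (CFI, RMSEA, etc.)

If the fit indices are poor, the evaluator would re-specify the model (for example, by adding a direct path from inputs to outcomes) and refit, which is the adjustment cycle described above.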
At the early stages of the logic model’s inception in the for-profit and nonprofit worlds, a significant progression of data-driven program development was forged, moving from planning to data collection, analysis, and action/improvement, as shown below (Wholey, 1987). Figure 3 depicts one of the first logic models for program development.
Figure 3: Early logic model diagram
A well-designed and well-explained logic model is the groundwork of program planning leading to an effective evaluation process. As strategic decisions are informed by evaluation findings, stakeholders and staff often find themselves in an ongoing cycle of program planning and adjustment. All program adjustments, however, should be framed within the theoretical framework selected for the project.
Program theory and logic models have revolutionized the field of program development, and they are now widely adopted. Most funders—whether foundations, other non-profits, or governments—now require the inclusion of program theory in grant applications. The United Way of America and the W.K. Kellogg Foundation have invested a significant amount of resources—money, staff, and effort—to equip grantees with the basic skills to develop programs using logic models and theoretical frameworks. A theory is a collection of tested hypotheses, and a hypothesis is an educated guess offered as an answer to a scientific question. In essence, a program is a hypothesis, as it attempts to answer the needs of a community. Program theory is simply a conceptualization of the hypothesized approach that underlies a program (Rockwell & Bennett, 2004).
For the sake of clarification, a logic model is not a theory. A model is a graphical representation of a concept, and a theory is a collection of interrelated hypotheses that have been tested and shown to have merit. Theories are used to describe, explain, and predict the relationships among variables in a model. Subsequently, the chosen underlying theory should drive the design, implementation, and evaluation of the program (Stake, 1976). The theory of a program often exists implicitly as part of the program design, but it is hardly ever formally stated. Since the underlying theory drives the model, it is essential to incorporate an explicit theoretical framework including:
• Name of the theory you chose
• Who developed the theory
• How the theory describes, explains, and predicts the relationship of the variables/components shown in the model.
The theory is the foundation that supports all components of a program, and it drives the activities related to outputs and outcomes in the proposed model. Ultimately, the role of a theory is to describe and explain the components in the model, leading to predictions of expected outcomes. In many cases, the program theory is developed by the program designer based on a review of the relevant literature and/or causal mechanisms. In an effort to make explicit the underlying assumptions (the theory) in program design, the Kellogg Foundation (2004) provided comprehensive descriptions of three approaches to logic models, which will be summarized here under the following subtopics: Theory Approach Models, Outcomes Approach Models, and Activities Approach Models (Spaulding, 2008).
Theory Approach Logic Model
This particular program model approach emphasizes the theory of change, which has influenced the design and plan of the framework introduced in this handbook. Emphasizing the underlying theory while demonstrating the key components of that theory helps to support the rationale for why the program exists. As illustrated in the following table, the main program resources and inputs would be listed, followed by a column listing the suggested program activities/strategies to be operationalized through the identified resource or activity. Each of these activities would then be linked directly to the problem/issue addressed by this step of the program. In listing the expected impact that each of the listed activities and resources should address, an implied theoretical explanation of how the program would work and why it should work would be provided. This theory approach is, according to the Kellogg Foundation, most useful during the planning and design phases of the program (Kellogg Foundation, 2001; Sharfman & Fernando, 2008). The following is a simplified depiction of the theory approach model:
Table 1: Theory Approach Logic Model
Theory/Assumptions/Reasons | Resources/Inputs | Activities/Solution Strategies
Outcomes Approach Logic Model
The outcome approach model is used during the early, initial planning phase of a program. While theoretically based assumptions are made, they are not the focus of the outcome approach program design. Instead, the outcome approach attempts to link the various resources and inputs available to the program with the corresponding activities. The activities are more directly associated with the expected, desired overall results or impact for the purpose of constructing an effective and workable program. Outcomes are the focus here, and since outcomes do not necessarily occur, and are not necessarily measurable, immediately at the conclusion of the activities, the outcomes in question are usually divided into short-term outcomes, long-term outcomes, and, ultimately, the desired impact, as depicted in Table 2.
Table 2: Outcomes Approach Logic Model
Assumptions | Resources/Inputs | Activities | Outputs/Issues | Short-Term Outcomes (1-3 years) | Long-Term Outcomes | Impact
As this model emphasizes the link between activities, resources and the expected results, it would be rather suitable for addressing future program evaluation.
Activities Approach Logic Model
The main focus of the activity approach model is on the implementation process. This model would include a very specific, detailed listing of the planned activities of the program. Again, assumptions are made, and resources and inputs are linked, but the focus here is to link the activities and resources with the detailed activities and steps necessary to implement the program. By detailing the activities while linking them with each corresponding implementation step, this model would be used to map the processes and success associated with implementing the program in an effective manner. Thus, the activity approach model is rather suitable for data driven management decisions that will take place during the program implementation process as depicted in Table 3.
Table 3: Activities Approach Logic Model
Assumptions | Resources/Inputs | Activities/Detailed Steps | Outputs/Program Implementation
Advantages of Using Logic Models
The benefits of using logic models for program development are well documented in the current literature. A PowerPoint presentation by Ellen Taylor-Powell in March 2005 explains some of the most compelling advantages of using logic models for program development. Taylor-Powell (2005) states that “logic models demonstrate accountability with focus on outcomes…Links activities to results: prevents mismatches…integrates planning, implementation, evaluation and reporting…creates understanding…promotes learning…it is a way of thinking—not just a pretty graphic”.
Another great advantage of using the logic model framework for program development resides in its ability to incorporate the basic characteristics of a project with a particular focus on the resources and activities necessary to maximize outcomes. The systematic and visual arrangement of elements in the model is carefully designed to elucidate the sequence of events that links program investments to results, with the ultimate purpose of measuring and identifying the main factors influencing results.
Moreover, logic models represent a means to frame the components of a project in a replicable, evaluable, and valid/reliable manner. Higher levels of evaluability of a project can lead to higher levels of efficiency in meeting the project’s goals and objectives. When the goals of a program are successfully met, the desire to replicate the program increases. When the relationships among all of the components of a project are clearly diagrammed, the diagram provides coherence, logic, and clarification of data availability. Subsequently, theoretical modeling increases evaluability and allows managers or stakeholders to use evaluation findings to efficiently direct program resources and efforts.
Logic modeling is advantageous to evaluators, as evaluation indicators are built into the program design, thus facilitating methodological options at various stages of the evaluation process. It allows program managers and evaluators to identify specific strengths and limitations of the program and indicates how to maximize outcomes. One of the greatest advantages of program modeling is that it provides “information crucial to successful adaptation of a program into new settings” (Hacsi, 2000, p. 72). Logic models can help identify key measures and help put the evaluation plan into the context of program implementation, essentially facilitating corrections toward improvement of outcomes. In other words, crucial outcomes can be identified in the logic model and then measured in the evaluation plan (McLaughlin & Jordan, 1999).
Logic model frameworks are, without question, essential during the planning, designing, and implementation phases of a new program. The framework becomes a road map for the program to travel at different phases of development. A summary of the main advantages of using a logic model includes the following:
• Builds understanding about what the program is, what it is expected to do, and what measures of success will be used
• Helps monitor progress
• Serves as an evaluation framework
• Helps reveal assumptions
• Helps keep program focused
• Promotes communication
• Strengthens case for program investment
• Develops a simple image that reflects how and why a program will work
• Reflects group process
Although all of these benefits add to the knowledge and practice of program modeling, the greatest advantage of using logic modeling for program development is that it forces good planning. Often, the point of failure of many projects is the lack of a measurable plan showing how activities can lead to outcomes. When a program design employs a logic model, the path that begins with the program activities and leads to outcomes is saturated with variables to be used in the evaluation plan (Chen, 1990).
CHAPTER III
Stakeholder Involvement
A program model should not be developed by a single person; it should draw on the knowledge, expertise, and organizational memory of stakeholders. There is a perplexing variation in the literature concerning who or what constitutes a stakeholder, and a wide variety of definitions have been presented over the years (Kochan & Rubinstein, 2000). Definitions of stakeholders tend to be based on the level of investment the person has in the company (i.e., shares), claims (i.e., assertions of a title or right to something), or power or influence over the process and outcomes of projects carried out by the company (Gibson, 2000).
The relevant literature regarding program modeling reveals that, over the years, the concept of stakeholders has been streamlined to fit two basic classifications. The first classification, which was sponsored by Freeman (1984), casts a broader net and includes all the individuals who could have a stake in the success of the firm. The second classification suggests a narrower definition of stakeholders on the basis of commitment to the firm’s core economic interests, as recommended by Friedman and Miles (2006). In other words, definitions of the term “stakeholder” range from the extremely broad and inclusive to the relatively narrow and exclusive.
Broad Definition of Stakeholders
A consistent pattern in the broad descriptions of stakeholders is the similarity stemming from the definition of Mitroff (1983), which states: “Stakeholders are all those parties who either affect or who are affected by a corporation’s actions, behavior, and policies.” Freeman (1984: 46), for example, describes a stakeholder as “any group or individual who can affect or is affected by the achievement of the organization’s objectives.” Alternatively, for Mellahi and Wood (2003: 183), stakeholders are “all that might be affected by the firm’s activities.”
Other broad and relevant definitions of stakeholders are presented by Gibson (2000: 246), who refers to stakeholders as “those groups or individuals with whom the organization interacts or has interdependencies” and “any individual or group with power to be a threat or benefit.” In addition, Mitroff (1983) recognizes the importance of power in the identification of a stakeholder, stressing that stakeholders are all those interest groups, parties, actors, claimants, and institutions – both internal and external to the corporation – that exert influence over it. Although Freeman’s definition of stakeholder is the most commonly used in the literature, all these definitions are vague and broad to the point of impracticability. When the identification of stakeholders hinges on characteristics such as “can affect” and “is affected by,” it may lead to a situation where nearly every entity is regarded as a stakeholder.
Narrow Definitions of Stakeholders
Narrow definitions of stakeholders, on the other hand, tend to be based on the level of importance the entity represents for the survival of the firm, or on the existence of a contractual relationship between the stakeholder and the company. Freeman and Reed (1983: 91) present a narrow definition of a stakeholder as a group “on which the organization is dependent for its continued survival.” Bowie (1988: 112) refers to stakeholders as those “without whose support the organization would cease to exist.” Nasi (1995: 19) states that stakeholders “interact with the firm and thus make its operation possible.” Cornell and Shapiro (1987: 5) refer to stakeholders as “claimants” who have “contracts” with the company. Clarkson (1995) defines stakeholders as those that “have, or claim, ownership, rights, or interest in a corporation and its activities.”
The broader definition of stakeholders is too vague and consequently does not provide a precise description of those who affect or can be affected by the program. On the other hand, the narrow definition provides concise criteria for the inclusion of stakeholders, which is desirable. Nevertheless, it tends to consider only those connected with the firm’s core interests, such as investors, employees, customers, and suppliers (Freeman, 2007). A handicapping limitation of the narrower definition is that it often excludes those committed to the social endeavors of the program. The solution is perhaps to find a middle ground where social endeavors and financial governance can be reconciled.
Primary and Secondary Stakeholders
Furthermore, the relevant literature distinguishes between “primary” and “secondary” stakeholders (Carroll, 1993; Clarkson, 1995; Freeman, 1984; McLarney, 2002). Primary stakeholders are defined in the literature as those whose continuing participation is necessary for the survival of the corporation, while all other stakeholders are secondary. Some scholars assert that the interests of both primary and secondary stakeholders should be considered for effective development, implementation, and evaluation of programs, since either type can have substantial effects on the well-being of the firm (Gibson, 2000; McLarney, 2002; Mellahi & Wood, 2003).
At any rate, stakeholders are people or organizations that are involved in or affected by the program. In addition, stakeholders are interested in the results of the program’s evaluation. The literal concept of stakeholders is still used to describe individuals who are linked to an organization based on the benefits the organization can bring them. The traditional definition of a stakeholder is any individual or group who can affect or is affected by the achievements of the organization.
History of the Stakeholder Concept
In the mid-1980s the stakeholder concept was popularized and soon became a theoretical and empirical movement responding to the drive for organizational growth. One of the highlights of this movement was R. Edward Freeman’s publication Strategic Management: A Stakeholder Approach, which came out in 1984; Freeman is generally credited with popularizing the stakeholder concept.
From a historical perspective, the concept of the stakeholder used to be known as “constituencies,” which places it in the ancient accounts of philosophical discourse about the character of man, the nature of society, governance, and strategic planning. The resurgence of the stakeholder concept and its application in an organizational context is attributed to the ground-breaking work done at the Stanford Research Institute in 1963, which achieved widespread popularity among academics in the 1960s (Freeman, 2007).
At any rate, many evaluators and researchers have called for higher levels of stakeholder involvement in program design and evaluation. Bowie (1988), Cornell and Shapiro (1987), Freeman and Reed (1983), and Trochim (1983) were some of the first to call for more participation of stakeholders in the design, implementation, and evaluation of programs, in an effort to provide a dynamic, systemic, and sustainable approach to the maximization of outcomes and organizational growth. It is conceivable that those who are well informed about a program’s strengths and limitations can make a significant contribution to improving outcomes. Efforts should be made to ensure their participation in the design, implementation, and evaluation of the project.
Furthermore, Wholey (1987) recommended the participation of stakeholders to possibly increase the effectiveness of the evaluation. Patton (1989) suggests that the main goal of program modeling is to respond to the perspectives of the key stakeholders while effectively guiding the staff through the program’s implementation, evaluation, and adjustment of goals, which are operationalized by measurable objectives indicated by the program’s activities. Patton asserts that by working with stakeholders to capture their ideas regarding the different phases of a program (inputs/activities, outputs, outcomes), the resulting program model will be a clear representation of their views and expectations about the program’s process and outcomes. A well-designed program model should reflect an inclusive and interactive process centered on stakeholders. Among the stakeholders, the program designer often finds a deep understanding of the program’s context, knowledge, and organizational memory to guide the development of the program model (Weiss, 2000; House, 2003; Huebner, 2000).
The current literature is replete with evaluators and researchers calling for more participation of stakeholders and staff in the program model development process. Since organizations often involve staff and key stakeholders to set and adjust project goals, a similar approach can be employed by evaluators while pursuing the identification of strengths and limitations linked to a program’s results. The involvement of several stakeholders may delay the completion of the project design, but the finished product will be a well-rounded program model, and the process provides an opportunity for greater program ownership among the stakeholders. This process is most suitable for organizations that operate within an inclusive paradigm, are open to sharing multiple viewpoints, and are willing to employ new and potentially more effective techniques to maximize program results (McClintock, 2004).
Typical Key Stakeholders of a Program
Key stakeholders of programs often fall into three major groups:
• Persons involved in program operations: board members, management, program staff, partners, funding agencies, coalition members, and volunteers.
• Persons served or affected by the program: clients/patients/customers, advocacy groups, community members, and elected officials.
• Persons who are the intended users of the program’s evaluation results, such as those in decision-making positions regarding the program outcomes: board members, management, partners, funding agencies, and coalition members.
In order to reap the full benefits of stakeholder participation, organizations need to have a proactive stakeholder orientation and to maintain a flexible list of key stakeholders committed to the mission and vision of the organization. That being said, every organization has a mission and a vision statement. The mission and vision statements should be the main drivers of all logic model components within the organization. The mission is the reason for which the organization was created; it is spelled out in the articles of incorporation and should be at the core of all organizational logic endeavors. Alternatively, the vision statement is a declaration of how the company intends to operationalize its mission. Based on the mission and vision, some stakeholders are, in general, more flexible than others in adjusting to a changing environment. Flexibility allows stakeholders to be ranked differently for the sake of conflict resolution.
During the logic model development process, a program designer is often presented with the challenge of recognizing and resolving conflicts between the interests of stakeholders. If the problem is not addressed immediately and effectively, it can result in a massive loss of time and resources. The most effective conflict resolution approaches documented in the current organizational literature rely on several instruments and strategies: the company’s mission statement, leadership displayed by top management, the development of consensus through dialogue leading to an alignment of interests, transparency, and the cultivation of a desire to find a win-win solution. Therefore, the identification of key stakeholders needs to be spelled out to give broad guidance to program designers and to create uniformity, predictability, and balance in managerial actions (Donaldson, 2007; Heckscher et al., 2003; Kassinis & Vafeas, 2002; Luo, 2007; Moneva et al., 2007). Figure 4 has salient features essentially
representing the dynamic aspect of internal and external stakeholders, which is imperative for the formulation of some core strategies to drive the maximization of program outcomes.
Figure 4: Who are the Stakeholders
The categories of stakeholders described above are not mutually exclusive, as the primary beneficiaries of program outcomes are often members of the other two groups; i.e., the members of the board of directors could interchangeably be clients/patients/customers. It is important to note that the environment in which organizations exist is ever changing, and with it, so is the constantly changing list of current and potential stakeholders. Such organizational pace demands that program designers and evaluators have flexible strategic principles to identify new stakeholders. At any rate, without basic identification criteria, program designers would have a very hard time enlisting assistance for the development and evaluation of projects, as every manager would have a different list of stakeholders (Friedman & Miles, 2006). In order to facilitate the identification and recruitment of stakeholders potentially interested in the development, implementation, and evaluation of your program, Worksheet 1A, Worksheet 1B, and a Checklist for Recruiting Stakeholders are provided below.
Worksheet 1A
Table 4: Identifying Key Stakeholders
Stakeholder | Category 1: Who is affected by the program? | Category 2: Who is involved in the program’s operations? | Category 3: Who will use the evaluation results?
Worksheet 1B
Table 5: What Matters to Stakeholders
Stakeholder (1-7) | What activities and/or outcomes of the program matter most to this stakeholder?
Checklist for Recruiting Stakeholders
Identify stakeholders, using the three broad categories discussed: those affected, those involved in operations, and those who will use the evaluation results.
Review the initial list of stakeholders to identify key stakeholders needed to improve credibility, implementation, advocacy, or funding/authorization decisions.
Engage individual stakeholders and/or representatives of stakeholder organizations.
Create a plan for stakeholder involvement and identify areas for stakeholder input.
Target selected stakeholders for regular participation in key steps, including writing the program description, suggesting evaluation questions, choosing evaluation questions, and disseminating evaluation results.
In sum, higher levels of stakeholder engagement will lead to higher levels of project outcomes. It is noteworthy that proactive stakeholder organizations are more aware of their stakeholder environment, as they strive to discover new stakeholders. Additionally, such organizations value these stakeholders and their opinions, and for that purpose they treat them with transparency, engaging them in discussions on matters of common interest. In short, they develop solid partnerships with their stakeholders. This emphasis on engaging stakeholders is heavily influenced by strategic principles forged by the mission statement of the firm. It would seem evident that mission statements play a central role in giving firms a priority list of stakeholders, which is to be consulted when stakeholder interests collide (Kochan & Rubinstein, 2000).
The Firm’s Mission Statement and Stakeholders’ Interests
The mission and vision statements of the company are powerful, intrinsically interwoven tools, but they are often confused with each other. The mission of the company centers on the reason for which the organization was formed. The most important function of the mission statement is internal, as it channels the present toward the future, and its target audience is the key stakeholders. The mission should be operationalized by the company’s vision. The vision communicates both the purpose and the values of the organization as it forges a path toward the future; it should answer the question, “Why are we here?” The vision statement should speak to what the organization represents, not just what the organization does.
Subsequently, the vision should be operationalized by the different programs within the organization, and each program should be described by its goals, objectives, and activities. Thus, the mission statement plays a central role in giving firms a priority list of stakeholders, which makes it imperative to consult the company’s mission statement when stakeholder interests collide. Mission and vision statements are instrumental in creating a stakeholder culture within companies. Although mission statements are greatly influenced by the owners and top management of the firm, all organizations, depending on their size and the need for sophisticated structures, have ways to incorporate stakeholder views in creating or updating their mission statements. Subsequently, around these mission statements a comprehensive edifice is created to maintain a specific organizational culture, which will drive the implementation and evaluation of the program logic.
CHAPTER IV
Evaluation
One of the greatest advantages of using a logic model for program development is the ability to create interconnected components (inputs, activities, outputs and outcomes), which will later guide the program’s evaluation process. TenBrink (1974) defines evaluation as “the process of obtaining information and using it to form judgments which in turn are to be used in decision making ” (p. 8). Similarly, Alkin (1973) defines evaluation as “the process of ascertaining the decision areas of concern, selecting appropriate information, and collecting and analyzing information in order to report summary data useful to decision makers in selecting among alternatives” (p. 150). Cronbach (1975) broadly defines evaluation as “the collection and use of information to make decisions about a program” (p. 244).
Although some definitions focus on the assessment of goals and objectives, or on the process of scientific inquiry, the collection of data and their analysis and interpretation to decide a course of action is a common thread in most definitions of evaluation. In summary, evaluation is the systematic process of asking questions while using the scientific method to generate answers by gauging the impact of a program’s inputs on its outcomes. The basic building blocks of a project evaluation are the program’s activities, which are used to operationalize the indicators and made readily available to be entered into a database. Indicators are items or program activities measured to depict the status of the condition of interest. Thus, the collection of indicators across all program areas represents the first step in developing a system of project measures that, periodically, will assist in evaluating the impact of the project on the target population. The findings of a project evaluation should be synthesized and presented with the purpose of showing the factors influencing outcomes (Hatry, 1999).
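As a minimal illustration of gauging impact from collected indicator data, the following Python sketch compares hypothetical baseline and follow-up scores for the same participants using a paired t-test; the scores and the choice of test are illustrative assumptions, not a prescribed method:

# Comparing baseline and follow-up indicator scores (hypothetical data).
from scipy import stats

baseline  = [52, 60, 48, 55, 63, 50, 58, 47, 61, 54]  # pre-program scores
follow_up = [58, 66, 50, 62, 70, 55, 64, 49, 68, 60]  # post-program scores

# Paired t-test: did the same participants change after the program?
t_stat, p_value = stats.ttest_rel(follow_up, baseline)
mean_change = sum(f - b for f, b in zip(follow_up, baseline)) / len(baseline)

print(f"Mean change: {mean_change:+.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")

A synthesis of such results across all indicators, interpreted alongside qualitative data, is what allows the evaluator to point to the factors influencing outcomes.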
The pressure for accountability in program implementation has increased the need for scientific evaluation. Such pressure demands the utilization of a methodological research approach to conduct evaluation while distinguishing the difference in purpose between research and evaluation (Sharfman & Fernando, 2008; Spaulding, 2008; Weiss, 2000). Although some similarities exist between scientific research and evaluation regarding design, implementation, and data analysis, the main difference may be found in their differing abilities to generalize and their parallel relationships to decision-making. A program evaluation is purposefully designed and conducted to particularize the findings toward the maximization of program outcomes. Alternatively, the ultimate focus of scientific research is to generalize its findings to inform the development of programs, policies, and practice.
The contrasts and similarities between research and evaluation can be further elucidated by the following statement made by Pace and Friedlander (1978):
“Good evaluation, like good science, utilizes measurement and observations that are accurate, reliable, and valid, gathers evidence systematically, and analyzes results objectively. When it comes to purposes, however, science seeks to discover regularities, generalizations, and laws, to test hypotheses, or to account for and explain the reasons or causes for what happens. Evaluation is more clearly pragmatic and, most important, explicitly seeks to produce judgments of value, worth, or merit, whereas science shuns such judgments” (p. 9).
Benefits of program evaluation
An increasing culture of accountability is evidenced by the fact that many funders now require program logic models, work plan matrices, and project evaluations to be submitted for accountability and compliance purposes. While funders have demanded accountability from project managers regarding how they used allocated capital and whether they met proposed goals, program managers and evaluators agree that the pressure to maximize outcomes has reached new heights and shows no sign of relenting (O’Sullivan, 2004; Rossi et al., 1999).
Although program modeling is not an evaluation model or an evaluation method, it facilitates the evaluation process, as it helps to ensure that the goals and objectives of the program under examination are interconnected and measurable. There are two simple reasons for conducting an evaluation:
• To gain information for improving programs as they are being implemented, and
• To determine projects’ effectiveness after they have had time to produce results.
It is imperative, however, for the evaluator to provide a high level of qualitative and quantitative data to enable program directors to develop and/or adjust goals and objectives and to compare actual program results with established goals.
Brief Historical Overview of Evaluation
Several scholars writing in the late 1990s and 2000s about program development and evaluation have made great contributions to what has come to be known as the “accountability movement” in the United States. Popham (1993) made one of the most significant contributions in addressing the issue of program accountability in the current accountability culture, as evidenced by the following words:
“Once upon a time there was a word. And the word was evaluation. And the word was good. … Teachers used the word in a particular way. Later on, other people used the word in a different way. After a while, nobody knew for sure what the word meant. But they all knew it was a good word. Evaluation was a thing to be cherished. But what kind of a good thing was it? More important, what kind of a good thing is it?” (p.1).
Scientific evaluation was introduced by Joseph Rice when he administered the same spelling test in several American schools in search of factors responsible for curriculum improvement. His interest was in evaluating the curriculum of that time, which included spelling drills. The findings of Rice’s evaluation led to the first data-driven curriculum revision. Subsequently, a “measurement movement” (Pace & Friedlander, 1978) or “testing movement” (Cronbach, 1975) in education emerged in the 1920s and 1930s, and evaluation “was defined as roughly synonymous with educational measurement” (Pace & Friedlander, 1978, p. 2). During this period scholars began to define the term evaluation, as its popular usage denoted an explicit concern for purposes and values.
From the 1930s to the 1940s, the evaluation trend focused on the practice of test giving for professional judgment by comparing the results of one individual with another or with a standard. Ralph Tyler’s work in the 1940s made a significant contribution to re-routing the practice of evaluation towards something more than just testing students. Tyler noted that assessing how a program met or exceeded its target objectives was a better way of evaluating program outcomes than measuring student test scores alone. Lee J. Cronbach later built on Ralph Tyler’s work by explaining that greater benefits to program results could be achieved by assessing outcome measures rather than by comparing one program with another (Cronbach, 1975; Pace & Friedlander, 1978). After the 1960s, Cronbach and Scriven added another significant contribution to the evaluation process by calling attention to the importance of ongoing assessment of outcome measures. This type of ongoing assessment became a formalized evaluation method when Scriven gave the process the name of formative evaluation. It is noteworthy that, prior to the 1960s, evaluations of programs were conducted as a summative process.
During the 1970s and 1980s evaluation evolved again to meet the demands of an emerging accountability era. As a result, evaluation became the “process of identifying and collecting information to help decision makers choose among available alternatives” (Pace & Friedlander, 1978, p. 3). Since 1980 many attempts have been made to standardize evaluation, a trend that arguably still exists today.
Evaluation types
Several basic types of evaluation exist, although the ones used the most are:
• Planning evaluation
• Formative evaluation
• Summative evaluation
• Predictive evaluation
Planning Evaluation
A Planning Evaluation is parameters driven, as it focuses on establishing a project’s goals, objectives, indicators, strategies, and timelines. Stevens et al. (1997) state that “The product of the Planning Evaluation is a rich, context-laden description of a project, including its major goals and objectives, activities, participants and other major stakeholders, resources, timelines, locale, and intended accomplishments” (p. 4).
Formative Evaluation
Michael Scriven coined the term formative evaluation and referred to it as “outcome evaluation of an intermediate stage in the development of the teaching instrument” (Scriven, 1973, p. 51). Over the years, the term has evolved and expanded; it is currently defined as “a judgment of the strengths and weaknesses of instruction in its developing stages, for purposes of revising the instruction to improve its effectiveness and appeal” (Tessmer, 1993, p. 11). Formative evaluation must be included as a significant part of the program modeling process and should be used to inform corrections in the program implementation process towards maximization of outcomes. Formative evaluations examine the development of the project and may lead to changes in the way the project is structured and carried out. Questions typically asked include:
• To what extent are the activities being conducted according to the plan?
• To what extent are the goals being met?
• Which are the main factors responsible for meeting (or not meeting) the goals?
• What barriers were encountered?
• How and to what extent were they overcome?
Additionally, the formative evaluation process gives the evaluator the opportunity to assess the data collection instruments and make corrections if necessary. Formative evaluation is essential for the success of the program, as it can indicate where, when, and how to make corrections to improve program results. Thus, it is important to consider the different phases of formative evaluation, which include: pre-production formative evaluation, production formative evaluation (also known as progress evaluation), and implementation formative evaluation.
The pre-production formative evaluation focuses on gathering information to uncover the strengths and weaknesses of the proposed venture, while supporting a decision-making process regarding possible outcomes. Production or progress evaluations are conducted to assess whether goals and objectives are being met; hence the program team will have the opportunity to make data-driven adjustments to the program activities and/or goals and objectives. Implementation evaluations are conducted to assess if the program is being implemented according to how it was designed (Dick et al., 2001).
Summative evaluations
Summative evaluations (also called outcome or impact evaluations) look at what a project has actually accomplished in terms of its stated goals. Summative evaluation questions include:
• To what extent did the project meet its overall goals?
• What components were the most effective?
• What significant unintended impacts did the project have?
• Is the project replicable and transportable?
For each of these questions, both quantitative data (data expressed in numbers) and qualitative data (data expressed in narratives) can be used to increase the accuracy of the findings. The main purposes of summative evaluations are:
• To determine overall project success.
• To determine whether or not specific goals and objectives were achieved.
• To determine if and how participants benefited from the program.
• To determine which components were most (or least) effective.
• To determine any unanticipated outcomes.
• To determine cost vs. benefits.
• To communicate evaluation findings to stakeholders such as teachers, participants, program designers and developers, the funding agency, and superiors (Stevens et al., 1997).
Summative evaluation can be subdivided into two distinct types: research-oriented and management-oriented. Evaluations aimed at improving and validating programs are known as research-oriented. On the other hand, when an evaluation focuses on determining whether or not the program accomplished what it was designed to accomplish, and at what cost, it is known as management-oriented evaluation.
Predictive Evaluation
Predictive Evaluation (PE) is a new approach to evaluation that includes the participation of staff to generate data to assess training value to the company, measures main indicators to ensure the program is on the right course, and reports in a business format that executives easily understand. Program managers can benefit from predictive evaluation in the areas of “budget estimation, time estimation, guesses about materials that will be used and the skills that will be required” (Braden, 1992, p. 16). The key evaluation points of the PE method target the assessment of the following factors: Intention, Adoption, and Output. The training process generates different levels of motivation for participants to use what they learned; this type of motivation is termed Intention. According to the level of intention participants develop, they will subsequently adopt (apply) the new skills they acquired in the training process as part of their work behavior. Adopted behaviors (acquired during training and practiced over time) are translated into outputs, as shown in Figure 5.
Figure 5: Predictive Evaluation Points
Thus, program outcomes are directly and indirectly associated with all three factors:
1. Intention
2. Adoption
3. Outputs.
Ultimately, the Predictive Evaluation (PE) approach provides stakeholders with the following results:
• Knowledge: New knowledge or a refresher of current knowledge.
• Skills: New or improved techniques to reach outcomes.
• Beliefs: The notion that the staff and board can benefit from using the evaluation results.
• Behaviors: On-the-job practices modified to maximize the company’s business.
This chapter focuses on some of the evaluation models most used in the current evidence-based context. Table 6, however, provides a depiction of major evaluation approaches in a taxonomy created by House (1980, p. 23).
Table 6: A Taxonomy of Major Evaluation Types
Model | Major Audiences or Reference Groups | Assumes Consensus On
System analysis | Economists, managers | Goals; known causes and effects
Behavioral objectives | Managers, psychologists | Pre-specified objectives; quantified variables
Decision making | Decision makers, administrators | General goals, criteria
Goal free | Consumers | Consequences, criteria
Art criticism | Connoisseurs, consumers | Critics, standards
Professional review | Professionals, public | Criteria, panel, procedures
Quasi-legal | Jury | Procedures and judges
Case study | Client, practitioners | Negotiations, activities
Although this chapter exposes the reader to several evaluation designs, it would be incomplete without a discussion of instructional evaluation. Most instructional design models contain both formative and summative evaluation components. Formative evaluation is often placed within every facet of most instructional development models, as it “distinguishes the instructional design process from a philosophical or theoretical approach. Rather than speculating about the instructional effectiveness of your materials, you will be testing them with learners” (Dick et al., 2001, p. 302). The following two instructional logic models were selected for their distinction in the field of instructional design. Note that evaluation is a pertinent component in both models, shown in Figure 6 and Figure 7.
Figure 6: The CAI Design Model (Hannafin and Peck Design Model)
The CAI design model seen in Figure 6 is a computer-assisted instruction model, which coalesces its steps into phases. Normally, the model flows from left to right; however, “Evaluation and Revision” are the only factors that can move their respective variables from one phase to the next. This model has no pre-established stopping point; in addition, it highlights the importance of using formative evaluation to maximize outcomes (Hannafin & Peck, 1988).
An extensive review of the current program development literature shows that the most common model used for instructional design is the Dick and Carey Model (Dick et al., 2001). This model uses system theory as it integrates the ideas of synthesis, reductionism, and holism, viewing instruction as a whole while considering that each component is necessary for successful learning. The teacher, student, instructional materials, and the learning environment are all essential components of this systemic instructional process, as shown in Figure 7.
Figure 7: The Dick and Carey System Approach Model
The Dick and Carey Model incorporates formative evaluation as a fundamental step within the design process. Moreover, summative evaluation is included in the model as the last step, to be conducted after the instructional activities have had enough time to produce results.
Problems Regarding Evaluation
Evaluation has two key problems: lack of public support and low levels of validity and reliability. The concepts of validity and reliability are probably the most important components of any evaluation plan. Validity is defined by the extent to which we measure what we intend to measure (and what we think we are measuring). Alternatively, reliability is defined as the extent to which the same measurement process yields similar results every time it is applied. In other words, validity is concerned with the accuracy of the measurement, and reliability is concerned with consistency: does the same measurement instrument yield consistent results when repeated over time? Consequently, there cannot be validity without reliability.
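To make the reliability idea concrete, the following is a minimal sketch in Python (the scores and instrument are hypothetical; the analyses described in this book use SPSS and AMOS rather than Python) estimating test-retest reliability as the correlation between two administrations of the same instrument:

```python
import numpy as np

# Hypothetical scores from the same instrument administered twice
# to the same ten participants (a test-retest design).
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17, 15, 14])
time2 = np.array([13, 14, 11, 17, 15, 16, 12, 18, 14, 15])

# Reliability as consistency: the correlation between repeated
# administrations. Values near 1.0 indicate a reliable instrument.
reliability = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {reliability:.2f}")
```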
An extensive review of the relevant literature showed the following primary reasons for invalid evaluations:
• Fears of negative evaluation findings outweigh the benefits of evaluation results - The false understanding that finding out “what does not work” can negatively impact stakeholders, while forsaking the long-term benefits. The elimination of program weaknesses can lead to greater outcomes. In other words, the identification of factors responsible for positive outcomes allows program managers to focus resources on the essential components of the program model that benefit stakeholders. Alternatively, the identification of factors associated with program limitations allows program managers to make corrections to improve their service delivery models. Not knowing what is working, particularly during the course of program implementation, leads to the misuse of valuable time and resources.
• Lack of a clear understanding regarding the benefits of evaluation results - Such a lack of understanding may lead to an evaluation budget inadequate to afford the acquisition of appropriate resources (such as evaluators skilled and experienced in the area in question).
• Lack of clear, rigorous expectations - The lack of clear, rigorous expectations may lead to an inadequate understanding of what is important to measure. Subsequently, evaluation findings do not match key performance measures (Abernathy, 1999).
• Evaluation misinterpreted as an adversarial process - Evaluation is often viewed as an adversarial component of a program. The misconception exists that evaluation is something done to a project following program implementation, conducted by an outside group who consequently places judgment upon the effectiveness of the program (Cronbach, 1975; Kinnaman, 1992; Scriven, 1974; Stevens et al., 1997; Valdez, 2000b). Although clinical wisdom often shows that this type of judgment and consequence happens in practice, evaluations should strive to demonstrate to stakeholders that a program is worthwhile. Evaluation findings can serve as an effective marketing tool for recruiting potential key stakeholders. Furthermore, many funders clearly stipulate in their funding guidelines that they will not fund or re-fund any project until an evaluation is conducted and outcomes have been demonstrated. Although many misconceptions regarding evaluation have been dispelled, it still has a tarnished reputation among program managers. Much progress has been made in increasing public support for, and acceptance of, evaluation as a tool to improve programs. Nevertheless, the word evaluation still evokes a negative response in many professional circles, as it conveys the risk of exposing program weaknesses while conjuring up fears that someone’s position or program may be in jeopardy.
The findings of a study conducted by Hanushek and Rivkin (2007) indicated that negative biases are still associated with evaluations, even among teachers. The researchers asked pre-service and in-service teachers, “What comes to your mind when you hear the word evaluation?” Table 7 displays their responses, with the most frequent words listed first.
Table 7: Pre-service and in-service teachers’ responses to hearing the word evaluation
Pre-Service Teachers: Tests, Grades, Achievement, Unfair, Judgment
In-Service Teachers: Tests, Measurement, Grades, Accountability, Invasive
It is evident that professionals, including teachers, need more exposure to evaluation literacy and training. Even with growing support for evaluation, it has yet to gain full acceptance among professionals. The benefits of good evaluation outweigh any possible shortcomings. A well-conducted evaluation is fundamental to the continued development of a profession, as it can serve several important purposes in the development and refinement of goals, objectives, and overall program improvement. Over the next few years, organizations will continue to face powerful pressures to reduce expenses while increasing revenue by making groundbreaking, riskier investments in response to a rapidly shifting market. Subsequently, project evaluations, which estimate the effects of activities on desired outcomes using statistical analysis to find the best model fit, will become a vital part of programs’ logic models. Therefore, sophisticated statistical software (SPSS and AMOS), which drives statistical designs such as structural equation modeling to determine best model fit, will be increasingly in demand.
Using Structural Equation Modeling for Program Evaluation
Using structural equation modeling (SEM) in program evaluation provides evaluation practitioners the opportunity to empirically test logical relationships among tiers of outcomes (output, outcome and impact) to understand the processes and mechanisms through which programs achieve their intended goals. Structural Equation Modeling (SEM) is a powerful analytic tool that uses hypothesized, diagrammed models to examine how sets of variables define constructs and how these constructs are related to each other through the use of two main sets of equations:
• Measurement equations and
• Structural equations.
The measurement equations describe the relationship between the measured variables and the theoretical constructs presumed to underlie them. This set of equations allows the assessment of the accuracy of the proposed measurements. The structural equations, on the other hand, express the hypothesized relationships between the theoretical constructs, which allow the assessment of the proposed theory. SEM also accommodates the modeling of interactions, nonlinearities, correlated independents, measurement error, correlated error terms, and multiple latent independents each measured by multiple indicators.
In order to apply SEM in estimating relationships among variables, the AMOS software program may be used to analyze and test the validity of the model while identifying main predictors. Unlike conventional analysis, SEM allows the inclusion of latent variables in the analyses (Kline, 1998). Moreover, SEM is not limited to relationships among observed variables and constructs; it allows the study to measure any combination of relationships while examining a series of dependent relationships simultaneously (Kline, 1998).
The difference between SEM and other conventional methods of statistical analysis is accentuated by significantly distinct characteristics. For example, the basic statistic in SEM is the covariance. While conventional regression analysis attempts to minimize differences between observed and expected individual cases, SEM aims to minimize differences between observed and expected covariance matrices. In other words, SEM, based on the covariance statistic, attempts “to understand patterns of correlations among a set of variables and to explain as much of their variances” (Kline, 1998, pp. 10-11). It is worth noting that a covariance conveys more information than a correlation (Kaplan, 1995; Pedhazur & Pedhazur, 1991).
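The covariance point can be illustrated with a short Python sketch using simulated data (the indicator values are hypothetical, for illustration only): the correlation matrix is simply the covariance matrix with the variance, or scale, information standardized away, which is why a covariance conveys more information than a correlation.

```python
import numpy as np

# Hypothetical scores on three program indicators for eight clients.
rng = np.random.default_rng(42)
data = rng.normal(loc=[10, 20, 30], scale=[2, 4, 6], size=(8, 3))

# SEM's basic statistic: the observed covariance matrix S.
S = np.cov(data, rowvar=False)

# A correlation matrix is the covariance matrix with the scale
# (variance) information standardized away.
R = np.corrcoef(data, rowvar=False)

print("Covariance matrix S:\n", S.round(2))
print("Correlation matrix R:\n", R.round(2))
```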
One of the most attractive features of structural equation modeling for assessing program logic models is that it takes into account potential errors of measurement in all variables and provides empirical suggestions for improved model fit. For example, when validating the measurement equations, confirmatory factor analysis should be used. Thus, the observed variables should be diagrammed in AMOS and linked to an SPSS data file to test whether the indicator variables are acceptable in defining the latent variable. Therefore, separate confirmatory factor models should be run for each set of observed variables hypothesized to indicate their respective latent variable.
If the variances of the indicator variables are similar, they should be set equal in the CFA model (er1 = er2 = er3), which will help in model identification. The percent variance explained should be calculated as the sum of the communalities divided by the number of variables. Each factor model should have acceptable model fit. For example, the Goodness-of-Fit Index (GFI) should be .90 or above, indicating that 90% (or more) of the variance-covariance among the observed variables in the sample matrix is reproduced by the hypothesized confirmatory factor model. Confirmatory factor model fit statistics should be assessed at the p < .05 level of significance, and the additional fit statistic of chi-square divided by degrees of freedom should indicate acceptable model fit.
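As a worked illustration of the percent-variance calculation described above, the following Python sketch uses hypothetical standardized loadings; under a one-factor model with standardized variables, each indicator’s communality is its squared standardized loading.

```python
import numpy as np

# Hypothetical standardized factor loadings from a confirmatory
# factor model with four indicator variables and one latent variable.
loadings = np.array([0.85, 0.78, 0.72, 0.80])

# Communality of each indicator: the squared standardized loading,
# i.e., the share of that indicator's variance the factor explains.
communalities = loadings ** 2

# Percent variance explained: sum of communalities divided by the
# number of observed variables, as described in the text.
pct_variance = communalities.sum() / len(loadings)
print(f"Communalities: {communalities.round(2)}")
print(f"Percent variance explained: {pct_variance:.1%}")
```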
If the factor model does not have acceptable fit statistics, correlated error covariances should be included in an effort to improve model fit. That is, including correlated measurement error in the model tests the possibility that indicator variables correlate not just because they are caused by a common factor, but also due to common or correlated unmeasured variables. This possibility would be ruled out if the fit of the model specifying uncorrelated error was as good as the model with correlated error specified. In this way, testing the confirmatory factor model may well be a desirable validation stage preliminary to the main use of SEM to model the causal relations among latent variables (Schumacker & de Carvalho, 2012).
Structural Equation Evaluation Questions
A structural equation model should be hypothesized to explain the relationships among the latent variables defined by the confirmatory factor models or measurement models. The identification of the hypothesized model should follow the subsequent steps of SEM: (1) determine the input matrix and estimation method, (2) assess the identification of the model, (3) evaluate the model fit, and (4) re-specify the model and evaluate the fit of the revised model. In step one, the Maximum Likelihood (ML) method should be utilized for the proposed model. Maximum likelihood is the procedure of finding the value of one or more parameters for a given statistic that makes the known likelihood (the hypothetical probability that an event that has already occurred would yield a specific outcome distribution) the maximum value of a set of elements. Considering the current set of observations, the method of maximum likelihood finds the parameters of the model that are most consistent with those observations.
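For readers who want to see the estimation criterion itself, the standard ML fit function for covariance structures can be written out directly. The sketch below uses hypothetical matrices; F_ML = ln|Σ| + tr(SΣ⁻¹) − ln|S| − p equals zero when the model-implied matrix Σ reproduces the observed matrix S exactly.

```python
import numpy as np

def ml_fit_function(S, Sigma):
    """Maximum-likelihood discrepancy between the observed covariance
    matrix S and the model-implied covariance matrix Sigma:
        F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p
    F_ML is zero when the model reproduces S exactly."""
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma))
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.log(np.linalg.det(S)) - p)

# Hypothetical observed and model-implied covariance matrices.
S = np.array([[4.0, 1.8], [1.8, 9.0]])
Sigma = np.array([[4.1, 1.6], [1.6, 8.8]])
print(f"F_ML = {ml_fit_function(S, Sigma):.4f}")  # near 0 = close fit
```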
The parameters of the model are: (1) variances and covariances of latent variables, (2) direct effects (path coefficients) on the dependent variable, and (3) variances of the disturbances (residual errors). In step two, an assessment of the ability of the proposed model to generate unique solutions should be conducted. The hypothesized model should be tested by using the two most popular ways of evaluating model fit: the X² goodness-of-fit statistic and fit indices. The statistics literature shows no consistent standards for what is considered an acceptable model; a lower chi-square to df ratio indicates a better model fit (Schumacker & Lomax, 2004).
Due to chi-square’s sensitivity to sample size, it is not easy to gain a good sense of fit solely from the X² value. Thus, other indexes of model fit should be examined. Indexes of model fit may make adjustments for sample size and model complexity. Hence, other fit indices may be utilized to evaluate model fit: GFI (Goodness-of-Fit Index), AGFI (Adjusted Goodness-of-Fit Index), CFI (Comparative Fit Index), SRMR (Standardized Root Mean Squared Residual), RMR (Root Mean Square Residual), and RMSEA (Root Mean Square Error of Approximation).
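The chi-square/df ratio mentioned above and the RMSEA can both be computed directly from the fit function value. The Python sketch below uses hypothetical numbers; the model chi-square is (N − 1) times the ML fit function value, and RMSEA adjusts the chi-square for model complexity (df) and sample size.

```python
import numpy as np

def fit_indices(F_ml, n, df):
    """Model chi-square, chi-square/df ratio, and RMSEA from the
    ML fit function value: chi2 = (N - 1) * F_ML, and RMSEA
    penalizes chi-square for degrees of freedom and sample size."""
    chi2 = (n - 1) * F_ml
    rmsea = np.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return chi2, chi2 / df, rmsea

# Hypothetical values: fit function 0.08, sample of 250, 24 df.
chi2, chi2_df, rmsea = fit_indices(0.08, 250, 24)
print(f"chi2 = {chi2:.2f}, chi2/df = {chi2_df:.2f}, RMSEA = {rmsea:.3f}")
```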
GFI represents the overall degree of fit based on the squared residuals. Values of .90 or above for the GFI indicate a good fit, and values below .90 suggest that the model can be improved. The AGFI, the Adjusted Goodness-of-Fit Index, takes into account the degrees of freedom available for testing the model; values above .90 are acceptable, indicating that the model fits the data well. The SRMR, the Standardized Root Mean Squared Residual, is a standardized summary of the average covariance residuals and should be less than .10 (Schumacker & de Carvalho, 2012).
An effective model identification process allows calculation of the estimates for all the parameters independently and for the model as a whole. In step three, the overall model fit (the goodness of fit between the hypothesized model and the sample data) should be assessed with several goodness-of-fit indexes, although the chi-square statistic is one of the most commonly used techniques to examine overall model fit. A non-significant goodness-of-fit X² statistic is favored because it indicates that the implied covariance matrix is nearly identical to the observed data. If the estimated covariance matrix does not provide a reasonable and parsimonious explanation of the data, then the model may be re-specified by changing model parameters.
Lastly, an adjustment of the hypothesized model is conducted by examining the goodness-of-fit indices to improve the model, based on theoretical justification, as the model is re-specified. The estimated covariance matrix may or may not provide a reasonable and parsimonious explanation of the data, which may lead to the model being accepted or rejected. Thus, adjustment and improvement of the model allows identification of data-related problems and potential sources of poor fit. Furthermore, the adjustment process can provide new insights regarding the relationships between observed and latent variables.
Once the final model is specified through an over-identification process, the next step is to test the apparent validity of the parameter estimates. Thus, the hypothesized model should be tested statistically to determine the extent to which the proposed model is consistent with the sample data, which includes the fit of the model as a whole and the fit of individual parameters. The next step is to assess the fit between the hypothesized model and the sample data by examining the parameter estimates, the standard errors and significance of the parameter estimates, the squared multiple correlation coefficients for the equations, the fit statistics, the standardized residuals, and the modification indices. Each time the model is specified in AMOS, the estimation algorithm iterates through many candidate solutions until it converges on the best model fit. In other words, it not only determines whether the observed variables are good indicators of the latent variables, but also shows the limitations and strengths of the program by identifying which variables have the highest factor loadings (validity coefficients) and corresponding communality estimates. In addition, it makes empirical suggestions for improving the program model by specifying the exact structure of the relationships among the latent variables (Byrne, 2001).
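The whole estimation cycle can be sketched end to end. The following Python example fits a one-factor confirmatory model by numerically minimizing the ML fit function; the covariance matrix and sample size are hypothetical, and since the book’s workflow uses AMOS, this is only an illustration of the underlying computation, not the author’s procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sample covariance matrix for four indicators of a
# single latent variable, with a hypothetical sample size.
S = np.array([[1.00, 0.45, 0.40, 0.38],
              [0.45, 1.00, 0.42, 0.36],
              [0.40, 0.42, 1.00, 0.34],
              [0.38, 0.36, 0.34, 1.00]])
n = 200

def implied_cov(theta):
    # One-factor model with unit factor variance:
    # Sigma(theta) = lambda lambda' + diag(error variances)
    lam, err = theta[:4], theta[4:]
    return np.outer(lam, lam) + np.diag(err)

def f_ml(theta):
    # ML discrepancy between S and Sigma(theta); minimized at best fit.
    Sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return np.inf  # reject non-positive-definite (inadmissible) solutions
    return (logdet + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - S.shape[0])

result = minimize(f_ml, x0=np.full(8, 0.5), method="Nelder-Mead",
                  options={"maxiter": 10000, "fatol": 1e-10})
chi2 = (n - 1) * result.fun   # model chi-square
df = 10 - 8                   # unique covariance moments minus free parameters
print("Estimated loadings:", result.x[:4].round(2))
print(f"chi2 = {chi2:.3f} on df = {df}")
```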
Establishment of Evaluation Plan
The process of developing an evaluation plan faces many challenges, but the most crucial is the acquisition of key information to answer the evaluation questions. From the outset it is important to have agreement among the stakeholders regarding how goals and objectives are operationalized and, subsequently, what program success looks like. The following table provides an opportunity to generate and organize a set of indicators. The data collection is often driven by the identification of the program indicators, which will guide the construction of relevant items in the measurement instrument and the reporting strategies.
Table 8: Evaluation Plan
Focus Area | Indicators
Influential Factors | Measures of influential factors (may require a general population survey)
Resources | Logs or reports of financial/staffing status
Activities | Descriptions of planned activities; logs or reports of actual activities
Outputs | Logs or reports of actual activities; actual products delivered
Outcomes & Impacts | Participant attitudes, knowledge, skills, intentions, and/or behaviors
This table was adapted from A Hands-on Guide to Planning and Evaluation (1993) available from the National AIDS Clearinghouse, Canada.
The use of a logic model framework in program design affords the necessary elements to assemble a very doable evaluation plan, as depicted in Figure 8.
Figure 8: The Evaluation System Approach Model
Another key role of the logic model is to allow the development of clear and precise questions, which the evaluation should answer (Cooksy et al., 2001). According to the Kellogg Foundation (2004), a well-structured evaluation plan should contain the following elements:
Purpose for the Evaluation: Identify the need and purpose for the evaluation.
Evaluation Questions: Succinctly specify the questions the evaluation will answer. The evaluation questions should be aligned with the purpose.
Assessment Methods: Develop an overall design strategy to answer the evaluation questions, including how to collect and analyze data.
Evaluation Team: Identify the size of the evaluation team and the specific skills required of each of the evaluators.
Assessment Procedures: Specify the various procedures, activities, duration, and schedule to be undertaken.
Presentation and Use: Develop a succinct narrative showing how the evaluation will be presented and how evaluation findings will be used (Boulmetis & Dutwin, 2005).
Once your evaluation plan is completed, it is important to assess its feasibility and quality. Thus, Table 9 provides a guidepost to assess the worth of your evaluation plan:
Table 9: Checklist for Feasibility and Quality of Evaluation Plan
Establishing Indicators: Quality Criteria (check “Yes” for each criterion met)
1. The focus areas reflect the questions asked by a variety of stakeholders.
2. Indicators are SMART: Specific, Measurable, Action-oriented, Realistic, and Timed.
3. The cost of collecting data on the indicators is within budget.
4. The source of data is known.
5. It is clear what data collection, management, and analysis are required.
6. Strategies and required technical assistance have been identified.
7. The technical assistance needed is available.
CHAPTER V
The process of developing a logic model is fairly easy. Nevertheless, it can be challenging at times, particularly for those without much experience in research methods, program design, and evaluation. There is a systematic process involved in linking the components of the program. Thus, the following program framework is intended to lead a committed group of stakeholders through the process of constructing a logic model based program, while facilitating the evaluation process to maximize outcomes. Involving various stakeholders in the program development process will increase the chances for the program’s sustainability and success. If the stakeholders are engaged in the program’s design, it is more likely that they will support evaluation efforts and corrections towards maximization of outcomes. Successful stakeholder engagement in the logic model development is contingent on the program designers and managers being amenable to hearing multiple perspectives and willing to commit the time and energy to plan, design, implement, and evaluate.
The group engaged in the logic model development process can be made up of any combination of 5-10 stakeholders. The logic model development process group will be making crucial decisions regarding the project. Therefore, these individuals should be familiar with the mission and the vision of the company in addition to having an understanding about the impact of the project within the social, financial and political context in the environment where the project will be implemented. Relevant stakeholders to be considered in the logic model development process in an organization include:
• Project leader
• Project team
• Upper management
• Project customer
• Investors
• Suppliers
• Local community leaders
Furthermore, expectations and responsibilities must be established to guide the process of developing the program logic model within a flexible timeline. Additionally, an agenda should be developed to guide such a process, which could include the following:
• Review list of archival data
• Develop event invitation list
• Tour facility
• Review facility set-up plan
• Tour program and meet staff
• Review supply list
• Make needed decisions about logic model event
• Materials to be sent out
• Walk planning group through the process of the day
• Define program boundaries so it can be clear at the event what we are logic modeling
• Discuss priorities dictated by mission/vision, etc.
Decide on the target audience for your program logic model, with a particular focus on internal and/or external partners.
Gathering Archival Data and Organizing Information
The operational definition of gathering and archiving data is the process of removing selected data items from operational databases that are not expected to be referenced again and storing them in an archive database where they can be retrieved if needed. Thus, databases are not archived; rather, data from databases are archived. As part of the preparation for engaging in the logic model development process, enough archival data should be gathered and organized to facilitate familiarity with the program planning (Trochim, 2006). Archival data that might be appropriate for program development include:
• Copy of the Strategic Plan for the Company
• Copy of the Company’s By-laws
• Mission/vision statements for the Company
• Feasibility studies and or needs assessment done for the program
• Business / Marketing plan
• Program goals and characteristics
• Project goals, objectives and activities
• Memorandum of agreements
• Promotional material
• Management reports
• Planning documents
• Contract reports
• Evaluation plans
• Evaluation reports
• Regulations
• Guidelines
• Program accomplishments
• Existing logic models
Conducting Key Informant Interviews
If necessary, the program design team can schedule interviews with people who have specialized knowledge about the topic you wish to understand regarding the organization and/or the program operations. Key informants include:
• President of the Board of Directors
• Program administrators
• Program staff
• Current clients
• Former clients
• Funders
Program modeling decisions
The program modeling framework featured in this guide can be altered to fit the different needs of the organization and/or project context. The program design team will decide which logic model framework should be used for the program. Some funding sources take the liberty of suggesting which framework must be used in the program design. Additionally, a well-informed decision should be made regarding how to frame the goals, objectives, and activities, which will subsequently drive the Work Plan Matrix (Appendix B). A program can have one goal or several goals. Each goal can be described by one objective or several objectives. Each objective should be operationalized by a minimum of three activities, so it can produce adequate internal consistency reliability among the subscales. Each activity will be measured as a categorical or continuous variable (Trochim, 2006).
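Since each objective is operationalized by at least three activity items, their internal consistency can be checked with Cronbach’s alpha. The following Python sketch uses hypothetical scores; alpha = k/(k − 1) × (1 − Σ item variances / total variance):

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency reliability of a subscale whose columns
    are the (at least three) activity items that operationalize one
    objective: alpha = k/(k-1) * (1 - sum(item var) / total var)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores for three activities measuring one objective,
# collected from six clients.
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5],
                   [2, 3, 2], [4, 4, 5], [3, 4, 3]])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # about 0.91
```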
The following table, Table 10, can be used as a checklist to guide program designers and evaluators in the program development phase and implementation process:
Table 10: Checklist for Evaluation Plan
Checklist (check “Yes” for each criterion met)
1. The problems to be solved and/or issues to be addressed by the planned program are clearly stated.
2. There is a specific, clear connection between the identified community needs/assets and the planned program.
3. The breadth of community needs/assets has been identified by expert/practitioner wisdom.
4. The desired results/changes in the community and/or vision for the future are ultimately described.
5. Influential factors have been identified and cited from expert/practitioner wisdom and/or literature.
6. Change strategies are identified and cited from expert/practitioner wisdom and/or literature.
7. The connection among known influential factors and broad change strategies has been described.
8. The assumptions held for how and why identified change strategies should work are stated.
9. There is consensus among stakeholders that the model accurately describes the program.
An evidence-based test for program modeling is the process of assessing whether the components of the program design are built on credible research to achieve the best possible outcomes. An evidence-based test does not have to be part of the program modeling development process, although it is recommended, as quality models are evidence-based. The program design team will decide on the inclusion of an evidence-based test. If the program design team chooses to use an evidence-based test, the following questions should be addressed:
• Does the program have relevant evidence of efficacy/effectiveness based on methodologically sound evaluation?
• Are the program goals and objectives measurable and appropriate for the intended population?
• Does the program have a clearly stated rationale underlying the program, and alignment of the program’s content and processes with its goals?
Based on the answers to these questions, the program design team will discuss whether the model they developed is evidence based or whether it needs to be strengthened. Furthermore, the program design group should decide whether to set performance standards. A performance standard is a company policy prescribing the conditions that will exist when a satisfactory job is performed. In addition, performance standards should be comprised of measurable behaviors linked to program activities. Many of the performance standard’s components can be extrapolated from the Work Plan Matrix. Furthermore, a comfortable room with a conference-room set-up, with chairs, a large table, a video projector, and a screen, should be made available for the design team (seven to fifteen members) to engage in the construction of the project development model. Each participant should have an individual place card in addition to notepads and access to materials such as pencils and extension cords for computer connections.
Welcome and Introductions
The group facilitator should start the program modeling development meeting by introducing himself or herself while explaining his or her role in the group. The facilitator should describe his or her academic and professional credentials and expertise for the purpose of inspiring confidence as a group leader in the program modeling development process. Subsequently, the facilitator should provide all participants the opportunity to introduce themselves. At that time the participants should state their names, positions, and backgrounds relating to the project design.
Stakeholder’s role clarification
Following this, the facilitator should explain the reasons why the participants were invited to be part of the program design committee. It should be outlined that their keen understanding of some essential aspect of the program, and their input, is crucial for the development of a comprehensive program logic model, and that it paves the way for ownership of the program by all of the company’s stakeholders. Moreover, the facilitators must clarify their role by explaining their function in guiding the participants through the different tasks of the program modeling process and helping them focus their energy on each task.
Brief Background on program models
The facilitator should provide a brief overview of logic models and the program model development process. It is plausible that some of the participants know more than others about the program modeling development process. Following are some sample questions and additional information that could be used to identify the different skill levels among the participants:
• Who has heard of program modeling and/or logic models?
• How many of you have seen a program logic model?
• How many of you have developed or assisted in the development of a program logic model?
It is essential to recognize that this is just a brief background on program models and that not everything can be covered at this juncture. Nonetheless, participants can significantly increase their knowledge of, and effectiveness in, the program modeling process by getting acquainted with the components of the suggested program framework presented in this chapter. Have a qualified person give an overview of the problem and the current situation to be addressed. Since the program model development process includes the perspectives of relevant stakeholders, it considerably increases the likelihood of the program’s success.
Group boundaries and expectations
The facilitators should explain to the group that some essential ground rules were developed to maximize accomplishments of the program modeling development process, which include:
• Listen for instructions
• Feel free to ask any questions
• Stay focused on the tasks at hand
• Take breaks as needed
• Listen to others’ viewpoints
• Raise your hand when you disagree
• All information discussed is confidential
At this point it is best for the facilitator to take a moment and run through a check on the essential items that should already have been covered, which include:
• Everyone has been introduced
• Participants have learned basic information about logic models
• The day’s activities are laid out
• Ground rules are established
Begin Developing the Program Logic Model
The following program framework follows the logic model ideology and is intended to serve as a guide for program development and evaluation, as it draws on the author’s academic and professional experiences. In a world driven by evidence-based accountability, the framework represents a road map to guide investment in scientifically sound programs.
First and foremost, clarify how the program logic model will be used and how it will drive the program implementation and evaluation process. Thus, the program modeling process should begin by identifying the program’s intended results. A clear vision of the program’s results must be expressed as an outcome statement. Subsequently, the framework suggested in this chapter should guide the development process of the subsequent logic model components.
This stage begins by focusing on the development of outcomes and impact, which should be expressed in a succinct statement. The completion of the outcome statement will mark the beginning of the program modeling development process as outlined in the following framework. The following program framework is to be used as an example or outline to assist in the construction of the program logic.
Program Framework
NAME OF COMPANY
Month, year
NAME OF PROGRAM
1. PROGRAM
1.1. Program Summary
Please include one or two sentences describing the program and its readiness to meet the unmet needs of the target population.
1.2. Program Purpose
Construct a statement of the overall purpose of the program, which should be no longer than one paragraph. This is similar to a mission statement; however, to avoid confusion, the agency will have only one mission statement - that of the entire agency. Programs will each have a purpose statement. The purpose statement will clearly define the expected outcomes of the program. Example of a program purpose statement:
To provide opportunities for children to live in stable, healthy family environments where siblings can remain together and the healthy growth and development of children is ed and nurtured.
2. SITUATION AND RATIONALE
2.1. Overview
This section focuses on the situation and reasons driving the program. Thus, write a clear statement describing the existing condition that justifies the program or project need in the community. All descriptions should be supported by an extensive review of the current literature addressing the problem, in which references are cited appropriately. In the course of developing the following components of the program, supporting documentation must be identified to be included in the appendix. Suggestions for the annotated bibliography of sources and appendix:
References - List references in APA format alphabetically by author’s last name
Appendix - Include a copy of any actual instruments used to measure program outcomes. Describe the validity and reliability of the instrument.
2.2. Problem Identification
Problem identification is actually ‘seeing’ the problem before trying to solve it. This is the introductory phase of the program development process, as it involves a clear and precise understanding of the problem at hand. It is crucial that the program development team identifies, understands, and defines the problem in its entire capacity, as it affects all the subsequent activities involved in the program development process. It is noteworthy that outlining the intended overall impact will force the program development process to be focused on the problem to be addressed.
2.3. Causes or Contributing Circumstances (Social, Environmental, Cultural, and Political) To the Problem
Describe the underlying causes, conditions, circumstances and factors responsible for the problem at hand based on an extensive review of current literature.
2.4. Target Population
The target population should be expressed as the number of individuals the program intends to serve, who are often referred to as clients. Additionally, discuss market segment strategies, trends, and growth. The market segment concept is crucial to market assessment and market strategy. Divide the market into workable market segments (age, income, service/product type, service utilization patterns, client needs, etc.). Explain the segmentation, define the different classifications, and develop as much information as you feel you need about the clients within each market segment group.
Introduce the strategy behind your market segmentation and your choice of target markets. Explain why your program is focusing on these specific target market groups. Explain the following:
What makes these groups more interesting than the other groups that you have ruled out?
Why are the characteristics you specify important?
The most classic market segmentation divides people by demographics (age, income, gender, occupation, education, etc.) or geographics (city, state, county, ZIP code, etc.).
Some of the more recent trends include correlating behavioral patterns and so-called psychographics, which produced the famous classifications of “yuppies” and “baby boomers.” Each of these labels actually stands for certain sets of behavior patterns and has some value in segmentation.
2.5. Existing Programs Addressing the Problem In the Target Area
Conduct a search of the existing programs in the target area addressing the problem, while seeking to identify possible gaps in services to be bridged by the proposed program. List the three or four main alternative programs and the strengths and weaknesses of each. Consider their service offering, pricing, reputation, management, financial position, brand awareness, business development, technology, or other factors that you feel are important. Furthermore, identify the following:
• In what segments of the market do they operate?
• What seems to be their strategy?
• How much do they impact your program, and what threats and opportunities do they represent?
2.6. Usage Patterns
Look at the size and concentration of services in this group, the way services are provided, and specific alternatives. Explain the general usage of existing programs, and how the customers seem to choose one provider over another. For example:
• How do people in your target customer group choose between alternative service providers?
• What factors make the most difference for your offerings? Price, or features? Reputation? Image and visibility?
• Are brand names important? Or is it simply word of mouth, in which the secret is long-term, satisfied customers?
It is noteworthy that competition might depend on reputation and trends in one part of the market and on channels of distribution and advertising in another. Although price is vital for services that require a fee, local availability or credentials might be more important for another group of clients.
Note: The U.S. Census website provides free and accurate industry and market data.
2.7. Competitive Edge
You do not have to have a competitive edge to run a successful program (hard work, integrity, and customer satisfaction can substitute for it), but an edge will certainly give you a head start if you need to bring in new grants or sponsorships. The competitive edge of your program should be driven by an awareness of how your program differs from all others. Thus, it is essential to be aware of the alternatives available to your customers. You need to know who’s out there, what they’re offering, and what they’re charging. If you approach this competitive analysis exercise as an opportunity to learn, you may find ways to enhance your own products or services—or at least improve your marketing strategies.
The competitive edge might be different for any given program, even between programs in the same industry. Some typical types of competitive advantages are:
• Price
• Product/services features
• Experienced, credentialized service providers
• Accessibility (closer to customers)
• Aggressive, effective marketing program
• Well-known brand
It’s often useful to show how your competitors’ weaknesses become your program’s advantages/strengths. Use the following table, Table 11, to identify your current competitors while listing advantages/strengths.
Table 11: Competitors and Their Strengths.
Competitor
Their Advantages/Strengths Your Advantages/Strengths
3. INPUTS
3.1. Overview
Inputs are resources a program uses to achieve program goals and objectives. Examples are staff, volunteers, facilities, equipment, curricula, and money. A program uses inputs to support activities related to service delivery (Greenfield et al., 2006). Describe the resources needed to run the program (staff, money, facilities, equipment/supplies, partners, technology, etc.).
3.2. Collaborative Partners and Funders
Identify partners in the community that will support the program, the strength of the relationship between them and the agency carrying the proposed program, and the expected benefits. Succinctly describe the program’s funders and, if applicable, discuss the main funders’ giving history to this program or similar programs.
3.3. Human Resources (Staffing Plan and Personnel Requirements)
List the number of employees (salaries; full-time, part-time, volunteer, and contract), their required credentials, qualifications, and responsibilities. Is your team complete, or are there still gaps to be filled? Job descriptions and logical responsibilities for key personnel may be useful to have in place, as they are often required by funders. Particularly with new programs, you may not have the complete team as you write the program plan. In that case, be sure to point out the holes and limitations, and how you intend to address them.
3.4. Training Needs
Describe any necessary training to enable employees to maximize program outcomes. A particular focus should be placed on the cost and amount of time employees need for training.
3.5. Program Organizational Chart
An organization chart graphically represents the ‘people’ structure of a system, such as management and non-management employees, within that system. The primary reason we need organization charts is to show the reporting relationships of employees and the workflow between the organization and the program. The following is a suggested organizational chart for a non-profit, which is subject to modification to fit program needs.
Figure 9: Organizational Chart
3.6. Definition of a Client
Provide a detailed description of what comprises a client in the context of the program being developed (e.g. an individual, a family, a company, community).
3.7. How Clients Will Be Identified and Brought Into the Program
Explain how services are marketed and how clients are recruited and admitted into the program. Discuss the marketing strategy to promote program services, which normally involves focusing on a target market, emphasizing certain services or media, or finding ways to uniquely position the program. A good marketing strategy depends a great deal on which market segments have been chosen as target market groups for the proposed program. Additional consideration must be given to media strategy, organizational development, or other factors.
3.8. Description of Services
Describe the services to be provided by the proposed program. Describe what you offer and any plans for future offerings.
3.9. Frequency of Services to Be Provided
Explain how often services are provided to each client and the average length of services.
3.10. Where Are The Services Delivered?
Provide a succinct description of the facility where services are to be delivered.
3.11. Criteria of Inclusion for Service Provision
Describe the characteristics that qualify a person to receive program services (age, gender, income, county of residence, etc.).
3.12. Definition of Service Unit
Provide the number of units of service offered by the program, which often reflects the number of hours spent on implementation, evaluation, and reporting outcomes. Unit of Service Information Example: Each client will receive an average of 3 hours of program implementation, 2 hours of program evaluation, and 1 hour for reporting the results. One service unit equals one hour of service (1 service unit = 1 hour of service). Each client will therefore receive 6 units of service over a period of one year.
The Implementation part will be comprised of:
• Psycho-social assessments and follow up for clients and their families.
• Formulating health maintenance plan for each client aligned with a community primary care “medical home model”.
• Conducting group stress management sessions for clients and families.
• Conducting advocacy for clients and their families.
• Connecting clients with medical, financial and social services.
• Conducting group meetings.
The evaluation and reporting part consists of the following:
• Administering evaluation instruments to clients
• Entering information into the Statistical Package for the Social Sciences (SPSS)
• Conducting analysis of the data
• Writing a report of the findings
3.13. Service Unit Cost
Provide the cost per unit of service and how many units of service will be provided for each client, as this is a major component of the budget section of the proposed program.
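As an illustration only, the arithmetic behind the unit cost can be sketched as follows. The dollar figure and client count are hypothetical; the 6 units per client follow the unit-of-service example in section 3.12.

# Hypothetical budget figures for illustration; 6 units per client
# follows the unit-of-service example in section 3.12 (3 + 2 + 1 hours).
total_program_cost = 120_000.00   # all direct and administrative costs
clients_served = 100              # unduplicated clients
units_per_client = 6

cost_per_client = total_program_cost / clients_served
cost_per_unit = cost_per_client / units_per_client

print(f"Cost per client: ${cost_per_client:,.2f}")         # $1,200.00
print(f"Cost per unit of service: ${cost_per_unit:,.2f}")  # $200.00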
3.14. Budget (Revenues)
Describe the revenue sources and / or fundraising strategies for the proposed program. Fundraising strategies deal with how and when to:
• Close fundraising prospects
• Compensate staff
• Optimize order processing and database management
• Maneuver price, delivery and conditions.
The revenue you generate to run your program is often the result of your fundraising strategy, which depends a great deal on which market segments you have chosen as target market groups. Briefly discuss strategies for optimizing methods of fundraising.
3.15. Program Costs
Describe all “direct” program and “administrative” costs incurred by the proposed program. The following table (Table 12) will help frame the program costs:
Table 12: Program Costs.
PROGRAM COSTS (one row per program):
• Total for Program ($)
• Total of Unduplicated Clients Served
• Cost per Client ($)
• Units of Service per Client
• Cost per Unit of Service ($)
ADDITIONAL BUDGET INFORMATION
Table 13: Program Revenue.
REVENUE (each category reported per agency, e.g., Agency X, Agency Y, Agency Z, as an amount and % of total):
• Program Fees
• Program Grants
• Program Donations
• General Agency Revenue
• Transfer from Temp. Restricted
• TOTAL REVENUE
Table 14: Program Expenses.
EXPENSES (each category reported per agency, e.g., Agency X, Agency Y, as an amount and % of total):
• Personnel
• Operational
• Occupancy
• Service Specific
• Miscellaneous
• Program Overhead (over 15%, explain)
• Total Expenses
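The “% Total” columns in Tables 13 and 14 are simply each category’s share of the agency total. A minimal sketch of that calculation, with hypothetical amounts and category names taken from Table 13:

# Hypothetical revenue amounts for one agency; the percentages mirror
# the "% Total" column in Table 13.
revenue = {
    "Program Fees": 15_000,
    "Program Grants": 80_000,
    "Program Donations": 20_000,
    "General Agency Revenue": 10_000,
}
total = sum(revenue.values())
for category, amount in revenue.items():
    print(f"{category}: ${amount:,} ({amount / total:.1%})")
print(f"TOTAL REVENUE: ${total:,}")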
4. ACTIVITIES
4.1. Overview
As a nonprofit-driven program, a particular focus should be placed on measurement and outcomes. Thus, goals, objectives, and activities should be constructed in an observable, measurable manner. Goals are broad outcome statements operationalized by reasonably time-framed objectives, which in turn are operationalized by program activities.
4.2. Program activities:
Activities are what a program does with its inputs—the services it provides—to fulfill its mission. Examples are:
• Utilize the power of partnership and collaboration to provide a medical home, comprehensive medical care, and follow-up for children with SCD.
• Provide medical home transitioning for young clients moving from pediatric care to adult services.
The following is a checklist to ensure readiness for program development and implementation.
Table 15: Readiness For Program Development and Implementation.
Rate each item Yes, No, or Not Yet:
1. Major activities needed to implement the program are listed.
2. Activities are clearly connected to the specified program theory.
3. Major resources needed to implement the program are listed.
4. Resources match the type of program.
5. All activities have sufficient and appropriate resources.
5. OUTPUTS
5.1. Overview
Outputs are the direct and measurable products of a program’s activities and services, often expressed in terms of units (hours, number of people you hope to reach, and actions completed). Outputs are important to track, as evaluation approaches often focus on measuring what a program produces. Typically, evaluators monitor measures known as outputs to document the amount, quality, or volume of use of the project’s products or services.
Examples of program outputs:
• Number of service units provided
• Number of clients reached
• Number of clients referred to a medical home
• Number of funding proposals submitted
Note that more than one output is often necessary to produce a final outcome.
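Because outputs are simple counts, they can be tallied directly from service records. The following is a minimal sketch, assuming a hypothetical list of service-record dictionaries; field names are illustrative only.

# Hypothetical service records; each row is one unit of service delivered.
records = [
    {"client_id": 1, "service": "implementation"},
    {"client_id": 1, "service": "evaluation"},
    {"client_id": 2, "service": "implementation"},
    {"client_id": 2, "service": "referral_to_medical_home"},
]

service_units = len(records)                              # number of service units provided
clients_reached = len({r["client_id"] for r in records})  # unduplicated clients reached
referrals = sum(r["service"] == "referral_to_medical_home" for r in records)

print(service_units, clients_reached, referrals)  # 4 2 1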
6. OUTCOMES
6.1. Overview
Outcomes are the results of program activities, often expressed in terms of increases in desired attitudes and behaviors of participants. From a logic model perspective, outcomes refer back to goals, objectives, and activities. Each program should have between one and five outcomes (no more). The outcomes indicate when the program has achieved its goal(s) and are measured by the outputs. A single outcome is often the result of multiple outputs. This section is the best place to put emphasis on outcomes and measurements as part of your plan. Your goals should be described by measurable objectives using activities as indicators.
6.2. Goals
Goals describe expected outcomes while providing programmatic direction. They focus on results rather than process. Construct observable program goals as having one or more measurable objectives to be achieved within a timeframe.
Example of goals would be:
Goal 1: To provide a case management response to meet the unmet needs of 150 children with diabetes as they transition from pediatric to adult health care settings.
Goal 2: Provide medical home referrals for children with diabetes.
6.3. Objectives
Objectives are clear, specific, measurable, and time-limited statements of action describing how to meet a goal. Objectives are generally of two types:
Outcome objectives: address ends to be obtained
Process objectives: specify the means to achieve the outcome objectives.
Example of objectives would be:
To meet the immediate social/medical needs of 75% of 150 children with diabetes.
By June (year), the project will have referred 500 children with diabetes to the Children’s Medical Center and local medical providers.
6.4. How did you set your target?
A target objective is defined as the purpose toward which an endeavor is directed. Provide a description of how the target objective for your program will be set. Example: The target was set based on the national trend of needs and gaps in services identified by the American Diabetes Association and by children’s hospitals in many states in the US. Furthermore, the Children’s Hospital, which focuses on pediatric care for children with diabetes, is transitioning 240 children from its program to adult services. Many of these transitioning individuals will fall through the net of services due to the lack of guidance to connect them with existing resources. As a complementary effort to bridge gaps in services, this case management program will continue to work in conjunction with Children’s Hospital to connect adolescent patients with diabetes who are transitioning from pediatric care to adult services.
The target-setting process for a typical program includes goals and objectives, which could look like the following:
GOAL #1: Increase awareness and education within the diverse populations affected by diabetes.
GOAL #1: Objective #1: By June (year), the project will have developed and implemented community outreach strategies that will keep a minimum of 500,000 persons informed about diabetes and genetic inheritance patterns.
Activity to be performed to achieve the objective: Disseminate information about diabetes through health fairs, face-to-face contact, TV, radio, newspapers, fliers, billboards, signs promoting diabetes and genetic literacy, and wrapping buses with advertisements or messages that promote diabetes literacy.
The following table provides an easy-to-follow guideline to identify the presence or absence of necessary elements for program design and implementation:
Table 16: Readiness For Program Design
Rate each item Yes, No, or Not Yet:
1. Target participants and/or partners are described and quantified as outputs (e.g., 100 participants).
2. Events, products, or services listed are described as outputs in terms of a treatment.
3. The intensity of the intervention or treatment is appropriate for the type of participants.
4. The duration of the intervention or treatment is appropriate for the type of participants.
5. All activities have sufficient and appropriate resources.
6. Outcomes reflect reasonable, progressive steps that participants can make toward longer-term change.
7. Outcomes address awareness, attitudes, perceptions, knowledge, skills, and/or behaviors.
8. Outcomes are within the scope of the program’s control or sphere of reasonable influence.
9. It seems fair or reasonable to hold the program accountable for the outcomes specified.
10. The outcomes are specific, measurable, action-oriented, realistic, and timed.
11. The outcomes are written as change statements (e.g., things increase, decrease, or stay the same).
12. The outcomes are achievable within the funding and reporting periods specified.
13. The impact, as specified, is not beyond the scope of the program to achieve.
At the time of setting the target outcome, little or nothing is known about final results. Thus, an outcome measurement plan and analysis is critical and should include a detailed description of:
• Measurement tools
• Data collection
• Data analysis
6.5. Measurement instrument
Name of the instrument (include copy of the instrument in appendix)
How long have you used this measurement instrument?
Where did you get this tool / how did you develop it (If you are using a measurement instrument designed by someone else, state the source of the instrument).
Describe the validity and reliability of the instrument (e.g., report Cronbach’s alpha for internal consistency).
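Although the text envisions SPSS for analysis (see section 6.7), Cronbach’s alpha is straightforward to illustrate directly. The formula is alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch follows; the survey scores are hypothetical.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents (rows) x survey items (columns)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-respondent, 4-item survey on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # values of 0.70 or above are conventionally acceptable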
6.6. Data collection
Provide a description of the collection of project data from each core service for analysis and dissemination. Data are collected directly from all clients enrolled in the program. Clients will complete standardized questionnaires, administered face-to-face, by mailed paper forms, or via web-based forms.
6.7. Data analysis
Explain how the data collected will be entered into a database on a periodic basis. For example:
The data analysis includes a careful collection of quantitative data using forms, surveys, and aggregated data from past measurements to generate descriptive statistics. Subsequently, the data collected will be entered into the Statistical Package for the Social Sciences (SPSS) and used to assess strengths, limitations, and accomplishments as the program tactically advances toward the fulfillment of its commitments under the present grant. The measures for evaluation were based on the activities of the project, reflecting goals and objectives linked to the logic model and detailed in the Work Plan Matrix. At the end, the program design team will meet again to review the program model draft.
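The same descriptive statistics that the text assigns to SPSS can also be sketched in Python with pandas; the column names and values below are hypothetical placeholders for the program’s actual questionnaire measures.

import pandas as pd

# Hypothetical client questionnaire data; column names are examples only.
data = pd.DataFrame({
    "age": [12, 14, 13, 16, 15],
    "knowledge_pre": [2, 3, 2, 4, 3],    # 1-5 scale before the program
    "knowledge_post": [4, 4, 3, 5, 4],   # 1-5 scale after the program
})

# Descriptive statistics (count, mean, std, min, quartiles, max) per measure.
print(data.describe())

# Average change in knowledge scores, a simple outcome indicator.
print("Mean change:", (data["knowledge_post"] - data["knowledge_pre"]).mean())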
7. REFERENCES
Alkin, M. C. (1973). Evaluation theory development. In B. R. Worthen & J. R. Sanders (Eds.), Educational evaluation: Theory and practice (pp. 150-155). Belmont, California: Wadsworth Publishing Company, Inc.
Abernathy, D. J. (1999). Thinking outside the evaluation box. Training and Development, 53(2), 18(6).
Bennett, C. & Rockwell, K. (1995). Targeting outcomes of programs (TOP): An integrated approach to planning and evaluation. Unpublished manuscript. Lincoln, NE: University of Nebraska.
Bickman, L. (2000). “Summing Up Program Theory.” New Directions for Evaluation 87: 103-112.
Bickman, L. (Ed.). (1987). Using program theory in evaluation. New Directions for Program Evaluation Series (no. 33). San Francisco: Jossey-Bass.
Boulmetis, J. & Dutwin, P. (2005). The ABCs of evaluation: Timeless techniques for program and project managers (2nd ed.). San Francisco, CA: Jossey-Bass
Bowie, N. (1988). The moral obligations of multinational corporations. In S. Luper-Foy (Ed.), Problems of international justice: 97-113. Boulder, CO: Westview Press.
Braden, R. A. (1992). Formative evaluation: A revised descriptive theory and a prescriptive model. Paper presented at the Association for Educational Communications and Technology (AECT).
Byrne, B. (2001). Structural equation modeling with AMOS: Basic concepts, applications, and programming. London: Lawrence Erlbaum.
Carmines, E.G., & Zeller, R.A. (1979). Reliability and validity assessment. Quantitative Applications in the Social Sciences, 17. Thousand Oaks, CA: Sage Publications, Inc.
Carroll, A. (1993). Business and society: Ethics and stakeholder management. Cincinnati: South-Western Publishing.
Chen, H. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
Clarkson, M.B.E. (1995). A stakeholder framework for analyzing and evaluating corporate social performance. Academy of Management Review, 20(1): 92-117.
Cooksy, Leslie J., Gill, Paige, & Kelly, P. Adam, (2001). The Program Logic Model as an Integrative Framework for a Multimethod Evaluation, Evaluation and Program Planning, 24(2), 119-128.
Cornell, B., & Shapiro, A.C. (1987). Corporate stakeholders and corporate finance. Financial Management, 16: 5-14.
Cronbach, L. J. (1975). Course improvement through evaluation. In D. A. M. Payne, R. F. (Ed.), Educational and psychological measurement: Contributions to theory and practice (2nd ed., pp. 243-256). Morristown, N.J.: General Learning Press.
Dick, W., Carey, L., & Carey, J. O. (2001). The Systematic Design of Instruction (5th ed.).New York: Longman.
Freeman, E. (2007). “Ending the so-called “Friedman-Freeman” debate.” In Dialogue towards Superior Stakeholder Theory, 2007 national meeting of the Academy of Management, featuring an “All-Academy” symposium on the future of stakeholder theorizing in business, offered as a dialogue by the Business Ethics Quarterly.
Freeman, E. (1984). Strategic Management: A Stakeholder Approach. Pitman Publishing. Boston.
Freeman, R.E., & Reed, D.L. (1983). Stockholders and stakeholders: A new perspective on corporate governance. California Management Review, 25(3): 93-94.
Friedman, A., & Miles, S. (2006). “Stakeholders: Theory and Practice.” Oxford University Press, New York.
Fitz-Gibbon, C. T. & Morris, L. L. (1975). “Theory-based evaluation.” Evaluation Comment, 5(1).
Glaser, B. G. & Strauss, A. L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine Publishing Company.
Gibson, K. (2000). The moral basis of stakeholder theory. Journal of Business Ethics, 26(3): 245-257.
Greenfield, V., Williams, V. & Eiseman, E. (2006). Using Logic Models for Strategic Planning and Evaluation, Santa Monica, Calif.: RAND Corporation, TR-370-NCIPC.
Hacsi, T. A. (2000). “Using Program Theory to Replicate Successful Programs.” New Directions for Evaluation 87: 71-78.
Hannafin, M. J., & Peck, K. L. (1988). The design, development, and evaluation of instructional software. New York: Macmillan Publishing Company.
Hanushek, E., & Rivkin, S. (2007). “Pay, working conditions, and teacher quality.” Future of Children 17, no. 1 (Spring): 69-86.
Hatry, H. (1999). Performance Measurement: Getting Results. Washington, DC: The Urban Institute Press.
Hines, J., Hungerford, H., & Tomera, A. (1986). Analysis and synthesis of research on responsible environmental behavior: A meta-analysis. Journal of Environmental Education, 18(2), 1-8.
House, E. R. (1980). Evaluating with Validity. Beverly Hills: Sage Publications.
House, E. R. (2003). “Stakeholder Bias.” New Directions for Evaluation 97: 53-56.
Huebner, T. A. (2000). “Theory-Based Evaluation: Gaining a Shared Understanding Between School Staff and Evaluators.” New Directions for Evaluation 87: 79-89.
Joreskog, K. (1993). Testing structural equation models. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 294-316). Newbury Park, CA: Sage.
Kaplan, D. (1995). Statistical power in structural equation modeling. In R. H. Hoyle (ed.), Structural Equation Modeling: Concepts Issues, and Applications (pp. 100-117). Newbury Park, CA: Sage Publications, Inc.
Kinnaman, D. E. (1992). How to evaluate your technology program. Technology and Learning, 12(7), 12(5).
Kline, R. (1998). Principles and practice of structural equation modeling. New York: Guilford Press.
Kochan, T. & Rubinstein, S. (2000). Toward a Stakeholder Theory of the Firm: The Saturn Partnership, Organization Science, 11 (4): 367-386.
McClintock, C. (2004). Using Narrative Methods to Link Program Evaluation and Organizational Development. The Evaluation Exchange. IX: 14-15.
McLaughlin, J. A. and J. B. Jordan (1999). “Logic Models: A Tool for Telling Your Program’s Performance Story.” Evaluation and Program Planning 22(1): 65-72.
McLarney, C. (2002). Stepping into the light: Stakeholder impact on competitive adaptation. Journal of Organizational Change, 15(3): 255-272.
Mellahi, K., & Wood, G. (2003). The role and potential of stakeholders in “hollow participation”: Conventional stakeholder theory and institutionalist alternatives. Business and Society Review, 108(2): 183-202.
Mitroff, I.I. (1983). Stakeholders of the Organizational Mind. San Francisco: Jossey-Bass.
Nasi, J. (1995). What is stakeholder thinking? A snapshot of a social theory of the firm. In J. Nasi (Ed.), Understanding stakeholder thinking: 19-32. Helsinki: LSR-Julkaisut Oy.
Operations Policy Department (1996). Performance monitoring indicators: A handbook for task managers. Washington, D.C.: World Bank.
O’Sullivan, R. G. (2004). Practicing Evaluation: A Collaborative Approach. Thousand Oaks, CA: Sage Publications.
Pace, C. R., & Friedlander, J. (1978). Approaches to evaluation: Models and perspectives. In G. R. Hanson (Ed.), New Directions for Student Services (pp. 117). San Francisco: Jossey-Bass, Inc.
Patton, M. Q. (1989). “A Context and Boundaries for a Theory-Driven Approach to Validity.” Evaluation and Program Planning 12: 375-377.
Pedhazur, E. & Pedhazur, L. (1991). Measurement, Design and Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.
Popham, J. (1993). Educational evaluation. San Francisco, CA: Jossey-Bass.
Rockwell, K., & Bennett, C. (2004). Targeting outcomes of programs: A hierarchy for targeting outcomes and evaluating their achievement. Faculty publications: Agricultural Leadership, Education & Communication Department. Retrieved December 2, 2011, from http://digitalcommons.unl.edu/aglecfaub/48/.
Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A Systematic Approach (6th ed.). Newbury Park, CA: Sage Publications.
Schumacker, R. E., & Lomax, R. G. (2004). A Beginner’s Guide to Structural Equation Modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.
Schray, V. (2006). A National Dialogue: The Secretary of Education’s Commission on the Future of Higher Education, Issue Paper, “Accountability/Assessment.” Washington, D.C.: U.S. Department of Education.
Scriven, M. (1991) Evaluation thesaurus (4th. Edition). Thousand Oaks, CA: Sage Publications, Inc.
Scriven, M. (1973). The methodology of evaluation. In B. R. Worthen & J. R. Sanders (Eds.), Educational evaluation: Theory and practice (pp. 60-106). Belmont, California: Wadsworth Publishing Company, Inc.
Sharfman, M. P. & Fernando, C. (2008). Environmental risk management and the cost of capital. Strategic Management Journal, 29: 569-592.
Spaulding, D. T. (2008). Program evaluation in practice: Core concepts and examples for discussion and analysis. San Francisco, CA: Jossey-Bass.
Stake, R.E. (1976). Evaluating educational programmes: The need and the response. Paris, France: Organisation for Economic Cooperation and Development.
Stevens, F., Lawrenz, F., & Sharp, L. (1997). User-friendly handbook for project evaluation: Science, mathematics, engineering, and technology education. Arlington, VA: National Science Foundation.
Suchman, E. A. (1967). Evaluative Research: Principles and Practice in Public Services and Social Action Programs. New York, Russell Sage Foundation.
Taylor-Powell, E. (2005). Logic Models: A Framework for Program Planning and Evaluation, University of Wisconsin-Extension-Cooperative Extension, (Slide Presentation) Nutrition Food Safety and Health Conference, Baltimore, Maryland.
TenBrink, T. D. (1974). The evaluation process: A model for classroom teachers, Evaluation: A practical guide for teachers (pp. 2-19). New York: McGraw-Hill.
Tessmer, M. (1993). Planning and conducting formative evaluations. London: Kogan Page.
Trochim, W. (1985). Pattern matching, validity, and conceptualization in program evaluation. Evaluation Review, 9(5), 575-604.
Trochim, William M. (2006). The Research Methods Knowledge Base, 2nd Edition. Retrieved April 13, 2010, from: http://www.socialresearchmethods.net/kb/.
Valdez, G. E. (2000b). Evaluation Design and Tools, [Online]. North Central Regional Educational Laboratory. Available: http://www.ncrel.org/tandl/eval2.htm [2000, December 9].
W.K. Kellogg Foundation (2001). Logic model development guide: Using logic models to bring together planning, evaluation, & action. Retrieved April 13, 2010, from: http://ww2.wkkf.org/default.aspx?tabid=101&CID=281&CatID=281&ItemID=2813669&NID=20&LanguageID=0
W. K. Kellogg Foundation (2004). Logic Model Development Guide. Retrieved March 2011 from http://www.wkkf.org/knowledgecenter/resources/2006/02/WK-Kellogg-Foundation-Logic-Model-DevelopmentGuide.aspx
Weiss, C. H. (1972). Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, NJ: Prentice Hall.
Weiss, C. H. (2000). “Which Links in Which Theories Shall We Evaluate?” New Directions for Evaluation 87: 35-45.
Wholey, J. S. (1987). “Evaluability Assessment: Developing Program Theory.” New Directions for Program Evaluation 33: 77-92.
Wholey, J. S. (1979). Evaluation: Promise and Performance. Washington, D.C., The Urban Institute.
Wood, J., & Jones, E. (1995). Stakeholder mismatching: A theoretical problem in empirical research on corporate social performance. International Journal of Organizational Analysis, 3(3): 229-267.
Worthen, B.R., & Sanders, J. R. (1987). Educational evaluation: Alternative approaches and practical guidelines. New York: Longman.
Appendix A
Overview
The diagram displayed on the next page shows an outcome sequence model designed to describe process and outcomes. In addition to the diagram, logic models can include a narrative that explains the relationships between these components. Fully specified logic models also identify the external factors that can hinder the efforts of program staff or help them achieve the program’s objectives (Rockwell and Bennett, 2004).
Figure 10: Logic Model (example)
Example of a Logic Model & Narrative
The Program logic model on the previous page includes the following assumptions:
Pervasive lack of awareness of sickle cell risk among the target population contributes to the lack of community support to improve the financial resources and difficult social and economic realities of those impacted by this devastating disease.
The African American community is disproportionately affected by the adverse consequences of sickle cell disorders and this disparity needs to be addressed within the framework of increasing awareness and education by:
• Screening & testing
• Identification of sickle cell trait/disease status
• Notification
• Genetic counseling
• Case management
Therefore, improving SCD and trait notification, increasing SCD knowledge among patients and providers and increasing public knowledge of the risk of sickle cell disorders among African Americans while using the power of partnership can serve as catalysts for change.
Follow-up of families of children with hemoglobin traits in North Texas is an issue of great concern, as there is no aggressive program, due to the lack of legislative funding authority, to support notification, the provision of extended family testing, family education, and the genetic counseling needed for informed family planning decisions that will contribute to breaking the sickle cell cycle.
Every child with SCD deserves a medical home and fostering partnerships with Southwestern Comprehensive Sickle Cell Center, primary care providers and area hospitals can improve outcomes.
Refinements were made to the model with special attention to the desired impact of the project, which is to improve health and quality-of-life outcomes for children and families diagnosed with SCD or traits, and to empower the at-risk population through genetic education and access to resources. Project goals have been specifically designed and modified to increase achievability while maintaining alignment with the project’s priorities. Changes in the logic model reflect the changes made to the goals and objectives. Explanations of the specific changes are recorded in the enclosed technical report entitled Performance Evaluation of Goals, found as the last paragraph in each section for each goal and objective under the heading: Where and why adjustments were made.
Project tasks, activities, and services to be performed, and the liaisons required in order to achieve stated objectives, are briefly described in the work plan matrix. Efforts were expended in the project design to avoid service duplication and inefficiency and to specifically promote service coordination, integration, and infrastructure development at local, state, and national levels. The project is cognizant of state and national efforts to integrate genetic services into the current health care delivery system and to develop the gathering, sharing, and use of genetic information.
Since January 2007, the … (NAME OF YOUR ORGANIZATION) has reached over 400,000 persons in the Dallas and Fort Worth area with sickle cell informational material, which means that we are exceeding our target objectives. In an effort to improve effectiveness and efficiency, however, the SCDAD is endeavoring to strengthen existing partnerships with Baylor Medical Center, the University of Texas at Dallas, the Sickle Cell Disease Research Center, and the University of Texas at Arlington. Thus, senior executives of the … (NAME OF YOUR ORGANIZATION) met with directors of the Sickle Cell Disease Research Center, Baylor Medical Center, and the University of Texas at Arlington to identify possible gaps in sickle cell services in the Dallas area. Subsequently, strategies are being forged to meet unmet needs while making a contribution to the existing body of knowledge and theory through applied research and services. These partnerships will raise the credibility of the … (NAME OF YOUR ORGANIZATION) within the medical and social services community and maximize our capacity for project implementation, especially in areas linked to outreach and education.
Furthermore, the … (NAME OF YOUR ORGANIZATION) is exploring educational opportunities and potential partnerships with the Martin Luther King, Jr. Family Clinic and Los Barrios Unidos Community Clinic, which are charged with providing primary and preventative health care to underserved populations. These entities are Federally Qualified Health Centers (FQHC) located in the DFW area. In addition, the … (NAME OF YOUR ORGANIZATION) is pursuing a partnership with the local minority professional coalition, CAAPCO (Coalition of African American Professional Community Organizations), to maximize opportunities to raise awareness about SCD, secure volunteers, lobby for legislative change about trait status notification, and open doors to potential donors.
Since January 2007, the … (NAME OF YOUR ORGANIZATION) has screened and tested over 950 persons, and based on the monthly outcome patterns of previous years, we anticipate that over 500 persons will be screened in the months of September, October, and November.
In an effort to maximize screening outcomes, the … (NAME OF YOUR ORGANIZATION) has conducted a descriptive analysis of data linked to the project, resulting in information that increases effectiveness and efficiency in the program design. Thus, the … (NAME OF YOUR ORGANIZATION) has developed a method for targeting screening events where the at-risk population is most likely to be reached. In addition, the … (NAME OF YOUR ORGANIZATION) is partnering with local health departments to increase awareness about the need for trait screening and genetic counseling, which may create opportunities to share costs, facilities, and other resources.
The … (NAME OF YOUR ORGANIZATION) offices are located in the same building as those of a local state senator, Royce West. Subsequently, the … (NAME OF YOUR ORGANIZATION) has capitalized on the relationship with Senator West to reintroduce to the State Senate a better-formulated policy proposal that would amend current legislation regarding newborn screening for SCD. The proposed policy will require notification of parents of test results related to sickle cell trait, and authorize the state health department to provide access to certain services and programs for children with special health care needs. The … (NAME OF YOUR ORGANIZATION) is continuing its efforts to lobby for legislative change to increase trait status notification as well as state-funded services for SC trait carriers and their families.
Appendix B
Work Plan Matrix
Overview
The following work plan matrix is a framework provided as an example to assist in the development of your own project matrix. In addition, the Complete Work Plan Matrix below (Page 115) can be used as a guideline to organize and track progress toward the accomplishment of the goals, objectives, and activities associated with the logic model depicted above.
Table 17: Work Plan Matrix.
Overall Outcomes Sought:
Goal, Priority Area 1:
Objective 1.1 (state how the objective is linked to the goal):
Steps/Activities | Person Responsible | Dates | Indicator(s)/Measures | Strategies
Activity 1.1a / Activity 1.1b / Activity 1.1c
Objective 1.2 (state how the objective is linked to the goal):
Steps/Activities | Person Responsible | Dates | Indicator(s)/Measures | Strategies
Activity 1.2a / Activity 1.2b / Activity 1.2c
Table 18: Complete Work Plan Matrix
Overall outcome sought 1: Increase awareness and education within the diverse population …
Goal, Priority Area 1: Community Outreach.
Linked performance measure for Priority Area 1: Conduct effective community outreach activities …
Objective 1.1: By June (year), the project will have developed and implemented community outreach strategies …
Activity 1.1: Disseminate sickle cell disease and trait information through the website, health fairs, …
Objective 1.2: By June 2020, the project will have increased knowledge of SCD inheritance patterns …
Activity 1.2: Educate families about the sickle cell inheritance pattern in their family.
Overall outcome sought 2: Reduce the adverse impact of sickle cell disease on at-risk populations …
Goal, Priority Area 1: Provide screening, testing, and notification.
Linked performance measure for Priority Area 1: Conduct activities to determine sickle cell trait status …
Objective 2.1: By June 2011, the project will have increased knowledge of sickle cell trait status …
Activity 2.1a: Obtain informed written consent and take blood samples.
Activity 2.1b: Forward samples to the testing lab at Children’s Medical Center Dallas for determination …
Activity 2.1c: Review test results for the presence or absence of abnormal hemoglobin.
Activity 2.1d: Notify the client of the results.
Activity 2.1e: Explain the hemoglobin report results.
Objective 2.2: By June 20.., the project will have provided genetic counseling to a minimum …
Activity 2.2a: Determine sickle cell trait status and increase knowledge of trait inheritance patterns …
Objective 2.3: The project will have increased sponsorship of blood donation drives for use by …
Activity 2.3a: Utilize the power of partnership with Carter BloodCare and the Red Cross to …
Activity 2.3b: In partnership with Carter BloodCare, the SCDAD will put up posters and banners …
Activity 2.3c: In partnership with Carter BloodCare, the SCDAD will provide sign-up sheets for …
Overall outcome sought 3: Provide medical home referrals for SCD children. (According to the …)
Goal, Priority Area 1: Referrals to a medical home.
Linked performance measure for Priority Area 1: Promote awareness of and increase access to …
Objective 3.1: By June 2011, the project will have referred 500 SCD children to the Southwestern Comprehensive Sickle Cell Center …
Activity 3.1a: Utilize the power of partnership with the Martin Luther King, Jr. Family Clinic and collaborate …
Activity 3.1b: In collaboration with the Southwestern Comprehensive Sickle Cell Center, …
Logic Model Example
EVALUATION PLAN
Outcomes and indicators:
• Increase sense of industry and competency: 80% of school-age Street Outreach clients will …; 80% of clients will re-enter school.
• Strengthen feelings of connectedness to others and to society: 80% of families of clients* will …; 80% of families of clients* will ….
• Strengthened belief in their control over their fate in life: 95% of clients will discharge to …; 80% of clients will not run away.
• Increased awareness of personal identity: 80% of clients will report increased ….
10. Appendix C
10.1 Client Story