After the evaluator has determined the type of information available, the next
step is to develop a list of the information still needed and the best way to collect it.
This is relevant not only to the evaluability assessment, but also to future process and
outcome evaluations.
Step 5 includes lists of questions about each of the five aspects of the program.
The boxes illustrate how these types of questions might apply to the program
under evaluation. The evaluator will need to decide how much of this information is
relevant, as well as what additional information should be included.
Questions about Clients
These questions help determine who the actual and intended program
participants are (see also Section 3, Subsection "Target Population").
- Who is the target audience of the program?
- Is the program designed to have community-wide penetration?
- How are participants referred?
- Do participants differ in systematic ways from non-participants?
One purpose of an evaluability assessment is to determine whether the program
is reaching its target population. In conducting the evaluation, the evaluator must first
determine if there is agreement about who is in this population. For example: All first-
time parents? Only families who are identified as being at-risk? Only families that fall
within selected categories and subcategories?
Second, the evaluator must determine whether prospective clients are being
selected in a systematic way. Are there specified inclusion and exclusion criteria? Is
there a plan for client recruitment (e.g., all families with young children, all families with
first births that fall within a specified period)?
Finally, the evaluator will need to consider whether a systematic plan currently
exists for handling refusals. Is there a protocol for "passive refusals" (i.e., those who
never return telephone calls or respond to other attempts at contact)? How many failed
contact attempts are necessary before a family is excluded? What are the demographic
characteristics of refusals, and do they differ from those of study participants?
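The last question above can be checked with a simple statistical comparison. As a minimal sketch (the demographic variable and all counts below are hypothetical), a two-proportion z-test indicates whether refusals differ from participants on a single binary characteristic:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-statistic: do groups A and B differ on a binary trait?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: first-time parents among 200 participants vs. 100 refusals
z = two_proportion_z(120, 200, 45, 100)  # 60% vs. 45%
print(round(z, 2))  # 2.46; |z| > 1.96 suggests a difference at the .05 level
```

In practice an evaluator would repeat this comparison for each demographic variable of interest, or use a chi-square test when a variable has more than two categories.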
If such data are unavailable, then monitoring whether the program is reaching its
target population might be included as a process-evaluation component of the overall
evaluation approach. This analysis would permit adjustment of recruitment techniques
if it is determined that an important subgroup of the target population is not being
reached.
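One way to monitor reach is to compare enrollment counts against estimates of the target population by subgroup. A minimal sketch, in which the subgroup names and all counts are entirely hypothetical:

```python
# Hypothetical target-population estimates vs. actual enrollment, by subgroup
target = {"first-time parents": 400, "at-risk families": 250}
enrolled = {"first-time parents": 220, "at-risk families": 60}

for group, n in target.items():
    reach = enrolled.get(group, 0) / n  # fraction of the subgroup enrolled
    note = "  <- recruitment may need adjustment" if reach < 0.50 else ""
    print(f"{group}: {reach:.0%} reached{note}")
```

A flagged subgroup would prompt the kind of recruitment adjustment described above; the 50% threshold is illustrative only.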
Questions about Program Model
Questions in this section inform the evaluator about the intended model or
blueprint for the program under evaluation including goals, objectives, activities and
linkages among them. In order to develop evaluation questions for both process and
outcome evaluations, program goals and objectives must be clearly identified. In the
evaluation context, the terms "goal" and "objective" are not always clearly differentiated,
and are often used interchangeably. However, a distinction is made between these
terms as follows:
Goal: A general statement summarizing the intended benefits of implementing
the program. Program goals encompass cognitive, behavioral and attitudinal
changes in clients (Herman, Morris, & Fitz-Gibbon, 1987). A program may
have multiple goals, and each goal is likely to have multiple objectives.
Objective: Objectives are the operationalization of goals, and the intended
measurable results that clients strive to achieve. Objectives are more specific
than goals, and they describe strategies employed in accomplishing goals.
Objectives are often stated as an increase or decrease by a specific
number or percentage within a given time period. For example: "Reduce
child maltreatment reports by 30% within the next 5 years."
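An objective stated this way implies straightforward arithmetic checks. As a sketch (the baseline figure and the assumption of a linear path to the target are hypothetical), one can compute an interim target for any year and test whether the final target has been met:

```python
def interim_target(baseline, pct_reduction, horizon_years, year):
    """Interim target for a given year, assuming a linear path to the goal."""
    year = min(year, horizon_years)
    return baseline * (1 - pct_reduction * year / horizon_years)

def objective_met(baseline, observed, pct_reduction):
    """True if the observed rate meets the full reduction target."""
    return observed <= baseline * (1 - pct_reduction)

# Hypothetical: 200 maltreatment reports/year at baseline, 30% cut over 5 years
print(round(interim_target(200, 0.30, 5, 3), 1))  # year-3 target: 164.0
print(objective_met(200, 135, 0.30))              # 135 <= 140 -> True
```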
- What is the program intended to do for the people it serves?
- Have formal goals and objectives been identified?
- What are they?
- Are there clearly defined linkages between the program's theoretical
framework, activities, and objectives?
Example 1.8: Review of USAF-Family Advocacy Program Prevention Goals & Objectives
The UNH team's analysis of the H.O.M.E.S. outreach program did not find clear
linkages between theory, activities, and objectives. Objectives were far-reaching given
the low intensity of services (e.g., decreasing the incidence of negative teenage
behaviors: suicide, chemical dependency, physical fighting, arrests, running away,
truancy, assaults, and related health contacts). In addition, there was a lack of
agreement in the field that the primary goal of H.O.M.E.S. outreach is the prevention of
child and spouse maltreatment. Nevertheless, objectives for this USAF-FAP program
were clearly stated (though not in numerical terms), consistent with practice, and
potentially achievable (e.g., increase parental knowledge, decrease abuse potential,
decrease levels of stress).
Questions about Process
Process questions focus on program activities and services, including service
intensity and length. (Process evaluation is also the focus of Section 3.)
- What types of activities are there?
- What is the schedule of activities?
- Are the actual program activities the same as those initially specified in the
program model?
- Is there a prescribed number of contacts with families? Are the projected
goals being met?
- What is the duration of each program for participants?
- What are the perceived barriers and challenges to intervention that are faced
by program staff?
During a process evaluation, the actual delivery of services is evaluated. The
evaluator is concerned about whether each family is receiving roughly the same
treatment. Issues to consider include:
- How are assessment tools being used?
- Are forms incomplete? Are forms "user-friendly?"
- Is the program realistic in terms of the number of contacts that must take
place?
- Is there a procedure for handling missed visits?
- Are there mechanisms for tracking "dose" effects due to different levels of
participation?
- Do field personnel have a clear understanding of what to do and how to
administer the prescribed program in a consistent way?
- Are there regular staff trainings and spot-checks on inter-rater reliability?
These questions are important early on in an evaluability assessment, and
can head off trouble. They can also become part of an on-going process
evaluation (discussed in more detail in Section 3, Subsection "Monitoring
Delivery of Services").
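The "dose" question above lends itself to a simple tracking sketch. Assuming, hypothetically, that the program model prescribes four contacts per family and that visit logs record the outcome of each scheduled contact, under-dosed families can be flagged like this:

```python
# Hypothetical visit logs: family ID -> outcome of each scheduled contact
visits = {
    "fam01": ["completed", "completed", "missed", "completed"],
    "fam02": ["completed", "missed", "missed"],
}
PRESCRIBED = 4  # assumed number of contacts in the program model

def dose(log, prescribed=PRESCRIBED):
    """Fraction of prescribed contacts actually delivered."""
    return log.count("completed") / prescribed

for fam, log in sorted(visits.items()):
    d = dose(log)
    flag = "LOW DOSE" if d < 0.75 else "ok"  # 75% cutoff is illustrative
    print(f"{fam}: {d:.0%} {flag}")
```

Aggregating such dose figures across families is one way to feed "dose-effect" questions into a later outcome analysis.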
Example 1.9: Evaluating a Parenting Class
As part of a primary prevention effort, program staff may decide to offer one or more
parenting classes for all parents of infants and/or young children. A process evaluation
of a primary-prevention program is similar to that of a secondary-prevention program. It
will be important for the evaluator to determine which population the program is
reaching and whether the curriculum is being delivered consistently. The evaluator will
need to know how the program is being marketed, what the specific learning objectives
are, and if the course that is being evaluated matches the model. Perhaps most
importantly, the evaluator will want to determine if a previously developed ("canned")
course translates well to the population being served. When sites modify a program,
these modifications could indicate that the program as originally developed did not
translate well to the specific population (see Section 3, Subsection "Program Activities").
Questions about the Organization of Services
In this section are questions about the organizational structure of the program
and the program's variability across sites.
- Where are the services provided?
- Who provides the services?
- How large is the staff and is there adequate coverage?
- How and by whom is the program resourced?
- How is the program administered?
- What is the organizational hierarchy?
- How do programs vary across sites in implementation of the model?
- At what point do families become involved in the programs?
Example 1.10: Organization of Services
Questions under this heading are perhaps the most straightforward. Much of this
information can be gathered from existing records, and is related to assessment of inter-
site differences. Differences between sites are almost inevitable, but good data
collection at this point will help explain why these differences exist.
One crucial aspect is whether there is adequate staffing to deliver the intended
program. The evaluator may encounter a situation where field workers are assigned too
many cases to adequately provide the services specified in the design (e.g., a certain
number of home visits). In this situation, more staff may need to be hired or the model
may have to be modified. Also, it is important to determine whether staff are supported
in other ways including adequate training and feedback when they encounter situations
that call for extra care. For example, is there a mechanism for group case-management
or are all decisions made from the top down?
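The caseload concern above reduces to simple arithmetic. A minimal sketch, where the monthly visit requirement and per-worker capacity are assumed figures rather than program data:

```python
import math

def staffing_gap(families, visits_per_family, visits_per_worker, workers):
    """Extra workers needed (positive) or slack (negative), per month."""
    needed = math.ceil(families * visits_per_family / visits_per_worker)
    return needed - workers

# Hypothetical: 180 families, 2 home visits each per month,
# 40 visits per worker per month, 7 workers on staff
print(staffing_gap(180, 2, 40, 7))  # 9 workers needed - 7 on staff = 2 to hire
```

A positive gap signals the choice described above: hire more staff or modify the model.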
Questions about Outcomes
Outcome questions cover intended program effects, observable program effects,
and available measures of program effects (see also Section 5).
- Is the program's treatment clearly identifiable and consistent?
- Are the outcomes clear, specific, and measurable?
- What types of outcome measures are being used across programs and
- What is the availability of outcome data?
Example 1.11: Outcomes for the USAF-FAP First Time Parents (FTP) Program
The University of New Hampshire evaluability assessment team identified some
limitations in the collection of outcome measures used for the families. For example,
many sites were not collecting post-program Child Abuse Potential (CAP) assessments.
Because military families often relocate on short notice, many of the families were lost
to follow-up, an issue that needs to be addressed during a process evaluation. In
addition, there were no measures of family strengths or family gains.
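Problems like these can be surfaced with a quick completeness check on the outcome file. A sketch using hypothetical pre/post CAP scores, where None marks a post-program assessment that was never collected:

```python
# Hypothetical pre/post CAP scores by family; None = post-test never collected
cap = {"f1": (150, 120), "f2": (200, None), "f3": (180, 160), "f4": (210, None)}

# Families with both assessments, loss-to-follow-up rate, and mean score change
complete = {k: v for k, v in cap.items() if v[1] is not None}
attrition = 1 - len(complete) / len(cap)
mean_change = sum(post - pre for pre, post in complete.values()) / len(complete)
print(f"attrition: {attrition:.0%}, mean CAP change: {mean_change:+.1f}")
```

A high attrition figure like this one would flag the follow-up problem before any outcome claims are made from the remaining cases.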
At this early phase, the goal of the evaluator is to make certain that outcomes
can be assessed. Are there good theoretical reasons for the selection of variables?
Does everyone agree on what those important variables are? What steps were taken to
limit bias in data collection? Are data on critical variables being collected at all, let
alone collected correctly?