Journal Issue: Home Visiting: Recent Program Evaluations Volume 9 Number 1 Spring/Summer 1999
What Services Did the Program Provide?
Answering this question can provide information that can be used both to improve a program and to interpret the results of evaluations focused on program effectiveness. For example, if an evaluation suggests that services are not being delivered as intended, then program administrators may want to institute quality-improvement measures to improve implementation, or they may decide that the model or curriculum should be modified because practice in the field is suggesting a better approach. Program evaluators may use implementation information to make sure that the evaluation is a fair test of the intended intervention and not an evaluation of a poorly implemented shadow of it. In addition, evaluators can use implementation information to explain the results that their evaluations of outcomes eventually produce.

Intensity of Services
Several of the reports in this journal issue suggest that families received fewer home visits than were intended by their models—in some cases, families averaged about 40% to 60% of the number of visits intended in the models (among those reporting this information were Parents as Teachers [PAT] and the Nurse Home Visitation Program). For many families, therefore, the intervention that was tested was not as intensive an intervention as the model developers planned.
That information by itself cannot determine the next steps, but it can alert program planners to some possible alternatives. For example, it might suggest that visitors need additional training in how to contact hard-to-reach families. Alternatively, perhaps the planned intensity level of services is simply unrealistic. Or perhaps the model needs to be modified to make it more interesting, and then parents will seek it out more readily.
Whether the results are used to improve existing practice or to alter the model, understanding these variations in “dosage” has implications for understanding the eventual outcomes of any program. Some of the evaluations (for example, see the article by Wagner and Clayton in this journal issue) suggest that families that receive higher-intensity services benefit more than those that receive fewer visits; if this is correct, then knowing that the tested intervention did not deliver as many visits as planned may mean that the program is less likely to produce the intended benefits.

Content of the Visits
The content of the home visits may also stray from the intended curriculum. Most of the home visitation programs described in this journal issue have core curricula, but visitors may not always be able to deliver the lesson plans. A mother may be concerned about a sick infant, or may have had a very rough night with an abusive spouse, and she may want to talk about those issues rather than about the presumed topic for the day. The home visitor is likely to set aside the curriculum to address the mother's more pressing concerns. That ability to respond to parental concerns immediately and with sensitivity is one of the hallmarks of home visitation programs, and is widely seen as one of their strengths. Nevertheless, if such deviation occurs on a regular basis, or if individual home visitors consistently vary their programs as a reflection of their own backgrounds and experiences, then the service the home visitors provide is not the same as what program designers originally proposed.
The evaluations in this journal issue do not directly report on this aspect of program implementation, although Baker, Piotrkowski, and Brooks-Gunn suggest in their article about the Home Instruction Program for Preschool Youngsters (HIPPY) that variation in delivery of the intended curriculum does occur. Usually, evaluators try to capture the content of the services through (1) interviews with home visitors conducted some time after the visits occur (see the article by Baker, Piotrkowski, and Brooks-Gunn in this journal issue), (2) reports by home visitors summarizing what occurs during the lessons, or (3) results of analyses of videotapes of actual home visits (see the article by Wagner and Clayton in this journal issue).
If evaluations reveal that differences in program content have occurred, program planners may want to change the model to incorporate the changes the home visitors are making. Or, if they believe the differences reflect poor training, they may institute in-service training or closer supervision to encourage more faithful implementation.
From a methodological point of view, however, averaging results across all the home visitors in a particular program, each with his or her own style and session content, may disguise the differences present, and so mask program effectiveness. Because such individualization of services is inherent in home visiting, it is quite possible that this has occurred to some extent in all the evaluations reported in this journal issue. Only a careful analysis of information concerning what actually occurred during home visits would allow this to be disentangled, and such information is not available for most programs.

Ancillary Services
The services that are provided in the home are often only a part of the total intervention. Some programs (for example, HIPPY and PAT) offer both home visits and parent group meetings. The HIPPY evaluation reported that some types of families were more likely to attend the group meetings than to persevere with the home visits. If outcomes such as children's development differed among these families, then knowing who actually made use of the offered services might help explain those results.
Most programs also seek to connect families with a range of services in the community, including health and child care services for the children, and employment, housing, transportation, and drug-treatment services for the parents. The services families are referred to and receive essentially become part of the program for those families, and may become a critical element in the observed success or failure of the home visiting program. Evaluators therefore sometimes track the community services families receive (for example, see the article by St. Pierre and Layzer on the Comprehensive Child Development Program [CCDP]), but this is an expensive task that requires the cooperation of participating families and community agencies to either complete interviews or approve the release of family records. For these reasons, few evaluations, including those in this journal issue, capture those ancillary services in great detail.
Determining which services families receive may have implications for program quality: If program administrators believe that the strength of their model depends upon linkages of families with other community institutions, but those linkages never occur, then the administrators may seek other strategies to forge those connections. From a methodological point of view, the variability in the extent to which families seek out and access services may decrease the likelihood that an evaluation will detect a difference in overall outcomes for all families. In addition, a model that relies heavily on other community services may suggest that the model's success in one community will not necessarily translate to another community.