How might you approach the evaluation of a disaster case management program from a benefit-cost perspective? That is, if you were asked to assess the relative efficiency of such a program, how would you approach the problem of conducting a fair and reasonable evaluation? What would be the key elements of your evaluation? What categories of benefits and costs would you attempt to measure? What difficulties would you foresee in those measurements? What other key considerations do you think would be important?
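One standard starting point for the efficiency question above (a conventional benefit-cost framing, not something prescribed by the chapter that follows) is to compare discounted streams of benefits and costs:

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
\qquad
\mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}
\]

Here \(B_t\) and \(C_t\) are the program benefits and costs accruing in year \(t\) (for a case management program, benefits might include avoided shelter costs or restored household earnings; costs might include case manager salaries, training, and database infrastructure), and \(r\) is a chosen discount rate. A program is judged relatively efficient when NPV is positive, equivalently when BCR exceeds 1; the difficulty, as the questions above suggest, lies in monetizing benefits such as housing stability or reduced distress.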
Janis, A., Stiefel, K. M., & Carbullido, C. C. (2010). Evolution of a monitoring and evaluation system in disaster recovery: Learning from the Katrina Aid Today National Case Management Consortium. In L. A. Ritchie & W. MacDonald (Eds.), Enhancing disaster and emergency preparedness, response, and recovery through evaluation. New Directions for Evaluation, 126, 65-77.

6

Evolution of a Monitoring and Evaluation System in Disaster Recovery: Learning From the Katrina Aid Today National Case Management Consortium

Amanda Janis, Kelly M. Stiefel, Celine C. Carbullido

Abstract

Based on their personal experience and reflections, the authors describe and analyze the monitoring and evaluation system employed by Katrina Aid Today (KAT), a program created by a consortium of partner agencies to provide disaster recovery case management services throughout the United States. In 2005, Hurricane Katrina devastated communities along the U.S. Gulf Coast and displaced scores of people on a scale previously unknown in the United States. The authors' reflections on the KAT model provide suggestions for future evaluations of disaster case management. The need for flexibility in disaster recovery monitoring and evaluation program design, critical aspects of implementing and adapting an interagency monitoring and evaluation system, evolving interagency data collection, developing outcome measures, and emphasizing program evaluation in disaster recovery are discussed. © Wiley Periodicals, Inc., and the American Evaluation Association.

Hurricane Katrina devastated the communities of the Gulf Coast of the United States on August 29, 2005, hitting the coastline states of Louisiana, Mississippi, and Alabama. The hurricane's destruction created a displacement of persons on a scale previously unknown in modern American history. Families were scattered throughout the country, resulting in a great need for long-term assistance and social support in order to achieve recovery.

In response to the large number and diversity of households affected by Hurricane Katrina, the United Methodist Committee on Relief (UMCOR), with funding from the Federal Emergency Management Agency (FEMA), created a unified consortium of agencies providing disaster recovery case management services. Between October 2005 and March 2008, Katrina Aid Today (KAT) operated as a coordinated social service network based on a model that emerged following 9/11. As such, KAT established its consortium as a multiorganization, multistate coordinated recovery intervention and provided case management services from December 2005 until March 2008.

KAT's programmatic design included a shared on-line database, standardized forms and reports, and technical and programmatic support, in addition to the case management services provided by agencies. The consortium structure comprised one administering agency, nine national partner agencies (managers), 134 subagency organizations, and 16 local partner organizations (field offices). KAT quickly scaled up in terms of operation and staff to roll out a standardized program design and implementation plan among the multiple organizations and states of operation.
One of the core components of KAT, as stated in the original proposal to FEMA, was a monitoring and evaluation (M&E) system that would be used to ensure that the goals and objectives of this interagency project were being met in a transparent manner. UMCOR's disaster response strategy foresaw the importance of monitoring and measurement for describing the performance of UMCOR and the consortium through reporting to FEMA and the wider public. This transparency would presumably foster continued changes through the program design, best practices, and lessons learned in an accountable and responsible manner. The proposal also stated that a "key strategy (of the M&E design) would be to weave and infuse M&E activities into all levels and aspects of the project's administration and implementation." KAT's original design included monitoring and evaluation at all levels of the project. This chapter discusses five themes:

1. Flexibility in disaster recovery M&E program design
2. Implementing and adapting an interagency M&E system
3. Interagency data collection and how it evolved
4. Developing outcome measures
5. Emphasizing program evaluation in disaster recovery

Katrina Aid Today's monitoring and evaluation system demonstrated a capacity to develop an interagency system in real time as learning and evolution took place. This has implications for other monitoring and evaluation systems that operate in the disaster relief and recovery sector.

Flexibility in Program Design

As with other M&E systems, a results framework guided the system in outlining program components through the identification of a program goal, inputs, outputs, and activities (see Figure 6.1). This results framework was developed by a consultant prior to the program's start and was accepted from the program's earliest days. The unprecedented scope and design of KAT meant there was a limited frame of reference for the program. Thus, program targets included in the results framework had to be based on calculated assumptions and expectations. The primary goal of the consortium was to provide disaster recovery case management services to 100,000 households to help overcome the multiple recovery barriers caused by Hurricane Katrina. Unable to predict the recovery from Katrina and without other comparable disasters, the results framework could not easily incorporate program outcomes.

In KAT's model, case management was broadly defined as a complex process involving a skilled helper working together with an individual or family to identify and overcome barriers to recovery. This broad definition was meant to ensure that clients received the same service, regardless of location or situation. What it failed to allow for, however, were differences between individual clients' situations and needs. KAT did not differentiate between degrees of case management services, meaning the stages of the case management process from outreach to eligibility screening to information and referral to fully engaged recovery planning. The program target was designed to account only for the case management process in its entirety, from outreach to recovery planning to case closure. By separating components or degrees of case management, there may have been an opportunity to create a framework to account for the coordinated services provided by the case managers and case management programs along each client's path to recovery.

Since KAT was implemented under a proposal and contract with FEMA, the program design did not include a control group.
That is, it was difficult to determine how families not receiving any services fared compared with the clients of the KAT program. Yet, if the program had differentiated more among the stages of disaster recovery case management, control groups might have been feasible. For example, it would have been possible to assess how families provided with information and referral (I&R) recovered in comparison with families assisted through recovery planning. In the future, efforts could be made in other recovery initiatives to incorporate a full range of services under one coordinated program, from relief (e.g., debris removal, food delivery, initial assessment) to recovery (e.g., home rebuilding, employment training, recovery planning). The outcomes or results of the program could then be tailored to the individual services that were provided, rather than to a single definition of service delivery.

In addition to the internal elements of a program, disaster recovery occurs in an environment with local and national influences. These influences affect how, at what stage in the process, and where recovery resources will unfold, and they make the trajectory of disaster recovery unpredictable. Moreover, they present challenges in developing realistic and measurable programmatic benchmarks or evaluative indicators. For example, resources dedicated to Hurricane Katrina recovery, such as financial grants for repairs or rebuilds of damaged dwellings, were rolled out at a local level. They were implemented through long-term recovery committees (LTRCs) rather than at a state or national level. This translated into inconsistent guidelines, forms, and available resources in different areas. Assisting the recovery of clients in such an unpredictable environment limits the use of measurable recovery benchmarks, such as obtaining funds for the purpose of attaining recovery. Given these varying influences, the consortium was limited in its use of outcome measurements, as they were not yet established or standardized. This in turn limited evaluative efforts of KAT and similar programs.

Reflecting on KAT as a case example, future programs of similar effort and scope should operate from more informed service projections, as well as build into the evaluation design elements that reflect the unpredictable nature of disaster recovery. One could anticipate that the implementation of a program will be prolonged, given the scale of the disaster and the scope of the program, and target outcomes should account for this.

Implementing and Adapting an Interagency M&E System

KAT's M&E system was designed by an external evaluation consultant and was implemented by the program's internal M&E team. The evaluation consultant anticipated core components, such as shared technology and reporting templates, but operational issues, such as the level of technical assistance needed by the consortium partners, were largely unforeseen. UMCOR had expected partners to have M&E capacity in-house. Instead, most of the KAT agencies were reliant upon the direction and design of the M&E system set forth by UMCOR for this specific project.
The benefit of this structure was that all performance-monitoring information was shared with the field in a consistent manner; however, it also meant that M&E functions were often layered upon other job functions and competed with staff persons' other responsibilities. Thus, the quality of M&E functions had to be emphasized continually in the form of technical assistance by KAT's M&E team.

Interagency Reporting. KAT's M&E system used quarterly reports to collect quantitative and qualitative data on current and planned activities, including any implementation challenges and recommendations for improving the consortium from the partners' perspective. These quarterly reports were submitted by each agency to its national partner, which in turn consolidated the reports (ranging from 7 field offices to more than 20 field offices). One report was submitted to the M&E team at UMCOR from each national partner. The national partner reports were further consolidated and submitted to FEMA on a quarterly basis according to the donor reporting requirements. From start to finish, the quarterly reporting process took less than 30 days as the reports worked their way from agency to partner to UMCOR to FEMA. Without a standardized reporting process in place, this single report to FEMA would not only have been difficult to construct, but its content would have been inconsistent and unable to meet donor or program management needs. A standardized report template would communicate expectations, enable uniform content, and fulfill responsibility to donors. Given the disaster field's dynamic environment and an agency focus on meeting clients' basic needs, a standardized reporting template may be the only way to ensure adequate reporting of the implemented work.

Shared Technology. One unique aspect of the KAT M&E system was the consortium's use of shared Web-based technology. The CAN (Coordinated Assistance Network) database, developed by a consortium of agencies led by the American Red Cross following 9/11, communicates and documents services provided to disaster-impacted individuals in a coordinated and systematic way. CAN was used by KAT for service coordination, data collection, and program documentation. In order to ensure the database was consistent across the KAT consortium, KAT partners were contractually required to use CAN on a weekly basis for recording all client services. The CAN database allowed for the sharing of information among KAT partners and between case managers in different states. This technology enabled case managers to record client information and case management indicators electronically, ranging from basic demographics to the outcome of each client's case. Sharing a database also proved its efficacy in dealing with the transient nature of the Katrina-impacted population, largely subjected to displacement, evacuations, and relocations. If a client moved to a different part of the country, any disaster recovery agency with access to CAN, whether part of KAT or not, could see what was provided to the client, whether the client's needs were met by the previous agency, or what unmet needs remained. For KAT, CAN mitigated the duplication of services and supported the continuity of relief support.
CAN served as the primary source of data to monitor, analyze, and evaluate the indicators set forth in the program's results framework, at both the local agency level individually and the national level collectively. CAN allowed raw data to be exported uniformly to participating partners or agencies, based on criteria selected by the user. It also improved the accuracy and completeness of specific case management indicators that were critical to communicating the impact of KAT's programmatic work.

In addition to exporting data, CAN provided the KAT leadership team with a range of reports, available through a password-protected website, that gave up-to-the-minute status of the KAT caseload (e.g., percentage of open cases, services provided). These reports provided the leadership team with accurate and accessible information and in turn permitted program results to be easily communicated to donors and other stakeholders.

In the early stages of program implementation and operation, the on-line database posed serious challenges for case manager users. KAT case managers were responsible not only for meeting with clients face to face, but also for ensuring that forms were completed and the data from those forms entered into CAN correctly, completely, and weekly. Included in KAT's implementation training was CAN training for all KAT case managers. To support these initial trainings, field visits were made by the KAT M&E team to review individualized caseload reports in order to identify where improvement in the data process could be made. Eventually, a few agencies hired data-entry support staff to meet the data responsibilities, which proved effective for program evaluation purposes.

Shared technology has emerged as a permanent fixture in the disaster field's relief and recovery efforts, given the need for up-to-date communication and coordination among multiple service providers. The CAN reporting technology used by KAT's M&E system stands as a solid example of how a shared on-line database can enable interagency programs to provide services in a more coordinated way.

Technical Assistance. Ongoing technical assistance provided by the KAT M&E team proved essential in establishing the program's standardized case management process and M&E system. This was critical given the compressed time line of the disaster recovery program. The first activity of the M&E team was to host a kick-off workshop to educate the national partners on the basics of monitoring and evaluation and to introduce the M&E tools specific to the KAT program. Technical assistance continued throughout the project as both M&E functions and tools evolved. The technical assistance provided included hands-on training such as field visits, informal one-on-one database training sessions, topical conference calls, and webinars. This mix of in-person and technology-driven assistance was necessary since the consortium operated throughout 34 states. Recognizing a need