This paper addresses an issue of real strategic importance to both the Australian government and Abt Global: how do we judge the overall performance of large, complex aid Facilities? By their nature, Facilities create intellectual and management challenges not present in less complex and ambitious development initiatives. Is it possible to meaningfully aggregate results arising from different programs? Just how much contribution to a high-level development goal is required, and how do you make such an argument convincing? The authors review the experience of developing Monitoring, Evaluation and Learning Frameworks (MELFs) in three Abt-managed Facilities in Papua New Guinea, Timor-Leste and Indonesia. Traditional forms of monitoring and evaluation focus on accountability, ex-post learning and evaluation, linear change, and deliberate (rather than emergent) strategies. The paper finds that these approaches do not lend themselves well to the Facility model. The authors identify and explain areas that require deviation from standard donor MEL theory and practice, and find that each team needed to develop its own unique mix of conventional and experimental approaches to MEL to overcome these challenges.
To read additional issues in this working paper series, visit Abt's Governance Soapbox.