Fostering an evaluation culture in the Philippines

Introduction

The growing concern over broad development issues, among them poverty reduction, health and education outcomes, and infrastructure development, has amplified the demand for good governance, transparency, accountability, and evidence-based decision-making in government. Monitoring and evaluation are thus placed at the center of sound governance arrangements: they are necessary for evidence-based policy making, budget decisions, management, and accountability (Mackay, 2007).


In the Philippines, monitoring and evaluation of public sector concerns is undertaken by various government units and offices and by development partners, though at times independently of one another. Factors that prompted monitoring and evaluation activities in the country some 30 years ago include the fiscal policy of intensified public works in the decades following the war, which heightened the need for a government body to monitor implementation progress; the growing portfolio of official development assistance (ODA)-funded programs and projects, which prodded the creation of organic monitoring offices in government; and the increasing undrawn balance of ODA loan commitments, which required portfolio-wide review (NEDA, 2011). With the implementation of several major infrastructure projects and the efficiency orientation of ODA-funded programs and projects at the time, most efforts were devoted to monitoring. Demand for project results, as pronounced in legislative and executive issuances,[1] increased with the growing number of completed projects in succeeding years. In parallel, development partners extended assistance to build results orientation and capacity in government in order to fulfill international commitments on development effectiveness.


Nonetheless, implementing programs and projects on time and within allocated costs has remained a challenge, so attention has been concentrated on monitoring, and the conduct of evaluation is not given the same degree of attention and importance. In this regard, fostering an evaluation culture in the country is called for. According to Murphy (1999), an organization that has a culture of evaluation has a known, shared policy and a common understanding of the role of evaluation of its programs and services. McCoy et al. (2013) explain that the goal of an evaluation culture is achieved when there is a commitment to the role of evaluation in organizational decision-making, and when the perception that evaluation is purely an accountability requirement has shifted to it being viewed as an integral and valued part of the organization’s activities and purpose. This has been described as organizations becoming a ‘center of enquiry’ and moving away from solely delivering services towards becoming a ‘producer’ and ‘transmitter of knowledge’ (Owen, 2003).


In this paper, monitoring pertains to the regular collection, recording and reporting of information on any and all aspects of the performance of a project or program that a manager, head of organization/agency or controlling agency may wish to know (NEDA, 2000). Evaluation, on the other hand, refers to “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results” (OECD, 2007). There are various kinds of evaluations, which can be classified by timing, type of evaluator, object and function (NEDA, n.d.). Rigorous evaluations usually require statistical methods to establish cause and effect between an intervention and specified outcomes.
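To illustrate the kind of statistical reasoning involved, the sketch below computes a difference-in-differences estimate, one common technique for establishing cause and effect in impact evaluations. It is a minimal illustration using hypothetical data, not an account of any study discussed in this paper.

    # Minimal difference-in-differences (DiD) sketch with hypothetical data.
    # DiD compares the before-after change in an outcome for program
    # participants against the same change for non-participants, so that
    # trends common to both groups are netted out of the estimated effect.

    def mean(values):
        return sum(values) / len(values)

    # Hypothetical outcome data (e.g., household income), for illustration only.
    treated_before = [100, 110, 95, 105]
    treated_after = [130, 140, 125, 135]
    control_before = [100, 105, 98, 102]
    control_after = [110, 115, 108, 112]

    change_treated = mean(treated_after) - mean(treated_before)  # program effect + common trend
    change_control = mean(control_after) - mean(control_before)  # common trend only
    effect = change_treated - change_control  # estimated program effect

    print(f"Estimated program effect: {effect:.1f}")  # prints 20.0

The estimate is credible only under the parallel-trends assumption, that is, that participants and non-participants would have followed the same trend in the absence of the program.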


Principal Institutions and Mechanisms Enabling Evaluation Practice in the Philippines

A combination of efficient policies, enlightened and strong leadership, and competent institutions is indispensable to growth and development, and acceptance of or resistance to reforms, or even to the whole policy development process, may be influenced by the presence and active participation of institutions supporting the process (Llanto, 2007). This section elaborates on the strategic role of government institutions and on the enabling mechanisms that provide critical inputs to mainstreaming evidence-based policy and decision making in government.


A. Monitoring and evaluation of programs and projects at the national level


To strengthen efforts on the monitoring and evaluation of programs and projects at the national level, especially those funded by ODA, an organic monitoring unit was created at the National Economic and Development Authority (NEDA). The reorganization of NEDA in 1987 involved the creation of the Project Monitoring Staff (PMS),[2] which effectively gave the agency the mandate to monitor and evaluate development programs and projects. Under the NEDA-PMS, an Ex-post Evaluation Division was also established, primarily to undertake evaluations of completed programs. While monitoring activities took off, NEDA’s evaluation mandate was reduced to coordinating donors’ evaluation activities instead of actually conducting evaluations, with a lack of funding and trained personnel cited as major causes (Khan, 1992).


With the enactment of the ODA Act of 1996, NEDA was mandated to review the performance of all ongoing projects funded in whole or in part by ODA. Evaluations of ODA-funded programs and projects were conducted, though mostly, if not entirely, by development partners through external evaluators. Starting in 1999, there were growing efforts to support the assessment of higher-level results and outcomes of ODA-funded interventions, given the mandate of implementing agencies to report on project outcomes and impact and to provide logical frameworks[3] for all proposals for major capital projects submitted for government approval. These requirements were intended to improve the evaluability of programs and projects.


B. Regional program and project monitoring and evaluation


The Regional Project Monitoring and Evaluation System (RPMES), established in 1989, is the government’s primary institutional mechanism for program and project monitoring and evaluation at the sub-national level. Its establishment is in line with the administrative decentralization policy to strengthen the autonomy of government units in the regions, and in recognition of the local autonomy of local chief executives and officials in spearheading development in their respective jurisdictions. The enhanced focus on results in recent years has prompted the need to conduct monitoring and evaluation beyond the traditional approach of determining project efficiency and effectiveness, and towards determining the achievement of social and economic development objectives contributing to national development goals. Hence, in 2016 the RPMES underwent a review of its processes and protocols to better respond to the broader results agenda. Currently, the RPMES provides a system for the integration, coordination and linkage of all monitoring and evaluation activities in the region. However, efforts to initiate evaluations have yet to catch up with those devoted to monitoring. Since 2016, 15 evaluation studies on region-specific programs and projects have been commissioned and completed under the NEDA-administered M&E Fund, an annual appropriation in the national budget dedicated to the conduct of impact evaluations and other evaluation studies, among other activities.


C. Results-oriented medium-term development planning


NEDA, as the government’s premier socioeconomic planning body, steers and coordinates the consultative process and preparation of the country’s medium-term plan at the beginning of each administration. The Results Matrix (RM) was instituted in 2011 as an instrument designed to provide results orientation to the plan. It is anchored on the principles of results-based management (RBM), an approach that integrates strategy, people, resources, processes and measurements to improve decision-making with the goal of achieving intended outcomes, implementing and reporting on performance measurement, and learning and adapting (ADB, 2015).


The RM, which follows the logical framework approach, is intended to serve as a guide in the planning, programming, and budgeting of implementing and oversight agencies, as well as to enable the monitoring and evaluation of plan progress. It sits at the apex of many mechanisms that demand greater accountability from the government, and is intended to integrate all other results initiatives. Hence, the RM, as envisioned, is a powerful tool that could help the government identify priority areas for development interventions and monitor progress in these areas. However, the conduct of evaluations, as a means of measuring and obtaining evidence on the results of the strategies and priority interventions laid out in the plan and the RM, is not yet apparent.
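To make the logical framework approach concrete, a minimal results chain for a hypothetical rural road project might read as follows (an illustrative sketch, not drawn from an actual RM):

    Input:      Funds and engineering services for road construction
    Output:     50 km of farm-to-market roads completed
    Outcome:    Reduced transport costs and travel time for farmers
    Impact:     Increased farm incomes in covered municipalities
    Indicator:  Average transport cost per ton-km (baseline vs. target)
    Assumption: Local governments maintain the completed roads

Each level is linked causally to the next, and indicators with baselines and targets make progress at each level measurable, which is what renders a program evaluable.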


D. Setting a National Evaluation Policy


In 2015, the country formalized its evaluation policy through the issuance of the National Evaluation Policy Framework (NEPF). The framework, which is aligned with the government’s policy on RBM, aims to govern the practice of evaluation of programs and projects receiving budgetary support from the government. It intends to provide clear guiding principles and evaluation standards to serve as guideposts in the conduct of public sector evaluation, namely: setting evaluation criteria, ensuring evaluation competencies, creating dedicated monitoring and evaluation units in all government agencies, observing ethical standards, planning for evaluation in line with international best practices, promoting impartiality, and ensuring the dissemination and use of evaluation outputs. The operationalization of the NEPF, however, has stalled due to an impasse among the major actors responsible for moving it forward, as well as other issues and challenges encountered in its implementation, among them the varying levels of evaluation capacity across agencies, very limited budgets for the conduct of evaluations, and difficulties in setting up independent evaluation units across agencies. Nevertheless, NEDA and other institutions have continuously taken steps to standardize the conduct of evaluations, enhance the evaluation capacities of agencies and promote the use of evaluation results, as embodied in the NEPF, towards bringing the government’s evaluation agenda to the fore.


Strategies and initiatives towards engendering evaluation in the Philippines

Initiatives that have been undertaken to continuously promote the conduct of evaluations and to support evidence-based decision-making in government include the following:


A. Policy Window Philippines (PWP)


The PWP is a joint initiative of NEDA, the Department of Foreign Affairs and Trade (DFAT) of the Government of Australia, and the International Initiative for Impact Evaluation (3ie), which commenced in 2014. The program seeks to improve the culture of evidence use in development policy and programming by generating high-quality, rigorous evidence on topical issues confronting decision-makers. It also aims to strengthen the capacity of government agencies to institutionalize and implement evidence-based decision-making in support of the implementation of the NEPF.


NEDA and 3ie worked closely to generate demand for evidence among implementing agencies through extensive engagement and workshops that introduced participants to impact evaluation and the role of evidence in improved decision-making. From 2014 to 2020, five impact evaluation studies of priority government programs on labor and employment, social welfare and judicial reform were completed. A high-level policy forum and capacity development activities on impact evaluation were also undertaken to support the government in increasing the demand for and uptake of evaluations in policymaking.


B. NEDA M&E Fund


The M&E Fund was initially provided in the 2015 national budget to support the implementation of the NEPF. Specifically, it aims to capacitate NEDA to implement and manage evaluations, as well as to enhance the capacities of government agencies in the monitoring and evaluation of priority programs and projects. To date, a total of 17 evaluation studies on government infrastructure, agriculture, environmental protection and housing programs have been completed. Other activities supported by the fund include the development of an information system for the monitoring and evaluation of priority programs and projects and the conduct of training workshops on impact evaluation. Issues and challenges encountered in managing evaluations supported by the M&E Fund include: low capacity of end-users in designing and managing the conduct of studies; procurement delays from failed biddings, usually due to the thin market of evaluation consultants and training service providers; unavailability of baseline data and reference documents; difficulty in securing commitment from stakeholders with regard to ownership of evaluation outputs; poor performance of contractors and low quality of outputs; limited access to and dissemination of evaluation findings; uncertainty in the utilization of evaluation results; and difficulty in securing stakeholder acceptance of evaluation findings and acknowledgement of the independence of the evaluation process.


C. Strategic M&E Project


In line with the objectives of the NEPF, NEDA partnered with the United Nations Development Programme (UNDP) in 2017 to implement the Strategic M&E Project. The project, which is supported by the M&E Fund, aims to undertake capacity development activities for NEDA and other government agencies towards more effective evaluation of the country’s national development plan and investment program.


NEDA’s partnership with UNDP entails the latter’s assistance in: procuring evaluation consultants to support the management of the M&E Fund and to link its implementation with the operationalization of the NEPF; setting up an online portal where evaluation outputs will be easily accessible; preparing online modules on evaluation for easy reference of NEDA personnel; and formulating a National Evaluation Agenda, which would help generate the pipeline of evaluation studies to be undertaken by the government in the future.


Since 2017, six evaluation studies have been completed under the project. Other completed outputs include a competency assessment framework and a toolkit for crafting the National Evaluation Agenda.


Recently, the project also facilitated the creation of the NEDA Central Evaluation Unit, an interim unit intended to carry out evaluation functions, demonstrate the importance and benefits of taking a more active role in the conduct and management of evaluation studies, and bring the focus on and depth of evaluation practice at par with monitoring.


D. M&E Network Forum


The first M&E Network Forum was launched by NEDA in 2011. The forum created a venue for learning and knowledge sharing to sustain a community of practice among evaluation practitioners within and outside government. Since then, the forum has been held annually to complement and showcase capacity development efforts and to contribute to cultivating a community of practice on evaluation in the country. To date, a total of eight forums and 11 webinars have been conducted by NEDA, with the assistance of the United Nations Children’s Fund (UNICEF), the Asian Development Bank (ADB) and UNDP.


Conclusion

Recognizing the importance of evaluation, some countries have statutes institutionalizing variants of a national evaluation policy that applies to all branches and levels of government. Several countries have legislated national evaluation policies but lack the capacity to implement them. Nevertheless, these countries are making strides in developing a culture of evaluation (Rosenstein, 2015).


As discussed above, evaluation in the country has not been widely and systematically integrated into the processes and systems of government. It has been conducted only on a few selected programs and projects, largely on the initiative of international development agencies. The practice of evaluation also remains relatively weak in its influence on policy-making in the country. While there have been several attempts to institutionalize and legislate a national evaluation policy in recent years, these have yet to gain traction among major actors and stakeholders. Even so, given the deliberate efforts of institutions and the establishment of enabling mechanisms geared towards promoting evaluation, the government may still be on the right path towards advancing the evaluation culture in the country.


Author

Aldwin Uy Urbina

Monitoring and Evaluation Staff

National Economic and Development Authority

Philippines


References

3ie (n.d.). Principles for Impact Evaluation. https://www.3ieimpact.org/sites/default/files/principles-for-ie.pdf


ADB (2015). Ramping Up Results-Based Management in the Philippines. Knowledge Showcases. Asian Development Bank. Accessed 28 August 2022. https://www.adb.org/sites/default/files/publication/160682/ks064-ramping-results-based-management-philippines.pdf


Khan, M. A. (1992). Monitoring and Evaluation of Development Projects in South East Asia: The Experience of Indonesia, Malaysia, The Philippines and Thailand. WBI Working Paper. Washington, D.C.: World Bank Group. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/427371468780926170/monitoring-and-evaluation-of-development-projects-in-south-east-asia-the-experience-of-indonesia-malaysia-the-philippines-and-thailand


Llanto, G. (2007). The Policy Development Process and the Agenda for Effective Institutions: The Philippines. Discussion Paper Series No. 2007-08, Philippine Institute for Development Studies. https://www.pids.gov.ph/publication/discussion-papers/the-policy-development-process-and-the-agenda-for-effective-institutions-the-philippines 


Mackay, K. (2007). How to Build M&E Systems to Support Better Government. Washington, D.C.: The World Bank. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/689011468763508573/how-to-build-m-e-systems-to-support-better-government


McCoy, A., Rose, D. & Connolly, M. (2013). Developing Evaluation Cultures in Human Service Organisations. Evaluation Journal of Australasia. http://dx.doi.org/10.1177/1035719X1301300103


Murphy, D. (1999). Developing a culture of evaluation. Paris: TESOL France. https://www.tesol-france.org/uploaded_files/files/dermot-murphy-2003.pdf 


NEDA (2011). M&E in the Philippines: Challenges and Prospects. 1st M&E Network Forum, Crowne Plaza Galleria Manila, Quezon City, Philippines.


NEDA (2000). Reference Manual on Project Development and Evaluation, Volume 1. National Economic and Development Authority.


NEDA (n.d.). NEDA Ex-post Evaluation Manual. Manila, Philippines: National Economic and Development Authority.


NEDA and DBM (2015). NEDA-DBM Joint Memorandum Circular No. 2015-01: National Evaluation Policy Framework. National Economic and Development Authority. https://nep.neda.gov.ph/document/NEDA-DBM%20Joint%20Memorandum%20Circular%20No.%202015-01%20-%20National%20Evaluation%20Policy%20Framework%20of%20the%20Philippines.pdf 


OECD (2007). Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD Publishing. https://www.oecd.org/dac/evaluation/dcdndep/39249691.pdf


Official Development Assistance Act of 1996, Republic Act No. 8182 (1996). https://neda.gov.ph/oda-act-1996/


Owen, J. M. (2003). Evaluation culture: a definition and analysis of its development within organisations. Evaluation Journal of Australasia, Vol. 3, No. 1, pp. 43–47. https://doi.org/10.1177/1035719X030030010


Rosenstein, B. (2015). Status of National Evaluation Policies. Global Mapping Report, 2nd Edition. Parliamentarians Forum on Development Evaluation in South Asia jointly with EvalPartners. https://nec.undp.org/publications/status-national-evaluation-policies-global-mapping-report


Footnotes

[1] Such issuances include Republic Act No. 8182 or the ODA Act of 1996, as amended by RA 8555, and NEDA Board Resolution Nos. 3 and 14 s. 1999 adopting the guidelines incorporating results monitoring and evaluation in the government’s project approval process.

[2] The PMS is currently the Monitoring and Evaluation Staff of NEDA.

[3] A logical framework or logframe is a management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and the assumptions or risks that may influence success and failure (OECD, 2007).