Monitoring and Evaluation of National Vaccination Implementation: History

An effective Monitoring and Evaluation (M&E) framework helps vaccination programme managers track progress and effectiveness against agreed indicators with clear benchmarks and targets. Such frameworks support decision-making by consolidating agreed indicators, benchmarked targets, and methods to collect, analyse, and report the data needed to strengthen vaccination programmes.

  • vaccination
  • monitoring
  • evaluation
  • indicators
  • global health

1. Introduction

Improving national vaccination programme implementation requires collection and analysis of data on relevant vaccination components. Monitoring and evaluation (M&E) frameworks, or the more recent Monitoring, Evaluation, Accountability and Learning (MEAL) frameworks [1], support decision-making by consolidating available information on agreed indicators, benchmarked targets, and methods to collect, analyse, and report the data needed to strengthen vaccination programmes [2]. M&E frameworks are usually organised into pre-, peri-, and post-vaccination phases, and include elements of vaccine procurement, transport, storage, staff training, communication, coverage, adverse effects, and identification of successes and failures [3].
Planning effective M&E for the national rollout of new vaccines, such as those for COVID-19, can be strengthened by learning from previous vaccination experiences, particularly those targeting populations beyond routine childhood cohorts [4]. A virtual expert roundtable, hosted by the Saw Swee Hock School of Public Health in January 2021, identified key M&E framework components to inform COVID-19 vaccination. These included best-practice guidelines, particularly from the World Health Organization (WHO) [5], but few documented lessons or experiences of applying M&E frameworks, or of selecting and appropriately benchmarking indicators, within vaccination programme M&E. Practical details of such experiences could help governments and technical partners plan, implement, and assess M&E for new vaccine implementation. Lessons learnt from assessment experiences worldwide could inform national efforts to improve routine and vaccine-specific data collection and analysis and support vaccination programme strengthening, particularly in resource-constrained settings, in line with the Immunization Agenda 2030 goal of making vaccination available to everyone, everywhere [6].

2. Coverage Indicators

Most sources (32/43; 74%) described elements of target population estimation, equity, and uptake, primarily for routine childhood vaccines. Thirteen (30%) included targeting or population estimation, although most did not describe these indicators in depth, and their use differed by setting and resource availability. For example, Lacapere et al. roughly estimated measles-rubella vaccination coverage by dividing the number of vaccine doses given by the estimated population of each district in Haiti [25], while D’Ancona et al. used an immunisation register to estimate coverage in Italy [26]. Bianco et al. determined the number of foreign workers eligible for vaccination in Italy through screening [27]. Manyazewal et al. used WHO Reaching Every Community (REC) mapping of community locations and characteristics to help estimate coverage targets for five vaccines in Ethiopia [21].
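The dose-based estimate described above (doses administered divided by the estimated target population) can be sketched as follows. This is a minimal illustration of the general method only; the district names and figures are hypothetical, not values from the cited studies.

```python
def administrative_coverage(doses_given, target_population):
    """Return administrative coverage as a percentage of the target population.

    Note: this method can exceed 100% when the denominator underestimates
    the true population, a known limitation of dose-based estimates.
    """
    if target_population <= 0:
        raise ValueError("target population must be positive")
    return 100.0 * doses_given / target_population

# Hypothetical district-level data: (doses given, estimated population).
districts = {
    "District A": (9_200, 10_000),
    "District B": (7_400, 10_000),
}

for name, (doses, population) in districts.items():
    print(f"{name}: {administrative_coverage(doses, population):.1f}% coverage")
```

A register-based estimate, by contrast, links individual vaccination records to a known denominator, avoiding the population-estimation error that dose-counting methods carry.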
Only three (7%) included indicators to examine equity or to disaggregate data. For example, Sarker et al. compared immunisation coverage among children aged 12–59 months in Bangladesh across socioeconomic and demographic factors, finding disparities by parental education and mothers’ access to media [28]. Wattiaux et al. considered equity aspects of vaccination rollout by comparing hepatitis B immunisation incidence between indigenous and non-indigenous Australians [29]. Geoghegan et al. examined whether pregnant women in Ireland received the COVID-19 vaccine or routinely recommended vaccines during pregnancy [30].
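The disaggregation these equity indicators rely on can be sketched as a simple group-wise coverage calculation. The grouping variable and all figures below are illustrative assumptions, not data from the cited studies.

```python
from collections import defaultdict

def coverage_by_group(records):
    """Disaggregate coverage by a stratifying factor.

    records: iterable of (fully_vaccinated: bool, group: str), e.g. grouped
    by maternal education. Returns {group: coverage percentage}.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [vaccinated, total]
    for vaccinated, group in records:
        counts[group][1] += 1
        if vaccinated:
            counts[group][0] += 1
    return {group: 100.0 * v / n for group, (v, n) in counts.items()}

# Hypothetical child records stratified by maternal education.
records = [
    (True, "secondary+"), (True, "secondary+"),
    (True, "secondary+"), (False, "secondary+"),
    (True, "none/primary"), (False, "none/primary"),
    (False, "none/primary"), (False, "none/primary"),
]

for group, pct in sorted(coverage_by_group(records).items()):
    print(f"{group}: {pct:.0f}% coverage")
```

The gap between strata, rather than the overall average, is the equity signal such indicators are designed to surface.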
Sixteen sources (37%) discussed uptake indicators. Most focused on general target populations, with minimal discussion of vaccine coverage for migrants and refugees and none for the elderly, people with disabilities, or other potentially vulnerable groups. However, Bawa et al. estimated oral polio vaccination coverage among underserved hard-to-reach communities in Nigeria, defined by difficult terrain, proximity to local or state borders, scattered households, nomadic populations, water-logged/riverine locations, or conflict, and thus requiring outreach services [31]. More generally, Lacapere et al. calculated the number of municipalities reporting 95% measles-rubella vaccination coverage in Haiti [25]. Only two sources (5%) described coverage indicators beyond the second year of life. Muhamad et al. calculated total HPV vaccine doses delivered through school-based outreach to evaluate the effectiveness of free vaccination for schoolgirls in Malaysia [32], while Beard et al. estimated 40–50% coverage of pertussis vaccination among pregnant women in Australia, based on the number of births and consent forms returned centrally [33].
Coverage lessons were varied. Aceituno et al. described the challenges of collecting high-quality data in resource-constrained settings [22]. Soi et al. noted that feedback loops used to guide policy decisions must be pragmatic, as they are often too slow; e.g., Gavi’s HPV demonstration project policy required countries to demonstrate adequate coverage before applying for rollout funding, which could take years [17]. Alam et al. found that automation of EPI scheduling can improve coverage and enhance monitoring, particularly in remote areas [34]. Edelstein et al. found that data triangulation and the inclusion of routine data, in countries with good national records, can help identify vulnerable groups and monitor vaccine coverage [35]. Lanata et al. found that lot quality assurance sampling helped identify small areas with poorer vaccination coverage in rural parts of Peru with dispersed populations, thus improving coverage and equity monitoring [23]. Aceituno et al. found that staff understanding of the cultural-linguistic context improved coverage and vaccination continuity in Bolivia [22].
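The lot quality assurance sampling (LQAS) approach mentioned above can be sketched as a simple decision rule: a small fixed sample is drawn from each "lot" (e.g., a supervision area), and the lot is flagged if the number of unvaccinated children in the sample exceeds a pre-set threshold. The parameters below (n = 19, d = 3) are common illustrative LQAS values, not those of the cited Peru study.

```python
def lqas_classify(sample, decision_threshold=3):
    """LQAS decision rule for one lot.

    sample: list of booleans, one per sampled child (True = vaccinated).
    The lot passes unless unvaccinated children exceed the threshold.
    """
    unvaccinated = sum(1 for vaccinated in sample if not vaccinated)
    return "passes" if unvaccinated <= decision_threshold else "flag for follow-up"

# Two hypothetical lots of n = 19 sampled children each.
high_coverage_lot = [True] * 17 + [False] * 2   # 2 unvaccinated
low_coverage_lot = [True] * 13 + [False] * 6    # 6 unvaccinated

print(lqas_classify(high_coverage_lot))  # passes
print(lqas_classify(low_coverage_lot))   # flag for follow-up
```

The appeal of LQAS in dispersed rural settings is that a very small sample per lot suffices to classify areas as above or below a coverage target, so supervision effort can be directed where it is needed.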

3. Operational Indicators

Only two (5%) sources mentioned health service capacity indicators. Manyazewal et al. assessed the availability of immunisation services, delivery of regular static immunisation services, adequacy of outreach sites, mapping of catchment areas for immunisation, provision of separate and adequate rooms for immunisation services and supply storage, completion of all planned outreach sessions, provision of health education on immunisation, and availability of immunisation services in all catchment health posts in Ethiopia [21]. Walker et al. assessed surveillance feedback reports, timeliness of reporting, and the number of districts with populations not receiving immunisation services [36].
Three (7%) sources mentioned supply chain and logistics indicators. Walker et al. used cold chain and logistics data from facility inventory logs for routine immunisation to identify gaps in vaccine supplies and equipment, such as the number of facilities with insufficient supply of syringes and diluent, and number of facilities with inventory logs consistent with vaccine supply [36]. Hipgrave et al. reviewed evidence on thermostability of hepatitis B vaccine for pregnant women when stored outside the cold chain in China [16]. Özdemir et al. assessed cold chain storage and gaps for a hepatitis B vaccine in Turkey [37]. Manyazewal et al. assessed adequacy of fridge-tag 2 units for temperature monitoring, refrigerator spare parts, vaccine request and report forms, and inventory documents in Ethiopia [21].
Five (12%) sources mentioned human resource indicators. Hall et al. assessed the number of health-workers in England vaccinated against COVID-19, stratified by dose, manufacturer, and day [13]. Cherif et al. assessed the number of health-workers participating in vaccination activities, e.g., epidemiological surveillance, adverse event monitoring training, and supervision, in Abidjan [38]. Carrico et al. assessed the number of US states mandating health-worker vaccination [39]. Manyazewal et al. assessed the number of experts assigned for immunisation and the number of immunisation focal persons, to evaluate the effectiveness of system-wide continuous quality improvement on national immunisation programme performance [21]. Walker et al. assessed the number of supervisory visits conducted and documented in writing, whether surveillance guidelines were observed, whether surveillance was discussed at supervisory visits, and whether an operational plan was observed [36].
Two (5%) sources included indicators for vaccination costing. Hutubessy et al. calculated the incremental costs to the health system of HPV vaccination for adolescent girls through schools, health facilities, and other outreach strategies in Tanzania [20]. Walker et al. measured the number of districts in Kenya with insufficient financial resources for key surveillance elements for acute flaccid paralysis [36].
Multiple sources discussed operational lessons. D’Ancona et al. used an observational survey to show that decentralised health systems, such as Italy’s, can result in fragmented immunisation registries and information flows across regions [26]. Ward et al. described how data improvement teams, allocated to all districts, enhanced the quality of vaccine administrative data in Uganda by helping identify data inaccuracies and providing on-the-job data collection training [40]. Dang et al. used qualitative research to describe an approach to optimising vaccination information in Vietnam, which included establishing a partnership between the Vietnamese Ministry of Health and mobile network operators [17]. Soi et al. described the importance of physically co-locating evaluators from different disciplinary backgrounds and suggested that including evaluators in decision-making could enrich outcomes [17].

4. Clinical Indicators

Five (12%) sources described clinical indicators, primarily counts of adverse events following immunisation (AEFI). For example, Aceituno et al. assessed monthly reporting of adverse events and severe adverse events, details of any deaths, reasons for all withdrawals, and whether infant and maternal death rates remained below Demographic and Health Survey rates for Bolivia [22]. Loughlin et al. conducted a post-marketing evaluation of the number of confirmed cases of intussusception or Kawasaki disease among infants who received rotavirus vaccine in the US, compared with historical cohort data from diphtheria-tetanus-acellular pertussis vaccination [41].
Lessons were relatively limited. Vivekanandan et al. described the positive role of health-workers in assessing vaccine safety indicators in India [42]. Cherif et al. similarly noted that improving AEFI system performance required better health-worker training, data analysis, and community engagement [38]. Ijsselmuiden et al. suggested that vaccination targeting at-risk populations, such as for hepatitis B, should ensure concomitant disease surveillance to reduce morbidity and mortality [20]. Similarly, Beard et al. suggested combining AEFI and syndromic surveillance in emergency departments to monitor pertussis vaccine adverse events, and supplementing maternal influenza vaccination AEFI monitoring with mobile phone text messages in Australia [33].

This entry is adapted from the peer-reviewed paper 10.3390/vaccines10040567
