By Zunayed Al Azdi
When we began planning the Routine EPI Coverage Evaluation Survey in the Rohingya camps of Cox’s Bazar, the numbers were daunting. Thirty-three camps. 3,559 clusters. More than 25,000 households. Over 100 field personnel operating in one of the most densely populated refugee settlements in the world, home to more than 1.2 million forcibly displaced Myanmar nationals.
Yet behind every number was a child whose vaccination status mattered, a mother whose access to services mattered, and a health system that needed reliable evidence to make better decisions. From the very first training session, I emphasized that our work was not merely about completing clusters, but about safeguarding the future of children born and growing up inside the camps.
Leading this survey was not simply about executing a methodology. It was about managing precision at scale – combining statistical rigour with field leadership in a complex humanitarian setting.
Why This Survey Mattered
In recent years, the Rohingya camps have experienced outbreaks of vaccine-preventable diseases. Although routine immunization services were rapidly expanded following the 2017 influx, administrative coverage data alone could not answer critical questions about validity, equity, drop-outs, or zero-dose children.
This survey was commissioned by the World Health Organization (WHO) and designed to evaluate vaccination coverage among children aged 12-23 months and 24-35 months, as well as TT/Td coverage among women with infants. Many of us would agree that maternal TT/Td vaccination is not merely a service indicator but a life-saving intervention that protects newborns from neonatal tetanus and reflects the strength of antenatal outreach systems. In the camps, beyond crude coverage, we aimed to assess valid doses, card retention, adverse events, reasons for non-vaccination, drop-out patterns, and equity gaps for both children and mothers.
I learned that in fragile and densely populated environments, evidence is not a luxury. It is a necessity for preventing outbreaks and strengthening routine systems.
Designing Precision: The Methodological Backbone
The survey followed a two-stage cluster sampling methodology as recommended by the WHO. Each of the 33 camps was treated as a reporting stratum, so we needed to determine vaccination coverage for each camp separately. We calculated the effective sample sizes considering anticipated coverage, desired precision, and a design effect appropriate for cluster surveys.
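For readers curious what that calculation looks like in practice, here is a minimal sketch of the standard sample-size formula for a proportion, inflated by the design effect. The coverage, precision, and design-effect values below are illustrative assumptions, not the survey's actual parameters.

```python
import math

def effective_sample_size(p_expected, precision, deff, z=1.96):
    """Households needed per reporting stratum for a coverage estimate.

    p_expected : anticipated coverage proportion (e.g. 0.80)
    precision  : desired half-width of the 95% confidence interval
    deff       : design effect for two-stage cluster sampling
    """
    n_srs = (z ** 2) * p_expected * (1 - p_expected) / precision ** 2
    return math.ceil(n_srs * deff)

# Illustrative values only: 80% anticipated coverage, +/-10% precision,
# design effect of 2.0.
print(effective_sample_size(0.80, 0.10, 2.0))  # -> 123
```

In practice the result is further inflated for anticipated non-response before being translated into a number of households per cluster.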
All clusters were selected using probability-based approaches. Across the 33 camps, a total of 182 blocks, each comprising multiple sub-blocks demarcated by Community Health Workers (CHWs), were included in the sampling frame. This ensured that every geographic segment was represented and that no area within the camps was left behind. Within selected clusters, households were sampled systematically inside defined enumeration areas.
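Systematic selection within an enumeration area can be sketched as follows. The frame size, sample take, and household IDs here are invented for illustration; the survey's actual selection drew on the UNHCR household lists.

```python
import random

def systematic_sample(household_ids, n_sample, seed=None):
    """Select n_sample households at a fixed interval from a random start."""
    rng = random.Random(seed)
    interval = len(household_ids) / n_sample
    start = rng.uniform(0, interval)
    return [household_ids[int(start + i * interval)] for i in range(n_sample)]

# A hypothetical enumeration area with 240 listed households.
frame = [f"HH-{i:04d}" for i in range(1, 241)]
selected = systematic_sample(frame, 12, seed=1)
print(selected)
```

The random start makes every listed household selectable with equal probability, which is what keeps the within-cluster stage probability-based.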
We implemented digital data collection using KoboToolbox, a Computer-Assisted Personal Interviewing (CAPI) platform, enabling built-in logical checks, automated skip patterns, photo documentation of vaccination cards, and near real-time monitoring. Embedded back-check mechanisms allowed supervisors to re-verify selected interviews while data collection was still ongoing.
Scale can easily compromise rigor if not carefully managed. Throughout the process, I ensured we actively avoided selection bias by providing each data collector with lists of randomly selected households obtained from UNHCR so that household selection remained strictly probability-based despite operational pressures.
Field Leadership in a Humanitarian Setting
Managing 3,559 clusters across challenging terrain, especially during the monsoon season, required more than technical design. It required disciplined coordination, practical planning, and adaptive decision-making. At one point, for example, I helped supervisors use weather forecasting applications to plan field movements: starting earlier on days with predicted afternoon rain and delaying deployment when heavy downpours were expected. Whether the cause was weather, a local blockade, political agitation, or unavailable household members, we regularly and mindfully reorganized cluster visits to minimise disruption and ensure both data quality and team safety.
We deployed three data collectors in each of the 33 camps to ensure consistent presence across all locations. Each supervisor was assigned to oversee four to six camps, allowing close supervision while maintaining operational efficiency. I developed daily monitoring dashboards to track completion status in each camp, response rates, and key data quality indicators. Our team actively monitored for anomalies, such as unusual coverage patterns, GPS inconsistencies, or high rates of missing data, any of which triggered immediate review and corrective action.
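As a rough illustration of the kind of rule such a dashboard can apply, here is a minimal sketch; the camp names, thresholds, and indicator fields are hypothetical, not the survey's actual cut-offs.

```python
def flag_anomalies(daily_stats, max_missing_rate=0.05, min_completed=10):
    """Flag camps whose daily indicators deviate from expectations.

    daily_stats maps camp name -> {"completed": int, "missing_rate": float}.
    """
    flags = []
    for camp, stats in sorted(daily_stats.items()):
        if stats["missing_rate"] > max_missing_rate:
            flags.append((camp, "high missing data"))
        if stats["completed"] < min_completed:
            flags.append((camp, "slow completion"))
    return flags

# Hypothetical daily figures for three camps.
today = {
    "Camp-01": {"completed": 32, "missing_rate": 0.01},
    "Camp-09": {"completed": 6,  "missing_rate": 0.02},
    "Camp-14": {"completed": 28, "missing_rate": 0.09},
}
print(flag_anomalies(today))
```

Each flag fed into the same-day review-and-correction loop rather than waiting for end-of-survey cleaning.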
A daily challenge was finding the randomly selected households, as the UNHCR listing did not include GPS coordinates. Our teams had to navigate densely populated blocks using maps, local guidance, and careful verification. Weather disruption covered a large part of the data collection period, particularly during the monsoon, requiring frequent rescheduling. Security concerns arose in a few camps, especially in Teknaf, delaying fieldwork. With the support of supervisors, I managed these realities by actively connecting teams with the local Camp-in-Charge (CiC) offices, majhis, and Community Health Workers (CHWs), ensuring safe access, community trust, and adherence to the sampling plan.
Training and Communication: Building a Shared Purpose
Large-scale surveys are only as strong as the people who implement them. Before entering the field, we trained a total of 120 data collectors and 8 supervisors to ensure both technical competence and shared understanding of purpose.
The training sessions were structured in four phases: one Training of Trainers (ToT) session for supervisors followed by three full training sessions for field teams. We covered details of EPI campaigns, vaccine types and schedules, valid dose definitions, common adverse effects, and detailed interview techniques. Particular emphasis was given to accurate vaccination date recording, card verification, probing without leading respondents, and maintaining neutrality.
Beyond technical content, I felt the necessity to consistently communicate the broader purpose of the survey, that our accuracy would influence decisions affecting both children and mothers inside the camps. I emphasized that a correctly recorded TT/Td dose for a mother could represent protection for her newborn and that misclassification could distort understanding of maternal and neonatal protection levels. This helped build accountability and motivation across teams.
After deployment, communication did not stop. Supervisors conducted refresher sessions at the camp level through yard meetings, reinforcing interview standards, clarifying emerging issues, and sharing feedback from the central monitoring team. Continuous communication, through in-person refreshers, phone calls, and WhatsApp coordination, ensured alignment between field realities and methodological expectations.
Data Quality and Data Management as a Daily Discipline
In a survey involving more than 25,000 households, data quality and data management are inseparable. Accuracy must be engineered into daily operations, and the architecture for managing large volumes of interviews must be as strong as the sampling design itself.
We actively discussed common survey biases, including selection bias, recall bias, and information bias, and reinforced these concepts throughout training and supervision. Enumerators were trained not only to ask questions but also to probe carefully, verify home-based records accurately, document vaccination dates precisely, and understand how their work fed into a much larger data system.
Beyond quality control, I established a structured central data management system capable of handling thousands of incoming records each week. We adopted multi-layered quality control and data governance practices: field-level supervision, embedded back-checks, and a central monitoring and validation unit that I single-handedly designed, trained, and supervised. The central monitoring team consisted of 10–12 dedicated data management assistants who worked continuously throughout the three-month data collection period. They reviewed every completed household interview against the randomly selected household lists to ensure correct identification and strict adherence to sampling. They also systematically cross-checked each recorded vaccination date against the photographs of vaccination cards uploaded through KoboToolbox, verifying chronology, dose intervals, and logical consistency.
This required disciplined database management, structured tracking sheets, documented audit trails, and daily feedback loops. When any inconsistencies were found, like missing data, implausible dates, duplicate entries, or logical errors, they were logged and fed back to field teams for verification or correction within defined timelines.
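A chronology-and-interval check of the kind the central team applied can be sketched like this. The record structure, antigen names, and the 28-day minimum interval are illustrative assumptions, not the survey's actual validation rules.

```python
from datetime import date

MIN_INTERVAL_DAYS = 28  # assumed minimum gap between successive doses

def check_dose_dates(dob, doses):
    """Flag chronology and interval problems in a child's dose history.

    dob   : date of birth
    doses : list of (label, dose_date) tuples in schedule order
    """
    issues = []
    prev = None
    for label, d in doses:
        if d < dob:
            issues.append(f"{label}: dated before birth")
        if prev is not None and (d - prev).days < MIN_INTERVAL_DAYS:
            issues.append(f"{label}: interval under {MIN_INTERVAL_DAYS} days")
        prev = d
    return issues

# Hypothetical card: penta2 recorded only 14 days after penta1.
history = [("penta1", date(2023, 3, 1)),
           ("penta2", date(2023, 3, 15)),
           ("penta3", date(2023, 4, 30))]
print(check_dose_dates(date(2023, 1, 10), history))
```

Any issue returned by a check like this was logged and routed back to the field team for verification against the card photograph.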
What this experience reaffirmed for me is that precision is not achieved by chance; it is built through systems, expertise, and sustained oversight.
What I Learned
Managing this survey reshaped my understanding of public health leadership and methodology. Five lessons stand out:
- Scale amplifies both strengths and weaknesses. Systems must be robust before expansion. For example, when daily data uploads began exceeding a thousand interviews per day, even minor weaknesses in tracking or feedback could have multiplied rapidly. Because we had already established structured dashboards and a central validation team, we were able to detect small GPS mismatches and date-entry errors early before they scaled into systemic issues.
- Statistical design must anticipate field realities – segmentation, household listing challenges, and response variability. The absence of GPS coordinates in the UNHCR household listings meant enumerators had to rely on block maps, CHW guidance, and careful household verification to locate randomly selected households. Without anticipating this operational challenge during planning and training, probability-based sampling could easily have been compromised.
- Data quality is cultural. When teams internalize why accuracy matters, performance improves. During training, I reminded enumerators that a single incorrect vaccination year could misclassify a child as fully immunized or under-immunized, potentially affecting coverage estimates for an entire camp. Once teams understood that their precision directly influenced programme decisions for children in the camps, their attention to detail strengthened.
- Real-time monitoring transforms surveys from retrospective exercises into responsive systems. In a time-constrained project of this scale, daily dashboards alone were not enough. Continuous phone calls, WhatsApp groups with supervisors, and immediate voice feedback allowed us to resolve issues the same day they emerged – whether a skipped household, drop-out of data collectors, a date inconsistency, or a camp-wise slowdown due to rain. This rapid feedback cycle prevented small operational problems from escalating into larger sampling or data quality risks.
- In humanitarian settings, humility and adaptability are as important as technical expertise. At one point, a rumour spread in a camp that our teams were visiting households to recruit people to send abroad. A supervisor became understandably anxious. Instead of reacting defensively, I calmly coordinated with the local CiC office, engaged majhis, and shared our official documents and authorization letters. By re-establishing transparency and community trust, we were able to resume data collection safely. That incident reinforced for me that technical excellence alone is insufficient – community confidence and respectful engagement are equally critical.
From Numbers to Protection
Ultimately, this survey was not about clusters, sample sizes, or dashboards. It was about understanding whether children and mothers were adequately protected. It was about ensuring that both routine childhood immunization and maternal protection systems were functioning effectively to prevent avoidable illness and death.
Reliable data informs programme decisions, resource allocation, and targeted outreach to zero-dose and under-immunized children, as well as mothers who may have missed critical TT/Td doses during pregnancy. In the world’s largest refugee settlement, precision is not an academic ideal – it is a public health responsibility. The children born inside these camps deserve systems that are informed by evidence, guided by equity, and strengthened by accountability.
Managing 3,559 clusters in Cox’s Bazar was more than an operational achievement. It demonstrated that rigorous methodology, strong training systems, disciplined data management, and respectful community engagement can work together to produce evidence that protects lives.
If you would like to learn more about this survey, our methodology, or opportunities for collaboration in large-scale public health evaluations, please feel free to reach out to us at info@arkfoundationbd.org.