Written by: Ibrahim Hasan (Research Assistant)
Some vaccination cards were crisp and complete, the kind you can read in seconds. Others arrived as blurred photos taken in low light, with smudged ink and handwritten dates that could be read two different ways. I spent a lot of time zooming in until the text almost broke apart, asking the same question again and again: does this record match a real child, a real date, and a dose given in the right window?
When I joined ARK Foundation in October 2025, I entered the final phase of a large immunization survey in the Rohingya camps. The question behind the work was simple, but it carried weight: are immunization services reaching every eligible child, reliably and fairly? In a setting where families have already faced repeated disruption, routine services are not truly “routine.” Records go missing, cards get damaged, names are spelled differently across documents, and one unclear digit can change the meaning of an entire vaccination history.
Across the camps, this work felt like a stress test of the system. We were not only trying to understand whether children had received vaccines, but whether they received them on schedule, in a way that provides full protection. The survey focused on two groups: mothers who had given birth in the past year, and young children in early toddlerhood. For mothers, we looked at protection through tetanus vaccination during pregnancy. For children, we reviewed the routine set of childhood vaccines that prevent serious infections, including illnesses like tuberculosis and measles that can spread quickly in crowded environments.
My role was data verification, the part of the process that decides whether analysis rests on solid ground or on assumptions. Most of my days were spent inside a master dataset covering tens of thousands of households. I checked identities, confirmed birth dates from vaccination cards, and compared recorded doses against timing rules. It was repetitive work, but it was not mechanical. Every day brought small decisions with big consequences.
One lesson became unavoidable: a dose written on a card is not always the same as a dose that counts. If a vaccine is recorded but given too early, or too close to a previous dose, the child may not receive the intended protection. That is why we treated "coverage" as more than a tally. We treated it as a question of timing, sequence, and completeness. Verification was slow because it had to be. When two entries looked like the same child with slightly different spellings, we traced them carefully so that we neither double-counted a child nor erased one. When a birth date did not align with the vaccination timeline, we returned to the card image and checked again, sometimes discovering that a single unclear month had shifted a dose from "on time" to "too early." These were small details on a screen, but they shaped what the data could honestly say.
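The timing-and-sequence check described above can be sketched in code. The sketch below is illustrative only: the minimum ages and intervals are placeholder values, not the survey's actual schedule rules, and the function names are mine.

```python
from datetime import date

# Placeholder timing rules (NOT the survey's actual schedule):
# each dose number maps to a minimum age in days, and successive
# doses must be separated by a minimum interval.
MIN_AGE_DAYS = {1: 42, 2: 70, 3: 98}  # e.g. doses due at 6, 10, 14 weeks
MIN_INTERVAL_DAYS = 28                 # at least 4 weeks between doses

def doses_that_count(birth_date, dose_dates):
    """For each recorded dose (in date order), return True if it was
    given at or after the minimum age AND at least the minimum interval
    after the previous dose -- i.e. a dose that 'counts'."""
    results = []
    prev = None
    for number, given in enumerate(sorted(dose_dates), start=1):
        age_ok = (given - birth_date).days >= MIN_AGE_DAYS.get(number, 0)
        gap_ok = prev is None or (given - prev).days >= MIN_INTERVAL_DAYS
        results.append(age_ok and gap_ok)
        prev = given
    return results
```

For example, a child born 1 January 2024 with doses on 12 February and 1 March would have a valid first dose but a second dose that is both too early and too close to the first, so only the first dose counts.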
When the findings came together, the story was mixed. Many families had records, and early contact with immunization services appeared strong. But the closer we looked, the more we saw patterns that simple totals can hide: doses recorded outside recommended timing windows, children starting the schedule but not completing later doses, and pockets where performance was consistently weaker than in surrounding areas. We also noticed reports of injection-site problems that need follow-up to understand causes and prevent recurrence. None of this is visible if you only ask, "Was a dose given?" It becomes visible when you ask, "Was it given correctly, and is every area being reached?"
A major part of our work was checking whether the data held together across different parts of the camps, not only overall. We worked through duplicates, missing information, and location inconsistencies, and we rechecked records when documentation was unclear. When servers lagged or images were difficult to interpret, the work did not stop. It simply slowed down and became more careful. We held ourselves to a “zero tolerance for error” standard because one wrong assumption can misdirect follow-up away from children who missed doses and communities that need support most.
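The duplicate-tracing step can also be sketched. This is a minimal illustration, not the survey's actual matching procedure: the record layout, the similarity threshold, and the rule of comparing only records with matching birth dates are all assumptions I have made for the example. Crucially, it flags candidate pairs for manual review rather than merging them automatically, mirroring the "trace carefully, don't erase anyone" principle.

```python
import difflib

def likely_duplicates(records, threshold=0.85):
    """Flag pairs of records whose birth dates match and whose names are
    near-identical. `records` is a list of (record_id, name, birth_date)
    tuples (an illustrative layout). Returns candidate pairs with a
    similarity score, for a human to trace -- never an automatic merge."""
    flagged = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            id_a, name_a, dob_a = records[i]
            id_b, name_b, dob_b = records[j]
            if dob_a != dob_b:
                continue  # different birth dates: treat as different children
            score = difflib.SequenceMatcher(
                None, name_a.lower(), name_b.lower()
            ).ratio()
            if score >= threshold:
                flagged.append((id_a, id_b, round(score, 2)))
    return flagged
```

Two entries such as "Mohammed Ayub" and "Mohammad Ayub" with the same birth date would be flagged as one candidate pair, while a record with a different birth date would not be compared at all.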
This project reshaped how I think about impact. Behind every row of data is a family’s attempt to protect a child, and behind every inconsistency is a risk of being missed by the system. Real insight is not only counting vaccines. It is checking whether they were given on time, delivered safely, and reaching families fairly.
Accuracy does not live in the spreadsheet. It shows up later, in who gets followed up, where resources go, and whether gaps are fixed before they harm the children the system is meant to protect.