How We Rank: Inside ZACSUM's Data-Driven Approach
ZACSUM exists because we believe that choosing where to live shouldn't be based on vibes alone. A town might look great in Instagram photos, but what do the numbers say about safety? How does its cost of living compare to its peers? Is the downtown actually walkable, or does "charming" just mean "photogenic"? Our scoring engine is designed to answer these questions with data, and in this post, we're pulling back the curtain on exactly how it works.
The engine starts with raw data collection. For every town in our database, we gather metrics across several categories: affordability, safety, walkability, education, dining and culture, outdoor recreation access, healthcare access, and climate. Each category draws from specific, verifiable sources. Affordability uses Census Bureau median home values and Bureau of Economic Analysis cost-of-living indices. Safety uses data from the FBI's Uniform Crime Reporting program. Walkability comes from Walk Score. School ratings draw from state education department assessments. Dining density is calculated from business registry data — restaurants per capita within town limits.
Raw data is messy. A town's median home value of $350,000 doesn't mean anything in isolation — it's expensive for rural Mississippi and cheap for coastal Connecticut. This is where normalization comes in. We normalize every metric to a 0-to-100 scale using min-max normalization across the full dataset of towns in a given list. The town with the best value in a category gets a 100; the worst gets a 0; everyone else falls proportionally between. For metrics where lower is better — crime rate, median home price — the scale is inverted, so the lowest raw value maps to 100. This allows us to compare fundamentally different metrics on the same scale.
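The idea can be sketched in a few lines of Python. This is an illustrative implementation, not our production code; the function name and the `higher_is_better` flag are assumptions made for the example.

```python
def min_max_normalize(values, higher_is_better=True):
    """Scale raw metric values to a 0-100 range via min-max normalization.

    Illustrative sketch only. For lower-is-better metrics (crime rate,
    home price), the scale is inverted so the best raw value scores 100.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # All towns tied on this metric: assign everyone the midpoint.
        return [50.0] * len(values)
    scores = [(v - lo) / (hi - lo) * 100 for v in values]
    if not higher_is_better:
        scores = [100 - s for s in scores]
    return scores

# Median home values (USD) for a hypothetical four-town list:
home_values = [350_000, 220_000, 500_000, 410_000]
normalized = min_max_normalize(home_values, higher_is_better=False)
# The cheapest town (index 1) scores 100; the most expensive (index 2) scores 0.
```

Note that normalization happens per list, so the same raw value can produce different scores depending on which towns it's being compared against.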
Normalization solves the comparability problem, but it doesn't solve the importance problem. Not every metric matters equally, and different lists require different emphasis. This is where weighting comes in. Each list type — beach towns, mountain towns, commuter towns, retiree towns — has a custom weighting profile that reflects what actually matters for that lifestyle.
For our general Best Small Towns list, the weights are distributed relatively evenly. Affordability, safety, and walkability each carry significant weight, with smaller contributions from education, dining, healthcare, and outdoor access. For our Best Beach Towns list, we add a coastal access category and increase the weight on climate and outdoor recreation. For our Best Towns for Retirees list, healthcare access gets a substantial boost and education weight drops to zero.
The composite score for each town is a weighted sum of its normalized category scores. If a town scores 85 on affordability, 72 on safety, and 90 on walkability, and those categories are weighted at 25%, 20%, and 25% respectively, those three categories contribute 21.25 + 14.4 + 22.5 = 58.15 points to the composite. The remaining categories fill in the rest.
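The computation above is just a weighted dot product. Here is a minimal sketch using the three weights from the example; the remaining 30% is split across other categories as an assumption for illustration, and none of these numbers are a real town's scores.

```python
# Illustrative weighting profile. The 25/20/25 split mirrors the example
# in the text; the remaining categories and their weights are assumed.
weights = {
    "affordability": 0.25,
    "safety": 0.20,
    "walkability": 0.25,
    "education": 0.10,
    "dining": 0.10,
    "healthcare": 0.10,
}

# Normalized (0-100) category scores for a hypothetical town.
town = {
    "affordability": 85,
    "safety": 72,
    "walkability": 90,
    "education": 60,
    "dining": 55,
    "healthcare": 70,
}

# Weighted sum: each category contributes weight * normalized score.
composite = sum(weights[c] * town[c] for c in weights)
# The first three categories contribute 21.25 + 14.4 + 22.5 = 58.15 points,
# exactly as in the worked example; the rest fill in the remainder.
```

Swapping in a different list's weighting profile — zeroing out education for retirees, boosting climate for beach towns — changes nothing else about the pipeline, which is what makes per-list profiles cheap to maintain.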
We deliberately avoid using any single proprietary "livability index" as an input. Every data point traces back to a public or licensed source that can be independently verified. This matters because transparency is the foundation of trust. If you disagree with a town's ranking, you can look at the individual metric scores and see exactly where it gained or lost points.
There are a few methodological choices worth highlighting. First, we normalize within each list rather than across all towns globally. This means a town's score on the Best Beach Towns list might differ from its score on a general list, because the comparison set is different. We think this produces more meaningful rankings — comparing beach towns to other beach towns, rather than to mountain towns.
Second, we handle missing data conservatively. If a town lacks data for a particular metric, we assign the median value for that metric across the list rather than excluding the town or assigning a zero. This prevents data gaps from unfairly penalizing small towns that may not appear in every federal dataset.
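Median imputation of this kind is a one-liner with the standard library. A minimal sketch, with a hypothetical function name and made-up Walk Score values:

```python
from statistics import median

def fill_missing(metric_values):
    """Replace missing entries (None) with the list-wide median of the metric.

    Illustrative sketch: imputing the median keeps a town with a data gap
    neutral on that metric instead of penalizing it with a zero.
    """
    known = [v for v in metric_values if v is not None]
    med = median(known)
    return [med if v is None else v for v in metric_values]

# Hypothetical Walk Score values; two towns are missing from the dataset.
walk_scores = [78, None, 64, 91, None]
filled = fill_missing(walk_scores)  # gaps become median(78, 64, 91) = 78
```

Imputation happens before normalization, so an imputed town lands near the middle of the 0-to-100 range for that metric rather than at either extreme.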
Third, we update our data annually. Rankings are not static. A town that builds a new hospital, sees a spike in crime, or experiences rapid housing price appreciation will see its scores change in the next cycle. We timestamp every ranking page so users know how current the data is.
The result is a system that balances rigor with accessibility. You don't need to understand normalization math to use our rankings — just look at the composite score and the category breakdowns. But if you want to dig deeper, the methodology is fully transparent.
We're not claiming our rankings are perfect. Every weighting choice involves judgment, and reasonable people can disagree about how much walkability should matter relative to affordability. What we are claiming is that our process is consistent, documented, and grounded in real data. That's more than most "best places to live" lists can say.
Explore any of our rankings to see the scoring engine in action. Every town profile includes a full breakdown of category scores, and every list page explains its specific weighting. The data is there — we just make it usable.