Statistical models can identify gerrymandering

By TARA ABRISHAMI | October 25, 2018

Researchers use statistical methods to evaluate whether electoral maps are gerrymandered.

We’ve all seen examples of gerrymandering: seemingly absurd electoral maps designed to create districts that favor one political party over another. 

Gerrymandering has been a part of our political reality for at least two hundred years, and it’s notoriously difficult to prevent. In most states, the standard for a fair electoral map involves nothing more technical than eyeballing the districts for “compactness.” 

When district maps suspected of gerrymandering do end up in court, it’s extremely difficult to prove that the maps are unfairly partisan. Earlier this year, the Supreme Court allowed heavily partisan electoral maps to stand, likely because a clear, universal standard for what constitutes unlawful gerrymandering remains elusive. 

Two separate research groups, the Metric Geometry and Gerrymandering Group at Tufts University and the Quantifying Gerrymandering group at Duke University, have recently been using mathematics to provide an answer. 

The key to their approach is to compare proposed district maps to other possible district maps. They use a statistical method called Markov chain Monte Carlo (MCMC) to simulate potential district maps and create a distribution, which they can then use to evaluate the proposed map.

MCMC is a way of conducting a random walk to sample from a distribution. In the case of gerrymandering, researchers are sampling from the space of all possible electoral maps. MCMC is necessary to simulate the distribution because the space is far too large to enumerate. The number of possible electoral maps in North Carolina, for example, is far more than the total number of atoms in the universe. 

MCMC is a powerful tool because the distribution given by the random walk sample will be close to the actual distribution of the underlying space. In this case, using the proposed map as a starting point, the algorithm walks to another possible map at each step, where the new map is determined probabilistically based on the previous map. After thousands of steps, the collection of all the possible maps visited by the random walk forms the sample.
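To get a sense of how such a random walk works, here is a minimal sketch in Python. It is not the researchers' actual code: the grid of precincts, the "flip" move and all of the numbers are invented for illustration, and real chains also enforce constraints like contiguity, population balance and compactness.

```python
import random

# Toy sketch of a "flip"-style random walk over districting plans.
# A plan assigns each precinct in a 4x4 grid to one of 4 districts.
GRID = 4
NUM_DISTRICTS = 4

def neighbors(i):
    """Grid neighbors of precinct i (precincts are numbered row by row)."""
    r, c = divmod(i, GRID)
    out = []
    if r > 0: out.append(i - GRID)
    if r < GRID - 1: out.append(i + GRID)
    if c > 0: out.append(i - 1)
    if c < GRID - 1: out.append(i + 1)
    return out

def step(plan):
    """Pick a random precinct; if a grid neighbor sits in another district,
    flip the precinct into that neighbor's district."""
    new_plan = plan[:]
    i = random.randrange(GRID * GRID)
    j = random.choice(neighbors(i))
    if plan[i] != plan[j]:
        new_plan[i] = plan[j]
    return new_plan

def walk(start_plan, steps=10_000, thin=100):
    """Run the random walk, keeping every `thin`-th plan as the sample."""
    plan, sample = start_plan, []
    for t in range(steps):
        plan = step(plan)
        if t % thin == 0:
            sample.append(plan[:])
    return sample

# Start from a plan that splits the grid into four horizontal strips.
start = [i // GRID for i in range(GRID * GRID)]
sample = walk(start)
print(len(sample), "sampled plans")
```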

Researchers use election data to create a distribution of election outcomes based on the map sample. Specifically, given the voting results of each precinct, researchers can determine how many seats would go to Republicans and how many to Democrats for each potential map in the sample. They can then determine how statistically unlikely the election outcome for the proposed map is. 
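A rough sketch of that tallying step, again with made-up precinct data and hypothetical names, might look like this:

```python
from collections import defaultdict

# Hypothetical precinct-level results: precinct id -> (Democratic votes, Republican votes).
votes = {
    0: (900, 600), 1: (400, 800), 2: (700, 700), 3: (300, 900),
    # in a real analysis this table would cover every precinct in the state
}

def republican_seats(plan, votes):
    """Count the districts a Republican would win under a given plan.

    `plan` maps precinct id -> district id; district totals come from
    summing the precinct votes assigned to each district.
    """
    dem = defaultdict(int)
    rep = defaultdict(int)
    for precinct, district in plan.items():
        d, r = votes[precinct]
        dem[district] += d
        rep[district] += r
    return sum(1 for district in rep if rep[district] > dem[district])

# Applying this to every plan in the MCMC sample yields a distribution of
# possible seat outcomes for the same set of voters.
example_plan = {0: "A", 1: "A", 2: "B", 3: "B"}
print(republican_seats(example_plan, votes))
```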

For example, consider the state of North Carolina, which has 13 congressional districts. Currently, three congressional representatives are Democrats and 10 are Republicans. 

Suppose you conducted an analysis of 20,000 possible random electoral maps and determined that in 95 percent of those maps, nine or fewer seats were held by Republicans. That would be a strong indication that the current map is gerrymandered in favor of the Republicans. This is a simplified version of the actual statistics that go into evaluating a proposed map, of course, but the underlying concepts are the same. 
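In code, that comparison amounts to asking what fraction of the sampled maps is at least as extreme as the enacted one. The following toy sketch uses invented numbers chosen to mirror the hypothetical above:

```python
import random

# Toy version of the comparison described above: how unusual is the enacted
# map's seat count relative to the ensemble? All numbers here are invented.
random.seed(0)

# Pretend ensemble of 20,000 simulated maps, weighted so that roughly
# 97 percent of them give Republicans nine or fewer seats.
ensemble_seats = random.choices([7, 8, 9, 10], weights=[10, 35, 52, 3], k=20_000)
enacted_seats = 10

fraction = sum(s >= enacted_seats for s in ensemble_seats) / len(ensemble_seats)
print(f"Share of sampled maps with at least {enacted_seats} Republican seats: {fraction:.3f}")
# A tiny share (here around 3 percent) marks the enacted map as a statistical outlier.
```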

Indeed, researchers at Duke University applied MCMC and sophisticated statistical analysis to North Carolina's congressional districts and found that the maps used in 2012 and 2016 were heavily gerrymandered.

Using mathematics brings rigor to evaluating partisan gerrymandering and decreases reliance on intuition, which can often be misleading. Consider, for example, the case of Massachusetts, explained in Scientific American by Moon Duchin, a professor at Tufts and the founder of the Metric Geometry and Gerrymandering Group. 

In Massachusetts, about 30 percent of voters vote Republican, but all nine of the state's representatives are Democrats. This could appear suspicious: One might expect that in a fair electoral map, about three of the nine seats would go to Republicans.

Duchin used the 2006 Senate election between Ted Kennedy (D) and Kenneth Chase (R) to test the fairness of the districts. The results showed that, based on precinct voting data, no district in any possible electoral map of Massachusetts could have gone to Chase in the election. Most precincts gave around 30 percent of their votes to Chase, and because a district's vote share is an average of its precincts' shares, no grouping of precincts could produce a Republican majority.
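A toy simulation makes the point concrete. Assuming (hypothetically) 2,000 equally sized precincts whose Republican vote shares cluster around 30 percent, no random grouping of them into nine districts gets anywhere near a Republican majority. This is only an illustration with invented numbers; Duchin's analysis used the actual precinct returns and geographically realistic districts.

```python
import random
import statistics

random.seed(1)

# Hypothetical precinct Republican vote shares clustered around 0.30.
precinct_shares = [min(max(random.gauss(0.30, 0.05), 0.0), 1.0) for _ in range(2_000)]

max_seen = 0.0
for _ in range(1_000):                       # try many random groupings into 9 "districts"
    random.shuffle(precinct_shares)
    districts = [precinct_shares[i::9] for i in range(9)]
    # Treating precincts as equally sized, a district's share is the mean of its precincts'.
    max_seen = max(max_seen, max(statistics.mean(d) for d in districts))

print(f"Largest Republican district share over all random groupings: {max_seen:.3f}")
# The result stays far below 0.50, so no grouping produces a Republican seat.
```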

The techniques championed by researchers at Tufts and Duke could lead to a new era of using mathematics to evaluate partisan gerrymandering. Their tools would provide an objective, scientific way of determining which electoral maps are fair and which aren’t.

Patrick Kennedy, a Hopkins senior majoring in Math and Applied Math and Statistics, explained why he thinks math is important in politics. 

“We study statistics because it’s the purest way to analyze data,” Kennedy wrote in an email to The News-Letter. “I think politicians have a duty to listen to mathematicians on this topic.”

There’s already evidence that courts may be amenable to mathematical arguments in gerrymandering cases. In the meantime, mathematicians continue to develop better tools to analyze gerrymandering, in the hopes of providing a solution to a tricky political problem and strengthening our democracy.
