Taking an inferential statistics course at the graduate level can be excruciating. I say “can” because I predict an extremely high likelihood of excruciation but cannot rule out error in my hypothetical, nonsense calculations. That being said, whether derived from raw data or bootlegged out of a CNN article, statistical information can bulk up any argument. Taken out of its cold, fairly bland mathematical context, statistical information can also be reported or interpreted as evidence for both sides of an argument. It’s a tool that journalists and argumentative conversationalists can employ for all the right or wrong reasons. As an example, consider the debate over accuracy in predicting student enrollment in public schools. Enjoy the power of math, without opening your calculator app, and watch Census data be used as a pro and then a con when projecting future class sizes.
Census data is a gold mine for statisticians because it provides one data set containing a massive number of data points on each of the over 323 million persons in the United States (for a “population clock” showing the exact count as reported by the U.S. Census Bureau, visit their official site). With all of those numbers (or “raw scores,” as your Stats professor will call them), even a seemingly minute difference between two variables can reach statistical significance and carry big implications. Results from the U.S. Census are used to determine, among other things, the federal funds allocated to public schools by geographical district. This has big consequences for American children, whose quality education relies on accurate and honest interpretation of these numbers at every level: the federal government, the town government, and all the way down to the school board.
Notably, public schools use Census results, like the number of residents per household or the level of education attained by each member, to anticipate enrollment by region. Projected enrollment rates then determine certain eligibilities for school-wide federal funding and grants. Each student’s demographic information, like socioeconomic status (SES) or disability, also determines eligibility for federally funded individual benefits like the National School Lunch Program. Getting the number right, or at least attaining a small margin of error, can be crucial locally, since it also affects town and district governments’ financial planning. Often, enrollment translates to the exact dollar-and-cent per-pupil allowance for books, supplies, and staffing.
In some cases, the Census results are still insufficient for properly projecting enrollment in public schools. Completing the census is required of every American household, but for reasons like fear of deportation, lack of a permanent address, or difficulty with the language, many non-citizens don’t fill it out and provide no data. In numerous other cases, census information may be misreported, such as when an individual misunderstands a question and gives an inappropriate answer, moves, or passes away. In 2002, Baltimore County, Maryland faltered when estimating for one elementary school, finding 930 students at the door of a building with a 707-student capacity (McMenamin, 2002). In a 2014 interview, Lynn, Massachusetts mayor Judith Flanagan Kennedy described her plea to the federal government for compensation after an unexpected influx of Guatemalan students, described as undocumented immigrants (Moran, 2014).
On the other hand, in 2013 the U.S. Census Bureau released a report finding overestimation of enrollment in public schools: respondents indicated current enrollment despite having a diploma, perhaps out of social desirability, or reported the wrong grade level even though they were students (Bauman & Davis, 2013). This suggests that public schools are projecting inflated enrollment rates, which runs the risk that federal funds are poured into the wrong districts while petering out in others. According to the same report, the state of New York overestimated enrollment rates in 2009-2010 compared to actual enrollment for grades 2, 6, 11, and 12, based on a Chi-squared test (Bauman & Davis, 2013) [this is a statistical test for comparing two sets of numbers that sounds like a Starbucks order]. In one example, a report on Michigan public schools comparing enrollment estimates from 2009-10 to 2013-14 found that over 70 percent of all districts saw some enrollment decline (2015, October). The implications of overestimation are revealed after the start of the academic year, when students are counted during an actual school day; a downward discrepancy can result in job loss, special-program cuts, and destabilization of the local budget (2015, October).
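For the curious (and calculator-averse), the Chi-squared idea is simple enough to sketch in a few lines of Python: you compare what you projected with what you actually counted, grade by grade. The enrollment figures below are made up for illustration; they are not the Bauman & Davis numbers.

```python
def chi_squared_stat(observed, expected):
    """Chi-squared goodness-of-fit statistic: the sum of (O - E)^2 / E,
    where O is the observed count and E is the expected (projected) count."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical figures, for illustration only (not the Bauman & Davis data).
projected = [210, 195, 188, 202]   # projected enrollment, grades 1 through 4
actual    = [198, 190, 181, 176]   # actual headcount on count day

stat = chi_squared_stat(actual, projected)
print(round(stat, 2))
```

A bigger statistic means a bigger gap between projection and reality; whether it is big enough to matter is then read off a Chi-squared table (or handed to a stats library).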
So it goes with any hot-button, truly important topic: one side of the argument reports the numbers to its benefit, and the opposition somehow manages to report the same numbers in a way that makes them seem correct. Information is increasingly available to us, both the raw scores and the neat figures and percentages they become after being finely ground by statisticians. To the shrewd observer, it also seems to be reported in many conflicting ways. I, too, am guilty of grabbing fuzzy percentages from my memory of bygone coursework, or perhaps an episode of VICE, and hurling them into an argument. It’s easy to parrot statistics that we’ve heard, read, or been told, but any good statistician checks their numbers with follow-up measures like a margin of error, a confidence interval, or an effect-size calculation (and a really good statistician reports them). See, nothing is 100% correct, not even math.
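And since I just preached about margins of error, here is a minimal sketch of what checking one looks like, assuming a simple proportion and the standard normal approximation at roughly 95% confidence. The survey numbers are hypothetical.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Normal-approximation margin of error for a sample proportion p_hat
    from a sample of size n, at ~95% confidence when z = 1.96."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 540 of 900 sampled households report a school-age child.
p_hat = 540 / 900
moe = margin_of_error(p_hat, 900)
low, high = p_hat - moe, p_hat + moe
print(f"{p_hat:.2f} +/- {moe:.3f}, 95% CI ({low:.3f}, {high:.3f})")
```

Reporting that interval alongside the headline percentage is the difference between parroting a number and actually standing behind it.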