
Deceptive Per Capita Rankings for Brain Cancer

We're working on a project to produce a report on the ratings that students give to their professors at the end of the semester. The math people on the committee have a big concern that we will give some professors an unfairly bad (or good) rap because of the variability in these ratings. They don't want to report the ratings as a mean (average). Instead, they want to plot an uncertainty range.

I was reading a book this week (How Not to Be Wrong by Jordan Ellenberg) that provided a great example of the risk of ranking things when there is uncertainty. It can lead to erroneous conclusions. Here is a summary of his argument that appeared in an NPR interview. Perhaps this sort of example will be helpful for the committee to share when teaching the general faculty about the new instrument.

If you take a rare disease like brain cancer and you look at its rate of incidence in different states, there are very big differences. And so you might say, "Well, I should go where this form of cancer is the rarest. Clearly something's going on in that state that is preventative against that disease." But when you look at the numbers, they're rather strange because at the very top of the list you see South Dakota with an extremely elevated rate of brain cancer, but if you look at the bottom, you see North Dakota with almost none. So that's very strange because South Dakota and North Dakota are not actually all that different.

But when you look at those numbers a little more closely, what you notice is that the states at the top of the list [South Dakota, Nebraska, Alaska, Delaware, Maine] and the states at the bottom of the list [Wyoming, Vermont, North Dakota and Hawaii, and the District of Columbia] have something in common, which is that they are very small. ... So basically hardly anybody lives in those states; that's what they have in common. And a sort of fundamental principle is that when you compute rates, the smaller the state, or ... the smaller the sample size, the more variation is going to be created just by random chance.
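To make this concrete for the committee, here is a minimal simulation sketch, in Python, using made-up state populations and a made-up incidence rate. It shows that even when every "state" has exactly the same underlying rate, the smallest ones still land at both the top and the bottom of a per-capita ranking purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical populations: a few small "states" and a few large ones.
populations = {
    "Small A": 50_000, "Small B": 60_000, "Small C": 75_000,
    "Large A": 5_000_000, "Large B": 8_000_000, "Large C": 12_000_000,
}

true_rate = 6.5e-5  # the SAME underlying incidence rate for every state

# Draw observed case counts and convert to rates per 100,000 people.
rates = {
    name: rng.binomial(pop, true_rate) / pop * 100_000
    for name, pop in populations.items()
}

# Rank the states by their observed per-capita rate.
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:8s} {rate:6.2f} per 100k")
```

Run it a few times with different seeds and the small states bounce between the extremes while the large ones stay close to the true rate. That is the whole effect Ellenberg is describing.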

This seems analogous to the problem of small class sizes in our ratings, which causes us to draw a longer uncertainty bar on the report. One disgruntled student in a small class can cause a disproportionate movement in the class average.
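The same arithmetic applies to class ratings. As a rough illustration (the class sizes and scores below are invented), here is how much a single 1-out-of-5 rating moves the mean when everyone else gives a 4.5:

```python
import statistics

def shifted_mean(n, base=4.5, outlier=1.0):
    """Mean of a class where n-1 students give `base` and one gives `outlier`."""
    scores = [base] * (n - 1) + [outlier]
    return statistics.mean(scores)

for n in (8, 30, 120):
    print(f"class of {n:3d}: mean drops from 4.50 to {shifted_mean(n):.2f}")
```

One student pulls the class of 8 down by almost half a point while the class of 120 barely moves, which is exactly why the uncertainty bar should be longer for small classes.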
