Deceptive Per Capita Rankings for Brain Cancer

We're working on a project to produce a report on the ratings that students give their professors at the end of the semester. The math people on the committee are concerned that we will give some professors an unfairly bad (or good) rap because of the variability in these ratings. They don't want to report the ratings as a mean (average). Instead, they want to plot an uncertainty range.

I was reading a book this week (How Not to Be Wrong by Jordan Ellenberg) that provided a great example of how ranking things under uncertainty can lead to erroneous conclusions. Here is a summary of his argument from an NPR interview. Perhaps the committee can share this sort of example when teaching the general faculty about the new instrument.

If you take a rare disease like brain cancer and you look at its rate of incidence in different states, there are very big differences. And so you might say, "Well, I should go where this form of cancer is the rarest. Clearly something's going on in that state that is preventative against that disease." But when you look at the numbers, they're rather strange because at the very top of the list you see South Dakota with an extremely elevated rate of brain cancer, but if you look at the bottom, you see North Dakota with almost none. So that's very strange because South Dakota and North Dakota are not actually all that different.

But when you look at those numbers a little more closely, what you notice is that the states at the top of the list [South Dakota, Nebraska, Alaska, Delaware, Maine] and the states at the bottom of the list [Wyoming, Vermont, North Dakota and Hawaii, and the District of Columbia] have something in common, which is that they are very small. ... So basically hardly anybody lives in those states; that's what they have in common. And a sort of fundamental principle is that when you compute rates, the smaller the state, or ... the smaller the sample size, the more variation is going to be created just by random chance.
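
To make Ellenberg's principle concrete for the committee, here is a minimal simulation sketch (mine, not his) with made-up populations and a made-up incidence rate: every hypothetical state has the exact same underlying rate, and we count how often a small state still lands at the top or bottom of the per-capita ranking purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers for illustration; the TRUE rate is identical everywhere.
TRUE_RATE = 5e-5
populations = {"Big A": 10_000_000, "Big B": 8_000_000, "Mid": 1_000_000,
               "Small A": 60_000, "Small B": 60_000, "Small C": 60_000}
names = list(populations)
pops = np.array([populations[n] for n in names])

TRIALS = 10_000
top_small = bottom_small = 0
for _ in range(TRIALS):
    # Observed per-capita rates under nothing but binomial chance.
    rates = rng.binomial(pops, TRUE_RATE) / pops
    order = np.argsort(rates)
    top_small += names[order[-1]].startswith("Small")
    bottom_small += names[order[0]].startswith("Small")

print(f"a small state ranks first in {top_small / TRIALS:.0%} of trials")
print(f"a small state ranks last  in {bottom_small / TRIALS:.0%} of trials")
```

Even though nothing varies here but luck, the small states dominate both extremes of the ranking in the vast majority of trials, which is exactly the South Dakota / North Dakota pattern.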

This seems analogous to the problem of small class sizes in our ratings, which causes us to draw a longer uncertainty bar on the report. One disgruntled student in a small class can move the class average disproportionately.
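
As a rough sketch of what that uncertainty bar could look like, here is a toy calculation (assuming a simple normal-approximation interval, not necessarily the one the committee will adopt) comparing the same rating pattern at two class sizes, each with exactly one disgruntled student:

```python
import statistics as stats

def rating_interval(ratings, z=1.96):
    """Mean rating plus a crude normal-approximation 95% half-width;
    a stand-in for whatever interval the committee ultimately picks."""
    n = len(ratings)
    mean = stats.mean(ratings)
    half = z * stats.stdev(ratings) / n ** 0.5
    return mean, half

# Hypothetical 1-5 ratings: the same sentiment at two class sizes,
# each with exactly one disgruntled student (the lone 1).
small_class = [5, 5, 4, 5, 1]            # n = 5
large_class = [5, 5, 4, 5] * 10 + [1]    # n = 41

for label, ratings in [("small class", small_class), ("large class", large_class)]:
    mean, half = rating_interval(ratings)
    print(f"{label}: {mean:.2f} ± {half:.2f}  (n = {len(ratings)})")
```

The lone 1 drags the small class from a 4.75 average down to 4.00 and stretches its bar across most of the scale, while the large class barely moves; the long bar is the honest way to report that a small-class average is unstable.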
