Hi, Ivan of DC!
I forget that other nerds read that book as well:
Okay, PhD in chemistry here with a question for Bob Morse.
In the natural sciences there is a fundamental rule that when performing an experiment (or a survey in your case), you must never report more significant figures in your final data than were in the data you gathered during your experiment (or survey). And yet, that is precisely what you do here. You take survey results, which are only reported to one significant figure (1, 2, 3, 4, or 5) and then you average them and give us a number that has two significant figures. In the natural sciences, we consider that to be a flawed number because it is attributing more precision to your measurement than you could theoretically ever have.
I understand that one would average the data to decide whether to assign a 4 or a 5, but compiling the data to produce a more precise number than is theoretically possible is not scientific. And then you make matters worse by using those fictional numbers to determine 25% of the final rankings!
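To make the arithmetic concrete, here is a small sketch of the situation being described, using made-up ratings (the actual survey data and weighting are not public in this exchange): a handful of integer scores on a 1-to-5 scale, each carrying one significant figure, averaged into a number reported to the tenths place.

```python
# Hypothetical illustration only: invented ratings on a 1-5 scale,
# each an integer with one significant figure.
ratings = [4, 3, 5, 4, 4, 3]

mean = sum(ratings) / len(ratings)   # 23 / 6 = 3.8333...
reported = round(mean, 1)            # published to the tenths place: 3.8

print(mean, reported)
```

The objection in the question is that the reported 3.8 displays two significant figures even though every input carried only one; whether that extra digit is justified depends on whether one treats the average of many ratings as a more precise estimate than any single rating.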
Can you please go into more detail about your methodology and explain why you assume that averaging the data out to a fake tenths place is appropriate? Have you published your methods in any peer-reviewed articles that I could read?
I can't get the picture of Beaker from the Muppets out of my head on that one.