Left: Cartoon by Elkanah Tisdale from the March 26, 1812 edition of the Federalist-leaning paper The Boston Gazette showing the Massachusetts district newly created to favor the Jeffersonian Republicans in upcoming elections. Right: the wingless and clawless version of the map. Thinking the actual district looked like a salamander, an editor at the paper declared the creature a Gerry-mander, after Elbridge Gerry, the Governor who signed the redistricting bill into law. That it is drawn as a dragon is, I believe, another level of editorializing, or just a miscommunication between editor and illustrator, rather than zoological confusion. Source: Article the First by Stan Klos

In 2016, Moon Duchin, along with Mira Bernstein, Ari Nieh, Justin Solomon, and Michael Sarahan, created the Metric Geometry and Gerrymandering Group (MGGG).

Anna Nowogrodzki, writing for Tufts Now, the newsletter of Tufts University where Drs. Duchin and Bernstein are on the faculty, introduces the group:

“The esoteric world of pure math doesn’t usually play much of a role in promoting fairness in the U.S. political system, but Tufts mathematicians Moon Duchin and Mira Bernstein believe that needs to change. It is math, they say, that could help overcome gerrymandering—the practice of drawing legislative districts that favor one party, class or race.” Source: Tufts Now 19 July 2018.

I first heard of Solomon Wolf Golomb (1932–2016) a few years ago. He was a mathematician, engineer, and professor of electrical engineering at the University of Southern California. Among other things, he invented polyominoes, the puzzle pieces that inspired Tetris. Here are four quotes from Dr. Golomb that I think are very useful when thinking about models, mathematical and otherwise.

- Don’t apply a model until you understand the simplifying assumptions on which it is based and can test their applicability.

- The purpose of notation and terminology should be to enhance insight and facilitate computation – not to impress or confuse the uninitiated.

- Don’t expect that by having named a demon you have destroyed him.

- Distinguish at all times between the model and the real world. You will never strike oil by drilling through the map!

Image from: https://mathmunch.org/2016/05/05/solomon-golomb-rulers-and-52-master-pieces/

Check out the man himself at:

In the 2 July 2017 issue of Nature, there is an article in the Comment section about the science of measuring:

http://www.nature.com/news/metrology-is-key-to-reproducing-results-1.22348

The moral of the article is that if we all actually measured properly, it would go a long way toward fixing our reproducibility problem. A good point that, unfortunately, has to be made over and over and over.

Then, while looking for a good image to represent Metrology in this post, I learned that the 20th of May is World Metrology Day. According to http://www.npl.co.uk/world-metrology-day/,

“World Metrology Day is celebrated by over 80 countries each year on 20 May – the anniversary of the signing of the Metre Convention back in 1875. To this day, the agreement provides the basis for a single, coherent system of measurements that are traceable to the International System of Units (SI).”

Now THAT is worth cheering!

While this may not be something you think about much, some of you may recall what happened to the Mars Climate Orbiter because of a “failed translation of English units into metric units” (it probably crashed into Mars). That’s right, not everyone used SI units, and, well, oops.

Here is my suggestion: You know how we all (are supposed to) check the batteries in our smoke detectors whenever the time changes? I think that every May 20th we should all check our units. Mark your calendars.

Image from: https://degiuli.com/2017/04/19/6-project-management-lessons-from-the-mars-climate-orbiter-failure/

In 1977, Charles and Ray Eames produced a film for IBM called Powers of Ten (info at IMDb; available on YouTube) that clearly showed just how big big things are and just how small small things are. “Starting from a view of the entire known universe, the camera gradually zooms in [increasing the magnification by a factor of 10 between each image] until we are viewing the subatomic particles on a man’s hand.” (IMDb description, and a fine description it is). It is still a great way to try to get a feel for the scale of things. Jambor’s essay has now alerted me to an interactive site that allows the user to zoom in and out, with much greater resolution, exploring the different scales of the variety of microscopic things we think about so much these days. It is here: http://learn.genetics.utah.edu/content/cells/scale/. One excellent example provided is the size of 12 pt Times regular type (this post is written in 12 pt Times, but remember, you may not be viewing it at life size…). Have fun sliding down, and up, the scales.

Note: Some more information about how to make scale bars can be found in chapter 7 of LabMath, and there is a discussion of scale bars on ResearchGate. Nowadays, the software used to take pictures includes an option to add a scale bar. Importantly, though, you must calibrate the software so that it has the correct information for your microscope, and you may need to input the magnification manually, usually using a drop-down menu. To confirm that you’ve got it right, take a picture of a ruler and put a scale bar on it; if they match, you’ve got it right.
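For what it’s worth, the calibration the software needs boils down to micrometres per image pixel. Here is a minimal sketch of that arithmetic in Python; the sensor pixel size and magnification are made-up numbers for illustration, not values from any particular microscope:

```python
# Hypothetical calibration: the physical size of one camera pixel at the
# sensor, divided by the total magnification, gives the size of the patch
# of specimen that one image pixel represents.
sensor_pixel_um = 6.5   # micrometres per pixel on the camera sensor (assumed)
magnification = 40      # objective times any extra tube-lens factor (assumed)

um_per_pixel = sensor_pixel_um / magnification  # 0.1625 um per image pixel

# Length, in image pixels, of a 10 um scale bar at this calibration.
bar_px = 10 / um_per_pixel
```

A picture of a stage micrometer (or an ordinary ruler, at low magnification) lets you check that `um_per_pixel` is right before you trust any bar the software draws.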

*Thanks again to Dr. Jambor for contacting me.

Check out this definition of standard (from http://www.oxforddictionaries.com/us/definition/american_english/standard): “An idea or thing used as a measure, norm, or model in comparative evaluations.” ‘Comparative evaluations’ is what I want to emphasize here – when you draw bars indicating the uncertainty in the data you collected, those bars should be comparable to everyone else’s bars. Standard error bars are not comparable, and they force your audience to do extra work to figure out what you found; how annoying! In contrast, standard deviations always mean the exact same thing! How nice for your audience!

The first step of reporting any data set (collection of measurements) is to describe the distribution of your data. To do that, you first make a frequency plot – the x-axis shows the values of your measurements, the y-axis shows the number of times you got each of those values, like in figure 1. Then, you summarize the distribution by saying where the center is and how the measurements are spread out around that center point.
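If you want to see the mechanics without any plotting software, a frequency tally is only a few lines of Python (the measurements here are invented, just for illustration):

```python
from collections import Counter

# Hypothetical measurements, in whatever unit you recorded.
measurements = [9, 10, 10, 11, 10, 12, 9, 11, 10, 8]

# x-axis: each distinct value; y-axis: how many times it occurred.
freq = Counter(measurements)
for value in sorted(freq):
    print(f"{value:3}: {'*' * freq[value]}")
```

The asterisks form a crude sideways frequency plot: you can already see where the center is and how the values spread around it.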

Important aside: Thinking in terms of distributions will help with doing statistical analysis, too. Unless you are using non-parametric statistics, the statistics you will use tell you about distributions, not absolute numbers. As smart as they are, even statisticians cannot predict your data. So, in many ways, I advise thinking about the distribution of your data as soon as you possibly can.

Figure 1 shows identical frequency plots. Note, though, the scales of the y-axes have been changed to indicate different sample sizes; nevertheless, the distributions of the data points are exactly the same. If the distributions are identical, it follows that the description of the distributions should be identical. And the standard deviations are, indeed, identical: 1.6 and 1.6.

But look what happens to the standard error because of the difference in sample size: 0.3 vs 0.03 is a difference of an order of magnitude, even though the distributions are, you may remember, identical. Standard error is not a comparable evaluation. QED.
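The arithmetic is worth seeing spelled out: the standard error is just the standard deviation divided by the square root of the sample size, so it shrinks as the sample grows even though the distribution never changes. A quick sketch in Python (the sample sizes of 30 and 3,000 are my assumption; the post doesn’t state them, but they reproduce the 0.3 vs 0.03 above):

```python
import math

sd = 1.6  # identical for both distributions

for n in (30, 3000):  # assumed sample sizes
    se = sd / math.sqrt(n)
    print(f"n = {n:4d}: SD = {sd}, SE = {se:.1g}")
```

With these sample sizes the printout shows SE = 0.3 and SE = 0.03, while SD stays 1.6 in both: same distribution, standard errors an order of magnitude apart.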

Here is a visual that shows what happens when the frequency distribution data are presented in summary form, the kind of figure you are more likely to see in a paper:

The data on the right may “look” better, but that kind of spin is frowned upon in science: your audience will assume the bars describe the spread of your data, and they do not. The standard error is not standard.

I hope it is pretty clear at this point that the standard error *cannot* be a “standard” way to describe the distribution of your data. Did someone tell you it was OK, or traditional, to use standard error as long as you say what your sample size was? True, to a point, but is it OK to divide your uncertainty by 10 as long as you say you did it? I recommend going to that person and saying, “I’m confused. You said to use the standard error, but this easy-to-understand article by the well-respected biostatistician David Streiner (Maintaining standards: differences between the standard deviation and standard error, and when to use each. Can J Psychiatry. 1996 Oct;41(8):498-502; https://www.ncbi.nlm.nih.gov/pubmed/8899234) says that is wrong.” It’s a teachable moment; question authority.

The distribution of your data was what it was – don’t make it look like you are trying to hide something: share it proudly and accurately using the agreed-upon standard. You surely worked hard enough to collect it. Also, to repeat myself, the international community of scientists has declared that the standard deviation is the correct way to report uncertainty; so, reporting standard error is like reporting length in cubits instead of meters, and that is just being ornery for no good reason.

So, where does that leave standard error?

There are two kinds of statistics: descriptive and inferential. Above, I’ve been pontificating about descriptive statistics – numbers that describe the distribution of the measures you actually made on your sample. *IF* your data are normally distributed, the mean and standard deviation are useful summaries of what you found, because they are standard: just two numbers give your audience an interpretable summary of your data.

Inferential statistics let you make inferences about the population from which the sample came. I think it is fairly intuitive that if you measure many more individuals (that is, your sample size is bigger), your estimate of the distribution of the entire population will get better and better. One way to think about this is to look at the extremes: if your sample size is 1, you will make an absolutely terrible estimate of the mean of the population. If your sample size equals the size of the population, your estimate will be perfect. In between, the bigger your sample size, the closer to perfection you get with your estimate of the whole population. Thus, it is when you are calculating inferential statistics that you should take into account the sample size.

One useful statistic to report when discussing your inferences about the population is the confidence interval. It tells your audience the range within which you believe the mean of the population would be found. As always with statistics (“statistics means never having to say you are sure”), you also tell your audience the degree of confidence you have in those intervals. To calculate confidence intervals, divide your standard deviation by the square root of the sample size then multiply that quotient by 1.96, if you want to indicate that you are 95% confident, or 2.58 if you are 99% confident. That quotient, for some reason, got a name: the standard error. In other words, standard error is just a rest stop on the road trip towards confidence intervals: you might be tempted to stop in for little chocolate donuts and coffee, but you really don’t want to linger there or brag about having been there. Just keep moving towards your goal.
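Here is that recipe as a few lines of Python; the mean, standard deviation, and sample size are made up for illustration:

```python
import math

def confidence_interval(mean, sd, n, z=1.96):
    """Interval for the population mean; z = 1.96 for 95%, 2.58 for 99%."""
    se = sd / math.sqrt(n)   # the "rest stop": standard error
    margin = z * se
    return (mean - margin, mean + margin)

# Hypothetical sample: mean 10.0, SD 1.6, 30 measurements.
low, high = confidence_interval(10.0, 1.6, 30)
print(f"95% CI: ({low:.2f}, {high:.2f})")
```

Note that the standard error appears for exactly one line before being multiplied away into the margin, which is rather the point.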

I will end with a rule of thumb for interpreting graphs that (annoyingly) show standard error instead of standard deviation: in your head, double them, and that will give you a reasonable estimate of the 95% confidence intervals, although it will still leave you unclear about the data the authors collected. YOU will never make your readers do that, right?


It is called the Wason* 2-4-6 Task (I’ve also seen it referred to as the 2-4-8 test). It is the best exercise I’ve ever seen for demonstrating the perils of confirmation bias. It also stimulates great conversations about the importance of controls, the careful examination of assumptions, the importance of negative results, and, the biggie, how critical it is to attempt to DISPROVE your hypotheses, not prove them. When I’ve done it with colleagues as well as students, it has also stimulated discussions about experimental design, different kinds of creativity, and how having multiple hypotheses can help prevent falling dangerously in love with one.

There are many versions on the web; I like this site:

https://explorable.com/confirmation-bias

It has a very nice explanation and a charming video. If you can, stop it before he gives the answer (at 2’55″) – see if you can guess the rule.

I cannot recommend this exercise more highly. I do it with every new student that crosses my path, as well as friends and family (I am such a nerd). Everyone, without exception, thinks it is a fun and intriguing experience. And forevermore, you can help students realize when they are thinking in a biased way just by saying “2-4-6” so it also provides a handy tool for reinforcing the ideas.

Go forth and joyously spread the news of the Wason 2-4-6 Task!

*Peter Cathcart Wason, 1923-2003. Among many achievements, he coined the term “confirmation bias”.

I think there might be an error in the equation for converting RCF to rpm on page 140 of the second edition, hardcover.

Should the equation be:

rpm = (RCF / (r × 1.118 × 10^-6))^(1/2)

instead of 10^-5?

because the radius is measured in mm?

…

E. D.

Dear E. D.

Thank you for pointing out the issue. You are correct. The difference has to do with the units of radius.

If you look around, you will find that there is no convention for whether to report the radius of the rotor in mm or cm. Unfortunately, I didn’t make it clear that there are two versions in common use, and that they are both in the book specifically to show that. On page 139, the equation is written out correctly for mm, and it states explicitly that I mean radius in mm. On page 140, I switched to cm, with only a parenthetical comment that I had done so. I really should make that more obvious. When using cm, the exponent is –5; when using mm, it is –6.

One way to think about it is to imagine measuring the rotor in mm, then imagine measuring the exact same rotor in cm. The second measurement is going to be the first measurement divided by 10. But the RCF hasn’t changed. To take that “divide by 10” into account, therefore, you need to multiply the constant by 10, or you won’t get the same RCF. That “multiply by 10” gets folded into the constant, so the exponent becomes –5.
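If it helps, here is the same idea as a few lines of Python (the RCF and radius values are made up for illustration): measuring the same rotor in mm or in cm gives the same rpm, as long as you use the matching constant.

```python
import math

def rpm_from_rcf(rcf, radius, unit="mm"):
    # The unit of the radius is folded into the constant:
    # 1.118e-6 for mm, 1.118e-5 for cm.
    k = {"mm": 1.118e-6, "cm": 1.118e-5}[unit]
    return math.sqrt(rcf / (k * radius))

# Same (hypothetical) rotor, measured two ways; same answer.
print(rpm_from_rcf(1000, 85, "mm"))   # radius as 85 mm
print(rpm_from_rcf(1000, 8.5, "cm"))  # radius as 8.5 cm
```

Both calls print the same rpm, because the factor of 10 in the radius is exactly canceled by the factor of 10 in the constant.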


No, your eyes are not deceiving you; the title of the blog has changed slightly, from “How to Make Truly Terrible Graphs” to “How to Make Truly Terrible Tables.” This reflects the fact that it is possible to screw up (am I allowed to say that? Let’s make it “have things go amiss”) in areas other than graph-making. So, in the next few blogs, we’ll turn our attention to making tables for papers and presentations. (As a woodworker, I’ve screwed up making other types of tables, but that discussion will have to wait for a different forum.) The second part of the title may also raise some eyebrows; how can you be too accurate? After all, the need for accuracy has been drummed into our heads since we were scientists-in-training, learning the rules of the game at our supervisor’s knee. Whether we’re using an extremely expensive piece of lab equipment or designing a new paper-and-pencil scale, the mantra is the same: reduce the error in order to improve the reliability of our measurements and increase the accuracy. So in presenting our results in a table, how can we be “too accurate”?

As a matter of fact, it’s actually quite easy; all we have to do is ignore the imprecision inherent in any measurement and just keep printing out all of those numbers to the right of the decimal point. For starters, let’s take a look at Table 1, presenting some basic demographic information for a group in a study.

Table 1

Demographic Information

| Variable | Group 1 | Group 2 |
| --- | --- | --- |
| Number of males/females | 6/4 | 5/5 |
| Age in Years (SD) | 38.25 (10.05) | 37.60 (9.90) |
| Education in Years (SD) | 13.45 (4.20) | 12.90 (4.15) |

Starting off with Age, we report that it’s 38.25 years for the 10 people in Group 1. If we determined age by asking the people how old they were at their last birthday, then on average, there’ll be an error of about 180 days. For example, at my last birthday, I was 73 years old, but I’m actually 73 years, 8 months, and 12 days old on the day that I’m writing this. (For those who want to send cards or presents, my actual birth date is 12 November; my mailing address is available on request.) We can improve the accuracy by asking people how old they are as of their nearest birthday, which decreases the error to “only” about 90 days, on average. Now, just what does that ‘5’ in the second decimal place represent? It’s 1/100th of a year, or 3.65 days. Given the degree of inaccuracy in how we measured age to begin with, can we really justify this degree of accuracy in reporting the results, especially given that there are only 10 people in the group? If just one person in the group were replaced with another who is one year older, that would change the *first* decimal place from 2 to 3, a shift of slightly more than one month. To claim that we know an average participant’s age to four days’ accuracy does violence to the data.
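The arithmetic behind those two claims is easy to check:

```python
days_per_year = 365

# The '5' in the second decimal place of 38.25 years is 1/100 of a year.
second_decimal_days = days_per_year / 100  # 3.65 days

# Swapping one of ten people for someone a year older moves the mean age
# by 1/10 of a year.
one_swap_days = days_per_year / 10         # 36.5 days: just over a month
```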

In fact, that overestimation of the precision of the data pales in comparison to our estimate of the participants’ education. Because the school year is about 200 days long (and they often seemed like very long days), the last decimal place represents two days in class. Do you really think the data can support this degree of accuracy? I thought not.

If you think that these examples are fairly extreme, then (in the words of TV pitchmen), “But wait – there’s more!” I just checked a Web site for the population of Brazil, and the number it reported was 206,769,143. Seriously? Even if that’s based on some equation taking into account the estimated birth and death rates, let’s examine where the numbers came from. There first had to be a census to establish the baseline, and that data-gathering was likely spread out over many weeks or months, covering not only major cities but also remote villages buried deep in the Amazonian forest. During that time, some people were dying and others being born. But let’s not forget the words of Sir Josiah Stamp (1880-1941), a statistician and former Director of the Bank of England: “The government are very keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first instance from the *chowy dar* [village watchman in India], who just puts down what he damn pleases.” Meanwhile, the birth rate is 14.46/1,000 population, or slightly over 340 new souls *per hour*! (The figure for deaths is about 154/hour.) So, that final “143” in the population estimate is wrong within an hour of being written down. It would be far more “accurate” to say that the population is 206.8 million and leave it at that, indicating that the estimate is really just that – an estimate.
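If you want to check the per-hour arithmetic yourself, it takes three lines of Python (population and birth rate as quoted above):

```python
population = 206_769_143
birth_rate = 14.46 / 1000  # births per person per year

# About 3 million births per year, spread over 8,760 hours.
births_per_hour = population * birth_rate / (365 * 24)
print(round(births_per_hour))  # a bit over 340 per hour
```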

So remember, too much accuracy in a table is inaccurate.


Statistics Commentary Series: Commentary #9 – Sample Size Made Easy (Power a Bit Less So). Journal of Clinical Psychopharmacology, March 2015. DOI: 10.1097/JCP.0000000000000297. http://www.researchgate.net/publication/273463222

The reason I like this analogy so much is that magnification is intuitively clear to just about anyone who has ever stood far away from something, then moved in for a closer look. We know perfectly well that what we are looking at isn’t changing, but by changing position so that we can gather more information, we become more sure of what we are seeing. By gathering more data (having a larger sample size), we can be more sure* (have better statistical significance) of what we are seeing.

*Remember, p-values, which are what most people mean when they refer to statistical significance, *only* tell you the probability that you have incorrectly found a difference between two treatments (a false positive), so the words “can be more sure” are on purpose *not* “can know.”
