This has come up before:

– Basketball Stats: Don’t model the probability of win, model the expected score differential. (A toy simulation after this list sketches the contrast.)

– Econometrics, political science, epidemiology, etc.: Don’t model the probability of a discrete outcome, model the underlying continuous variable

– Thinking like a statistician (continuously) rather than like a civilian (discretely)

– Message to Booleans: It’s an additive world, we just live in it
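To make the basketball example concrete, here is a toy simulation (my own invented numbers, not anything from the linked posts): the same set of games analyzed two ways, a linear model for the continuous score differential and a logistic model for the dichotomized win/loss indicator. Keeping only the win indicator throws away how close each game was, and the estimate of the strength effect gets correspondingly noisier.

```python
# Toy simulation (invented numbers): estimate the effect of a team-strength
# gap on game outcomes two ways, keeping the continuous margin vs. keeping
# only the binary win/loss indicator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
strength_gap = rng.normal(0, 1, n)                   # home minus away strength
margin = 3.0 * strength_gap + rng.normal(0, 10, n)   # final score differential
win = (margin > 0).astype(int)                       # the dichotomized outcome

X = sm.add_constant(strength_gap)
ols = sm.OLS(margin, X).fit()          # continuous outcome: score differential
logit = sm.Logit(win, X).fit(disp=0)   # discrete outcome: win/loss

# Compare how sharply each model pins down the strength effect; the z/t
# statistics are comparable even though the coefficients are on different
# scales, and the margin model typically comes out ahead.
print("margin model:   t =", round(ols.tvalues[1], 1))
print("win/loss model: z =", round(logit.tvalues[1], 1))
```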

And it came up again recently.

Epidemiologist Sander Greenland has written about “dichotomania: the compulsion to replace quantities with dichotomies (‘black-and-white thinking’), even when such dichotomization is unnecessary and misleading for inference.”

I’d avoid the misleadingly clinical-sounding term “compulsion,” and I’d similarly prefer a word without the pejorative suffix “-mania”; hence I’d rather just speak of “deterministic thinking” or “discrete thinking.” But I agree with Greenland’s general point that this tendency to prematurely collapse the wave function contributes to many problems in statistics and science.

Often when the problem of deterministic thinking comes up in discussion, I hear people explain it away: decisions have to be made (FDA drug trials are often brought up here); all rules are ultimately deterministic (for example, a confidence interval ends up being read simply as whether or not it includes zero); it’s really a problem of incentives or publication bias; or, sure, everyone knows that thinking of hypotheses as “true” or “false” is wrong, and statistical significance and other summaries are just convenient shorthands for expressions of uncertainty that are well understood.

But I’d argue, with Eric Loken, that inappropriate discretization is not just a problem with statistical practice; it’s also a problem with how people think: the idea of things being on or off is “actually the internal working model for a lot of otherwise smart scientists and researchers.”
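A toy calculation (with numbers I made up for illustration) shows how little distance can separate the two “internal states”: two hypothetical studies report nearly identical estimates and standard errors, yet they land on opposite sides of the p = 0.05 line and so receive opposite on/off labels.

```python
# Toy example (invented numbers): two studies with nearly identical continuous
# summaries get opposite discrete labels once the results are dichotomized at
# p = 0.05.
import numpy as np
from scipy import stats

studies = {"study A": (2.0, 0.95), "study B": (1.9, 1.05)}  # (estimate, std. error)

for name, (est, se) in studies.items():
    z = est / se
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    label = "significant" if p < 0.05 else "not significant"
    print(f"{name}: estimate = {est}, "
          f"95% CI = ({est - 1.96*se:.2f}, {est + 1.96*se:.2f}), "
          f"p = {p:.3f} -> {label}")

# The continuous summaries (estimates and intervals) are almost the same;
# only the dichotomized conclusion differs.
```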

This came up in some of the recent discussions on abandoning statistical significance, and I want to use this space to emphasize one more time the problem of inappropriate discrete modeling.

The issue arose in my 2011 paper, Causality and Statistical Learning:

More generally, anything that plausibly could have an effect will not have an effect that is exactly zero. I can respect that some social scientists find it useful to frame their research in terms of conditional independence and the testing of null effects, but I don’t generally find this approach helpful—and I certainly don’t believe that it is necessary to think in terms of conditional independence in order to study causality. Without structural zeros, it is impossible to identify graphical structural equation models.

The most common exceptions to this rule, as I see it, are independences from design (as in a designed or natural experiment) or effects that are zero based on a plausible scientific hypothesis (as might arise, e.g., in genetics, where genes on different chromosomes might have essentially independent effects), or in a study of ESP. In such settings I can see the value of testing a null hypothesis of zero effect, either for its own sake or to rule out the possibility of a conditional correlation that is supposed not to be there.

Another sort of exception to the “no true zeros” rule comes from information restriction: a person’s decision should not be affected by knowledge that he or she does not have. For example, a consumer interested in buying apples cares about the total price he pays, not about how much of that goes to the seller and how much goes to the government in the form of taxes. So the restriction is that the utility depends on prices, not on the share of that going to taxes. That is the type of restriction that can help identify demand functions in economics.

I realize, however, that my perspective that there are no true zeros (information restrictions aside) is a minority view among social scientists and perhaps among people in general, on the evidence of [cognitive scientist Steven] Sloman’s book [Causal Models: How People Think About the World and Its Alternatives]. For example, from chapter 2: “A good politician will know who is motivated by greed and who is motivated by larger principles in order to discern how to solicit each one’s vote when it is needed” (p. 17). I can well believe that people think in this way but I don’t buy it: just about everyone is motivated by greed and by larger principles. This sort of discrete thinking doesn’t seem to me to be at all realistic about how people behave—although it might very well be a good model about how people characterize others.

In the next chapter, Sloman writes, “No matter how many times A and B occur together, mere co-occurrence cannot reveal whether A causes B, or B causes A, or something else causes both” (p. 25; emphasis added). Again, I am bothered by this sort of discrete thinking. I will return in a moment with an example, but just to speak generally, if A could cause B, and B could cause A, then I would think that, yes, they could cause each other. And if something else could cause them both, I imagine that could be happening along with the causation of A on B and of B on A.

To continue:

[Two screenshots continuing the quoted passage from the paper.]

Let’s put this another way. Sloman’s book is called, “Causal Models: How People Think About the World and Its Alternatives,” and it makes sense to me that people think about the world discretely. My point, which I think aligns with those of Loken and Greenland, is that this discrete model of the world is typically inaccurate and misleading, so it’s worth fighting this tendency in ourselves. The point of the above example is that Sloman, who’s writing about how people think, is himself slipping into this error.

One more time

The above is just an example. My point is not to argue with Sloman—his book stands on its own, and his ideas can be valuable even if (or especially if!) his perspective on discrete modeling is different from mine. My point is that discrete thinking is not simply something people do because they have to make decisions, nor is it something that people do just because they have some incentive to take a stance of certainty. So when we’re talking about the problems of deterministic thinking, or premature collapse of the “wave function” of inferential uncertainty, we really are talking about a failure to incorporate enough of a continuous view of the world in our mental model.
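Here is one more toy simulation (parameters invented for illustration) of the “no true zeros” point from the quoted passage: when an effect is tiny but not exactly zero, the discrete reject/don’t-reject verdict is mostly a statement about the sample size, whereas the continuous estimate with its uncertainty remains meaningful at every n.

```python
# Toy simulation (my own parameter choices): a tiny but nonzero effect is
# "not significant" at small n and "significant" at huge n, even though the
# effect itself never changes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.02          # small, but not exactly zero

for n in [100, 10_000, 1_000_000]:
    y = rng.normal(true_effect, 1.0, n)
    est, se = y.mean(), y.std(ddof=1) / np.sqrt(n)
    p = 2 * (1 - stats.norm.cdf(abs(est / se)))
    print(f"n = {n:>9,}: estimate = {est:+.4f} +/- {se:.4f}, p = {p:.3f}")

# Small n: "not significant." Huge n: "significant." The discrete verdict
# tracks the sample size; the continuous estimate and its uncertainty are
# what carry the information.
```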

P.S. Tomorrow’s post: I think that science is mostly “Brezhnevs.” It’s rare to see a “Gorbachev” who will abandon a paradigm just because it doesn’t do the job. Also, moving beyond naive falsificationism.
