Correlation does not imply causation
"Correlation does not imply causation" is a phrase used in statistics to emphasize that a correlation between two variables does not imply that one causes the other. Many statistical tests calculate correlation between variables. A few go further, using correlation as a basis for testing a hypothesis of a true causal relationship; examples are the Granger causality test and convergent cross mapping.
The counter-assumption, that "correlation proves causation," is considered a questionable cause logical fallacy in that two events occurring together are taken to have a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for "with this, therefore because of this," and "false cause." A similar fallacy, that an event that follows another was necessarily a consequence of the first event, is sometimes described as post hoc ergo propter hoc (Latin for "after this, therefore because of this.").
For example, in a widely studied case, numerous epidemiological studies showed that women taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than a direct cause and effect, as had been supposed.
As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy does in fact reduce the incidence of coronary heart disease, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed.
In logic, the technical use of the word "implies" means "is a sufficient circumstance for". This is the meaning intended by statisticians when they say causation is not certain. Indeed, p implies q has the technical meaning of the material conditional: if p then q symbolized as p → q. That is "if circumstance p is true, then q follows." In this sense, it is always correct to say "Correlation does not imply causation."
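The technical sense of "implies" can be made concrete as the material conditional's truth table, sketched here in a few lines of Python:

```python
# Material conditional: "p implies q" (p -> q) is false only in the single
# case where p is true and q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Print the full truth table.
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5}  p -> q = {implies(p, q)}")
```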
However, in casual use, the word "implies" loosely means suggests rather than requires. The idea that correlation and causation are connected is certainly true; where there is causation, there is likely to be a correlation. Indeed, correlation is used when inferring causation; the important point is that such inferences are made after correlations are confirmed as real and all causal relationships are systematically explored using large enough data sets.
Edward Tufte, in a criticism of the brevity of "correlation does not imply causation", deprecates the use of "is" to relate correlation and causation (as in "Correlation is not causation"), on the grounds that, while not inaccurate, the statement is incomplete. While it is not the case that correlation is causation, simply stating their nonequivalence omits information about their relationship. Tufte suggests that the shortest true statement that can be made about causality and correlation is one of the following:
- "Empirically observed covariation is a necessary but not sufficient condition for causality."
- "Correlation is not causation but it sure is a hint."
For any two correlated events, A and B, the following relationships are possible:
- A causes B; (direct causation)
- B causes A; (reverse causation)
- A and B are consequences of a common cause, but do not cause each other;
- A causes B and B causes A (bidirectional or cyclic causation);
- A causes C which causes B (indirect causation);
- There is no connection between A and B; the correlation is a coincidence.
Thus there can be no conclusion made regarding the existence or the direction of a cause-and-effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained.
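The common-cause case in the list above can be illustrated with a small simulation (a hypothetical sketch; the variable names and parameters are arbitrary). A hidden variable C drives both A and B, producing a strong correlation between them even though neither influences the other:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hidden common cause C drives both A and B; A and B never influence each other.
c = [random.gauss(0, 1) for _ in range(10_000)]
a = [ci + random.gauss(0, 0.5) for ci in c]
b = [ci + random.gauss(0, 0.5) for ci in c]

# Despite the absence of any causal link between A and B, the correlation
# is strongly positive (near 0.8 for these noise levels).
print(f"corr(A, B) = {pearson(a, b):.2f}")
```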
Examples of illogically inferring causation from correlation
B causes A (reverse causation or reverse causality)
- Example 1
- The faster windmills are observed to rotate, the more wind is observed.
- Therefore wind is caused by the rotation of windmills. (Or, simply put: windmills, as their name indicates, are machines used to produce wind.)
In this example, the correlation (simultaneity) between windmill activity and wind velocity does not imply that wind is caused by windmills. It is rather the other way around, as suggested by the fact that wind doesn't need windmills to exist, while windmills need wind to rotate. Wind can be observed in places where there are no windmills, or where windmills are not rotating, and there are good reasons to believe that wind existed before the invention of windmills.
- Example 2
- When a country's debt rises above 90% of GDP, growth slows.
- Therefore, high debt causes slow growth.
In this example, drawn from the debated Reinhart–Rogoff findings, reverse causation is at least as plausible: slow growth raises the debt-to-GDP ratio, both by increasing government borrowing and by shrinking GDP, so high debt may be an effect of slow growth rather than its cause.
Third factor C (the common-causal variable) causes both A and B
All of these examples deal with a lurking variable, which is simply a hidden third variable that affects both of the correlated variables. A difficulty often also arises where the third factor, though fundamentally different from A and B, is so closely related to A and/or B as to be confused with them or very difficult to scientifically disentangle from them (see Example 4).
- Example 1
- Sleeping with one's shoes on is strongly correlated with waking up with a headache.
- Therefore, sleeping with one's shoes on causes headaches.
The above example commits the correlation-implies-causation fallacy, as it prematurely concludes that sleeping with one's shoes on causes headaches. A more plausible explanation is that both are caused by a third factor, in this case going to bed drunk, which thereby gives rise to a correlation. So the conclusion is false.
- Example 2
- Young children who sleep with the light on are much more likely to develop myopia in later life.
- Therefore, sleeping with the light on causes myopia.
This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press. However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia. It did find a strong link between parental myopia and the development of child myopia, also noting that myopic parents were more likely to leave a light on in their children's bedroom. In this case, the cause of both conditions is parental myopia, and the above-stated conclusion is false.
- Example 3
- As ice cream sales increase, the rate of drowning deaths increases sharply.
- Therefore, ice cream consumption causes drowning.
The aforementioned example fails to recognize the importance of time and temperature in relationship to ice cream sales. Ice cream is sold during the hot summer months at a much greater rate than during colder times, and it is during these hot summer months that people are more likely to engage in activities involving water, such as swimming. The increased drowning deaths are simply caused by more exposure to water-based activities, not ice cream. The stated conclusion is false.
- Example 4
- A hypothetical study shows a relationship between test anxiety scores and shyness scores, with a statistical r value (strength of correlation) of +.59.
- Therefore, it may be simply concluded that shyness, in some part, causally influences test anxiety.
However, as encountered in many psychological studies, another variable, a "self-consciousness score", is discovered that has a sharper correlation (+.73) with shyness. This suggests a possible "third variable" problem; however, when three such closely related measures are found, it further suggests that each may have bidirectional tendencies (see "bidirectional or cyclic causation" in the list above), the three being a cluster of correlated values each influencing the others to some extent. Therefore, the simple conclusion above may be false.
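One standard way to probe a suspected third variable is the first-order partial correlation, which removes the linear effect of the control variable. The +.59 (shyness-anxiety) and +.73 (shyness-self-consciousness) values come from the text above; the third input, r(anxiety, self-consciousness) = +.60, is a purely hypothetical value chosen for illustration:

```python
# First-order partial correlation: correlation of x and y after removing
# the linear effect of z. Inputs are the three pairwise correlations.
def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / ((1 - r_xz ** 2) * (1 - r_yz ** 2)) ** 0.5

# r(shyness, anxiety) = .59 and r(shyness, self-consciousness) = .73 are from
# the text; r(anxiety, self-consciousness) = .60 is a hypothetical assumption.
r = partial_corr(0.59, 0.73, 0.60)
print(f"r(anxiety, shyness | self-consciousness) = {r:.2f}")  # about 0.28
```

Under this assumption, controlling for self-consciousness cuts the apparent shyness-anxiety correlation roughly in half, which is exactly the kind of signal that warns against the simple causal conclusion.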
- Example 5
- Since the 1950s, both the atmospheric CO2 level and obesity levels have increased sharply.
- Hence, atmospheric CO2 causes obesity.
A more plausible explanation is a common cause: since the 1950s, populations have grown richer, and richer populations tend both to eat more food and to produce more CO2.
- Example 6
- HDL ("good") cholesterol is negatively correlated with incidence of heart attack.
- Therefore, taking medication to raise HDL decreases the chance of having a heart attack.
Further research has called this conclusion into question. Instead, it may be that other underlying factors, like genes, diet and exercise, affect both HDL levels and the likelihood of having a heart attack; it is possible that medicines may affect the directly measurable factor, HDL levels, without affecting the chance of heart attack.
Bidirectional causation: A causes B, and B causes A
Causality is not necessarily one-way; in a predator-prey relationship, predator numbers affect prey numbers, but prey numbers, i.e. food supply, also affect predator numbers.
The relationship between A and B is coincidental
The two variables aren't related at all, but correlate by chance. The more things are examined, the more likely it is that two unrelated variables will appear to be related. For example, the result of the last home game by the Washington Redskins prior to the presidential election predicted the outcome of every presidential election from 1936 to 2000 inclusive, despite the fact that the outcomes of football games had nothing to do with the outcome of the popular election. This streak was finally broken in 2004 (or in 2012, using an alternative formulation of the original rule). A collection of such coincidences finds, for example, a 99.79% correlation for the period 1999–2009 between U.S. spending on science, space, and technology and the number of suicides by suffocation, strangulation, and hanging.
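The multiple-comparisons effect behind such coincidences is easy to reproduce in a toy simulation: generate many completely independent random series and search all pairs for the strongest-looking "correlation".

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 200 short, mutually independent random series: searching all ~20,000 pairs
# is almost guaranteed to turn up at least one strong-looking "correlation".
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]
best = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest |r| among {200 * 199 // 2} pairs of unrelated series: {best:.2f}")
```

The best pair typically correlates above 0.9 even though every series is pure noise, which is how 99%-level "correlations" between spending and suicides can be mined from unrelated public datasets.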
In academia, there are a significant number of theories on causality; The Oxford Handbook of Causation (Beebee, Hitchcock & Menzies 2009) encompasses 770 pages. Among the more influential theories within philosophy are Aristotle's Four causes and Al-Ghazali's occasionalism. David Hume argued that causality is based on experience, and that experience is similarly based on the assumption that the future resembles the past, which in turn can only be based on experience – leading to circular logic. He concluded that causality is not based on actual reasoning: only correlation can actually be perceived. Immanuel Kant, according to Beebee, Hitchcock & Menzies (2009), held that "a causal principle according to which every event has a cause, or follows according to a causal law, cannot be established through induction as a purely empirical claim, since it would then lack strict universality, or necessity".
Outside the field of philosophy, theories of causation can be identified in classical mechanics, statistical mechanics, quantum mechanics, spacetime theories, biology, social sciences, and law. To establish a correlation as causal within physics, it is normally understood that the cause and the effect must connect through a local mechanism (cf. for instance the concept of impact) or a nonlocal mechanism (cf. the concept of field), in accordance with known laws of nature.
From the point of view of thermodynamics, universal properties of causes as compared to effects have been identified through the Second law of thermodynamics, confirming the ancient, medieval and Cartesian view that "the cause is greater than the effect" for the particular case of thermodynamic free energy. This, in turn, is challenged by popular interpretations of the concepts of nonlinear systems and the butterfly effect, in which small events cause large effects due to, respectively, unpredictability and an unlikely triggering of large amounts of potential energy.
Causality construed from counterfactual states
Intuitively, causation seems to require not just a correlation, but a counterfactual dependence. Suppose that a student performed poorly on a test and guesses that the cause was his not studying. To prove this, one thinks of the counterfactual – the same student writing the same test under the same circumstances but having studied the night before. If one could rewind history, and change only one small thing (making the student study for the exam), then causation could be observed (by comparing version 1 to version 2). Because one cannot rewind history and replay events after making small controlled changes, causation can only be inferred, never exactly known. This is referred to as the Fundamental Problem of Causal Inference – it is impossible to directly observe causal effects.
A major goal of scientific experiments and statistical methods is to approximate as best possible the counterfactual state of the world. For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation.
Well-designed experimental studies replace the equality of individuals, as in the previous example, with equality of groups. The objective is to construct two groups that are similar except for the treatment that the groups receive. This is achieved by selecting subjects from a single population and randomly assigning them to two or more groups. The likelihood of the groups behaving similarly to one another (on average) rises with the number of subjects in each group. If the groups are essentially equivalent except for the treatment they receive, and a difference in the outcome for the groups is observed, then this constitutes evidence that the treatment is responsible for the outcome, or in other words the treatment causes the observed effect. However, an observed effect could also be caused "by chance", for example as a result of random perturbations in the population. Statistical tests exist to quantify the likelihood of erroneously concluding that an observed difference exists when in fact it does not (for example see P-value).
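The steps above, random assignment followed by a significance test, can be sketched as a toy simulation. All names and numbers here are invented for illustration, not taken from any cited study; the significance test shown is a simple permutation test:

```python
import random

random.seed(42)

# Hypothetical randomized experiment: 200 subjects with noisy baseline scores.
population = [random.gauss(70, 10) for _ in range(200)]
random.shuffle(population)                           # random assignment to groups
control = population[:100]
treated = [score + 8 for score in population[100:]]  # assume treatment adds ~8 points

observed = sum(treated) / 100 - sum(control) / 100

# Permutation test: how often does randomly relabeling subjects produce a
# group difference at least as large as the one actually observed?
pooled = control + treated
extreme = 0
trials = 2000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[100:]) / 100 - sum(pooled[:100]) / 100
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

print(f"observed difference = {observed:.2f}, permutation p-value = {p_value:.4f}")
```

Because assignment was random, a between-group difference far larger than chance relabeling can produce is evidence that the treatment caused the effect, which is exactly what the small p-value quantifies.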
Causality predicted by an extrapolation of trends
When experimental studies are impossible and only pre-existing data are available, as is usually the case for example in economics, regression analysis can be used. Factors other than the potential causative variable of interest are controlled for by including them as regressors in addition to the regressor representing the variable of interest. False inferences of causation due to reverse causation (or wrong estimates of the magnitude of causation due to the presence of bidirectional causation) can be avoided by using explanators (regressors) that are necessarily exogenous, such as physical explanators like rainfall amount (as a determinant of, say, futures prices), lagged variables whose values were determined before the dependent variable's value was determined, instrumental variables for the explanators (chosen based on their known exogeneity), etc. See Causality#Statistics and economics. Spurious correlation due to mutual influence from a third, common, causative variable is harder to avoid: the model must be specified such that there is a theoretical reason to believe that no such underlying causative variable has been omitted from the model. In particular, underlying time trends of both the dependent variable and the independent (potentially causative) variable must be controlled for by including time as another independent variable.
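Controlling for a lurking variable by adding it as a regressor can be sketched with a toy simulation (hypothetical data, not from any real economic series): z drives both x and y, so the simple slope of y on x is spurious, while including z as a second regressor drives the coefficient on x toward zero.

```python
import random

random.seed(7)

# Hypothetical data: confounder z drives both x and y; y has NO direct
# dependence on x.
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [2 * zi + random.gauss(0, 1) for zi in z]

def centered(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

xc, yc, zc = centered(x), centered(y), centered(z)
Sxx = sum(a * a for a in xc)
Szz = sum(a * a for a in zc)
Sxz = sum(a * b for a, b in zip(xc, zc))
Sxy = sum(a * b for a, b in zip(xc, yc))
Szy = sum(a * b for a, b in zip(zc, yc))

# Simple regression of y on x alone: picks up the confounded association.
naive_slope = Sxy / Sxx

# Multiple regression y ~ x + z (normal equations for centered data):
# the coefficient on x once z is controlled for.
det = Sxx * Szz - Sxz ** 2
controlled_slope = (Sxy * Szz - Sxz * Szy) / det

print(f"slope of y on x, ignoring z:       {naive_slope:.2f}")     # near 1.0 (spurious)
print(f"slope of y on x, controlling for z: {controlled_slope:.2f}")  # near 0
```

The spurious slope vanishes here only because z is observed and included; the harder problem noted above is ruling out confounders that were never measured.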
Use of correlation as scientific evidence
Much scientific evidence is based upon correlations between variables – they are observed to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is often not accepted as a legitimate form of argument.
However, sometimes people commit the opposite fallacy – dismissing correlation entirely, as if it did not suggest causation at all. This would dismiss a large swath of important scientific evidence. Since it may be difficult or ethically impossible to run controlled double-blind studies, correlational evidence from several different angles may be the strongest causal evidence available. The combination of limited available methodologies with this dismissal of correlation has on occasion been used to counter scientific findings; for example, the tobacco industry historically relied on a dismissal of correlational evidence to reject a link between tobacco and lung cancer.
Correlation is a valuable type of scientific evidence in fields such as medicine, psychology, and sociology. But first correlations must be confirmed as real, and then every possible causative relationship must be systematically explored. In the end correlation can be used as powerful evidence for a cause-and-effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. But it is also one of the most abused types of evidence, because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.
See also
- Closely related fallacies:
- Ecological fallacy
- Spurious relationship
- Texas sharpshooter fallacy
- Post hoc ergo propter hoc
- Fundamental attribution error
- Similar fallacies:
- Coincidence#Coincidence and causality
- Alignments of random points • Ley line • Bible code
- French paradox
- Misuse of statistics
- Confirmation bias
- Affirming the consequent
- Jumping to conclusions
- Mierscheid law
- Base-rate fallacy
- Cross-sectional study#Weaknesses of aggregated data
- Attribution bias
- Mean world syndrome
- Remote viewing
- Statistical concepts:
- Design of experiments
- Validity (statistics)
- Statistical power
- Sensitivity and specificity
- Estimation theory
- Selection bias
- Heuristics in judgment and decision-making
- Representativeness heuristic
- Causal inference
- Chain reaction
- Epidemiological method
- Pirates and global warming
- Four causes
- Molecular pathology
- Molecular pathological epidemiology
- Normally distributed and uncorrelated does not imply independent
- Observational study
- Cargo Cult
References
- Tufte 2006, p. 5.
- Aldrich, John (1995). "Correlations Genuine and Spurious in Pearson and Yule" (PDF). Statistical Science. 10 (4): 364–376. doi:10.1214/ss/1177009870. JSTOR 2246135. Archived from the original (PDF) on February 19, 2006.
- Lawlor DA, Davey Smith G, Ebrahim S (June 2004). "Commentary: the hormone replacement-coronary heart disease conundrum: is this the death of observational epidemiology?". Int J Epidemiol. 33 (3): 464–7. doi:10.1093/ije/dyh124. PMID 15166201.
- Tufte 2006, p. 4.
- "Reinhart-Rogoff, Continued".
- Quinn, Graham E.; Shin, Chai H.; Maguire, Maureen G.; Stone, Richard A. (May 1999). "Myopia and ambient lighting at night". Nature. 399 (6732): 113–4. doi:10.1038/20094. PMID 10335839.
- CNN, May 13, 1999. Night-light may lead to nearsightedness
- Ohio State University Research News, March 9, 2000. Night lights don't lead to nearsightedness, study suggests
- Zadnik, Karla; Jones, Lisa A.; Irvin, Brett C.; Kleinstein, Robert N.; Manny, Ruth E.; Shin, Julie A.; Mutti, Donald O. (2000). "Vision: Myopia and ambient night-time lighting". Nature. 404 (6774): 143–144. doi:10.1038/35004661. PMID 10724157.
- Gwiazda, J.; Ong, E.; Held, R.; Thorn, F. (2000). "Vision: Myopia and ambient night-time lighting". Nature. 404 (6774): 144–144. doi:10.1038/35004663. PMID 10724158.
- Stone, Richard A.; Maguire, Maureen G.; Quinn, Graham E. (2000). "Vision: reply: Myopia and ambient night-time lighting". Nature. 404 (6774): 144–144. doi:10.1038/35004665.
- Carducci, Bernardo J. (2009). The Psychology of Personality: Viewpoints, Research, and Applications (2nd ed.). John Wiley & Sons. ISBN 978-1-4051-3635-8.
- Ornish, Dean. "Cholesterol: The good, the bad, and the truth" (retrieved 3 June 2011)
- Beebee, Hitchcock & Menzies 2009
- Morris, William Edward (2001). "David Hume". The Stanford Encyclopedia of Philosophy.
- Lloyd, A.C. (1976). "The principle that the cause is greater than its effect". Phronesis. 21 (2): 146–156. doi:10.1163/156852876x00101. JSTOR 4181986.
- Holland, Paul W. (1986). "Statistics and Causal Inference". Journal of the American Statistical Association. 81 (396): 945–960. doi:10.1080/01621459.1986.10478354.
- Pearl, Judea (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press. ISBN 9780521773621.
- Novella. "Evidence in Medicine: Correlation and Causation". Science and Medicine. Science-Based Medicine.
- Beebee, Helen; Hitchcock, Christopher; Menzies, Peter (2009). The Oxford Handbook of Causation. Oxford University Press. ISBN 978-0-19-162946-4.
- Tufte, Edward R. (2006). "The Cognitive Style of PowerPoint: Pitching Out Corrupts Within" (2nd ed.). Cheshire, Connecticut: Graphics Press. ISBN 0-9613921-5-0.
External links
- "The Art and Science of cause and effect": a slide show and tutorial lecture by Judea Pearl
- Causal inference in statistics: An overview, by Judea Pearl (September 2009)
- Spurious Correlations, site searching and showing such correlations.
- What Everyone Should Know about Statistical Correlation