Correlation and causation

Sep 2015
Brown Township, Ohio
This insane idiot knows the difference between correlation and causation. Stretching correlation into coordinate geometry means that where the X and Y axes meet, you get point of impact. Coordinate geometry in 3-D is also known as solid geometry. Proportional navigation is calculus. PID is a common acronym in industry today: Proportional, Integral, Derivative.
 
Sep 2018
cleveland ohio
In statistics, many statistical tests calculate correlations between variables and when two variables are found to be correlated, it is tempting to assume that this shows that one variable causes the other.[1][2] That "correlation proves causation" is considered a questionable cause logical fallacy when two events occurring together are taken to have established a cause-and-effect relationship. This fallacy is also known as cum hoc ergo propter hoc, Latin for "with this, therefore because of this", and "false cause". A similar fallacy, that an event that followed another was necessarily a consequence of the first event, is the post hoc ergo propter hoc (Latin for "after this, therefore because of this.") fallacy.

For example, in a widely studied case, numerous epidemiological studies showed that women taking combined hormone replacement therapy (HRT) also had a lower-than-average incidence of coronary heart disease (CHD), leading doctors to propose that HRT was protective against CHD. But randomized controlled trials showed that HRT caused a small but statistically significant increase in risk of CHD. Re-analysis of the data from the epidemiological studies showed that women undertaking HRT were more likely to be from higher socio-economic groups (ABC1), with better-than-average diet and exercise regimens. The use of HRT and decreased incidence of coronary heart disease were coincident effects of a common cause (i.e. the benefits associated with a higher socioeconomic status), rather than a direct cause and effect, as had been supposed.[3]
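The confounding pattern in the HRT example can be illustrated with a small simulation (the numbers here are hypothetical, not the actual HRT data): a common cause Z drives both X and Y, producing a correlation between X and Y even though neither causes the other, and the correlation disappears once Z is accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical common cause, e.g. a standardized socioeconomic-status score.
z = rng.normal(size=n)

# X and Y each depend on Z plus independent noise; neither causes the other.
x = 0.8 * z + rng.normal(size=n)
y = 0.8 * z + rng.normal(size=n)

# Marginal correlation between X and Y is clearly nonzero...
r_xy = np.corrcoef(x, y)[0, 1]

# ...but vanishes once we "control for" Z by correlating the residuals.
x_resid = x - 0.8 * z
y_resid = y - 0.8 * z
r_resid = np.corrcoef(x_resid, y_resid)[0, 1]

print(f"corr(X, Y)           = {r_xy:.2f}")
print(f"corr(X, Y | Z known) = {r_resid:.2f}")
```

With these coefficients the theoretical marginal correlation is 0.64/1.64, roughly 0.39, while the residual correlation is zero up to sampling noise.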

As with any logical fallacy, identifying that the reasoning behind an argument is flawed does not imply that the resulting conclusion is false. In the instance above, if the trials had found that hormone replacement therapy does in fact reduce the likelihood of coronary heart disease, the assumption of causality would have been correct, although the logic behind the assumption would still have been flawed. Indeed, a few go further, using correlation as a basis for testing a hypothesis in order to establish a true causal relationship; examples are the Granger causality test, convergent cross mapping, and Liang-Kleeman information flow.[4]
Correlation does not imply causation - Wikipedia
 
Sep 2018
cleveland ohio
Much of scientific evidence is based upon a correlation of variables[22] – they are observed to occur together. Scientists are careful to point out that correlation does not necessarily mean causation. The assumption that A causes B simply because A correlates with B is often not accepted as a legitimate form of argument.

However, sometimes people commit the opposite fallacy – dismissing correlation entirely. This would dismiss a large swath of important scientific evidence.[22] Since it may be difficult or ethically impossible to run controlled double-blind studies, correlational evidence from several different angles may be useful for prediction despite failing to provide evidence for causation. For example, social workers might be interested in knowing how child abuse relates to academic performance. Although it would be unethical to perform an experiment in which children are randomly assigned to receive or not receive abuse, researchers can look at existing groups using a non-experimental correlational design. If in fact a negative correlation exists between abuse and academic performance, researchers could potentially use this knowledge of a statistical correlation to make predictions about children outside the study who experience abuse, even though the study failed to provide causal evidence that abuse decreases academic performance.[23] The combination of limited available methodologies with the dismissing correlation fallacy has on occasion been used to counter a scientific finding. For example, the tobacco industry has historically relied on a dismissal of correlational evidence to reject a link between tobacco and lung cancer,[24] as did biologist and statistician Ronald Fisher.[25][26][27][28][29][30][31]
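The prediction use-case above can be sketched like this (the data are entirely made up, since the cited study is not reproduced here): fit a line to observed exposure/outcome pairs and use it to predict for new cases, while making no claim about what an intervention would do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observational data: an "exposure" score and an outcome score
# that are negatively correlated in the sample (no experiment was run).
exposure = rng.uniform(0, 10, size=500)
outcome = 80 - 2.0 * exposure + rng.normal(scale=5, size=500)

r = np.corrcoef(exposure, outcome)[0, 1]  # strongly negative here

# A least-squares fit supports prediction for new, unstudied cases...
slope, intercept = np.polyfit(exposure, outcome, deg=1)
predicted = slope * 7.5 + intercept

# ...but says nothing about what would happen if we *intervened* on exposure.
print(f"r = {r:.2f}, predicted outcome at exposure = 7.5: {predicted:.1f}")
```

This is exactly the distinction the paragraph draws: the fitted line is useful for prediction even though, on its own, it is no evidence that lowering exposure would raise the outcome.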

Correlation is a valuable type of scientific evidence in fields such as medicine, psychology, and sociology. But first correlations must be confirmed as real, and then every possible causative relationship must be systematically explored. In the end correlation alone cannot be used as evidence for a cause-and-effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes. It is one of the most abused types of evidence, because it is easy and even tempting to come to premature conclusions based upon the preliminary appearance of a correlation.[citation needed]
 
Sep 2018
cleveland ohio
Fortunately, Statistics Canada and the FBI make good statistical bedfellows; as far as we can tell, the two agencies measure cities in the same way and report comparable figures. So: how does Toronto's homicide rate compare to American cities'?
 
Sep 2018
cleveland ohio
Extraordinarily well; Toronto's numbers absolutely pale in comparison to American cities'. Its metropolitan homicide rate in 2006 was lower than that of every American city with a population above 500,000 (charted above). And of the seventy-two American cities with populations over 250,000, Toronto's 2006 metropolitan homicide rate (1.8 per 100,000) was lower than every city except Plano, Texas (the wealthiest city in the United States), which had a homicide rate of 1.6 per 100,000.
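Rates like these are just homicide counts divided by population, scaled to 100,000. The counts below are hypothetical, chosen only to show the arithmetic behind a figure like Toronto's 1.8 per 100,000.

```python
def homicide_rate_per_100k(homicides: int, population: int) -> float:
    """Annual homicide rate per 100,000 residents."""
    return homicides / population * 100_000

# Hypothetical example: 90 homicides in a metro area of 5,000,000 residents
# works out to exactly 1.8 per 100,000.
rate = homicide_rate_per_100k(90, 5_000_000)
print(f"{rate:.1f} per 100,000")
```

Per-capita scaling is what makes cities of very different sizes comparable in the first place; raw homicide counts would tell you little.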