I hope not; I suspect it is reasonable to expect that chocolate does not cause one to become a Nobel prize winner.
So let us extract two variables from this statement.
A, consuming chocolate, and B, being a Nobel prize winner.
The causal diagram for this statement would basically look like this: A → B, the arrow meaning that A causes B.
As you can see, this is a very primitive causal diagram.
Now we can come to the point: although we see a strong correlation between chocolate consumption and Nobel prize winning, we can ask ourselves whether there is some other variable C, such as the country's wealth, that causes both Nobel prize winning and chocolate consumption, or whether it is the country's educational system that causes both, and so on.
Let us imagine, as indeed is the case, that there is a common cause C for both.
Then the causal diagram looks like this: A ← C → B. Now we can mention Reichenbach's common cause principle, which states that if variables A and B have a common cause C, then conditioning on C wipes out the correlation between them; in other words, the two variables become independent once we condition on their common cause.
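One common way to write the principle formally is as a conditional independence statement:

$$P(A, B \mid C) = P(A \mid C)\,P(B \mid C)$$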
Nice enough.
So the causal diagram we should actually be looking at is the following: A ← C → B, with no direct arrow from A to B. This is what causality is all about: establishing that there is no common cause that merely makes it look as if A causes B.
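To see the principle in action, here is a minimal simulation sketch. The variable names mirror the chocolate example, and all the numbers (wealth probability, consumption rate, prize rate) are made-up assumptions purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Assumed toy model: C is the common cause, with no direct A -> B link.
C = rng.binomial(1, 0.5, size=n)        # e.g. wealthy country or not
A = rng.binomial(1, 0.2 + 0.6 * C)      # chocolate consumption, driven by C
B = rng.binomial(1, 0.01 + 0.10 * C)    # Nobel prizes, driven by C

# Marginally, A and B look related
print("corr(A, B)        =", round(np.corrcoef(A, B)[0, 1], 3))

# Conditioning on the common cause wipes the correlation out
for c in (0, 1):
    mask = C == c
    print(f"corr(A, B | C={c}) =", round(np.corrcoef(A[mask], B[mask])[0, 1], 3))
```

Within each stratum of C the correlation is essentially zero, even though the marginal correlation is clearly positive.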
The practice of ruling out common causes has long been established in the medical community in the form of medical trials, well before people started talking about causal inference.
So how do we establish this? First, we are going to give the medical trial a more general, more useful name.
We are going to call it a controlled experiment.
Controlled experiments are nice: we can act on a variable directly and see how the other variables in our causal diagram change.
In a medical trial, this would mean taking two groups of people, group 1 taking the placebo and group 2 taking the actual medicine, and observing the results.
Naturally, in medical trials we want these people to come from the same distribution, i.e. to be similar.
Ideally, we would want the two groups to be identical; this would be the perfect medical trial, eliminating any other potential common causes. But a perfect controlled experiment is unrealistic to expect.
Now you observe the results in the two groups and determine, with some level of confidence, whether the medicine is effective in curing the disease.
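As a rough sketch of how the results of such a trial might be compared, here is one possible approach; the group sizes, cure rates, and the bootstrap confidence interval are purely illustrative assumptions, not a prescription for real trial analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial outcomes: 1 = cured, 0 = not cured
placebo = rng.binomial(1, 0.30, size=200)    # group 1: placebo, assumed 30% cure rate
treatment = rng.binomial(1, 0.55, size=200)  # group 2: medicine, assumed 55% cure rate

observed_diff = treatment.mean() - placebo.mean()

# Bootstrap a confidence interval for the difference in cure rates
diffs = []
for _ in range(10_000):
    t = rng.choice(treatment, size=treatment.size, replace=True)
    p = rng.choice(placebo, size=placebo.size, replace=True)
    diffs.append(t.mean() - p.mean())
low, high = np.percentile(diffs, [2.5, 97.5])

print(f"observed difference in cure rate: {observed_diff:.2f}")
print(f"95% bootstrap CI: [{low:.2f}, {high:.2f}]")
```

If the interval stays well above zero, we have some confidence that the medicine outperforms the placebo.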
In causal language, acting on a variable in this way is called an intervention.
An intervention means that we take a variable and set it manually to a value, without changing anything else.
This is basically like saying we take the same people, before applying either the placebo or the medicine, and then apply both, to see whether the disease is cured by the medicine or by something else.
Generally, people find it difficult to differentiate between an intervention and merely conditioning on an event, that is, setting the probability of the event's realization to 1.
The difference is that an intervention modifies the causal diagram itself: the intervened variable stops listening to its usual causes, so we end up with two different causal diagrams on which we can calculate our probabilities and reach a conclusion about the actual causal structure.
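To make the seeing-versus-doing distinction concrete, here is a small sketch that reuses the same assumed toy model as above and compares conditioning on A with intervening on A; again, every number is an invented assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Assumed toy model: wealth C drives both chocolate A and Nobel prizes B,
# with no direct effect of A on B.
C = rng.binomial(1, 0.5, size=n)
A = rng.binomial(1, 0.2 + 0.6 * C)
B = rng.binomial(1, 0.01 + 0.10 * C)

# Seeing: conditioning on A = 1 in observational data picks out wealthier countries
print("P(B=1 | A=1)     =", round(B[A == 1].mean(), 3))
print("P(B=1 | A=0)     =", round(B[A == 0].mean(), 3))

# Doing: intervening on A cuts the C -> A edge. B's mechanism only listens to C,
# so in the mutilated diagram the intervention leaves B at its baseline rate.
B_do = rng.binomial(1, 0.01 + 0.10 * C)   # B generated exactly as before
print("P(B=1 | do(A=1)) =", round(B_do.mean(), 3))
```

Conditioning on A inflates the apparent prize rate, while intervening on A leaves it at the baseline, which is exactly what the mutilated diagram predicts.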
Luckily, we have Prof. Judea Pearl to thank for inventing causal calculus, for which he received the prestigious Turing Award, and he will probably be remembered as the founder of modern causal inference.
I would suggest reading his books on causality to dive more deeply into the topic:
1. The Book of Why
2. Causality: Models, Reasoning and Inference
3. Causal Inference in Statistics: A Primer
I personally think that the first one is good for a general audience, since it gives a good glimpse into the history of statistics and causality and then goes a bit deeper into the theory behind causal inference.
So far we have been talking mostly about statistics, but the question remains: how does this actually relate to AI algorithms, that is, machine learning algorithms?
The connection is rather straightforward.
The approaches we currently use cannot distinguish cause from effect by learning from data alone: in machine learning we mostly work with probability distributions, and our models see things occur together and automatically assume that one is predictive of the other.
Personally, I cannot imagine that these models can or will be deployed safely in the real world.
Especially if we want to develop something like Schmidhuber's artificial scientist, we need to be able to distinguish between cause and effect and reason about them.
In science, we constantly need to accept or reject hypotheses to reach conclusions.
This is why causal inference is not just good to have; it is a necessity if we want to reach valid conclusions.
There are countless examples of studies that reached false conclusions as a consequence of statistics not being used properly, as shown in this article or this one.
I am confident that the field is going to cause a scientific renaissance in the community.
As a takeaway from this article, remember the following, which you hopefully already know: correlation does not imply causation.

Till next time!