This is a very nice sanity check, and the formula also gives us a P-Value.

6. Evaluating our P-Value

When we are looking to either accept or reject our null hypothesis, we want to consult our p-value.

Earlier in the tutorial, we defined our alpha as .05. This means that, given our null hypothesis is true, we expect our test to be correct 95% of the time, with the other 5% incorrect because of random chance.

Since this is a two-tailed test, which we defined above, we will split the alpha error in half and put it on both sides of our distribution.

This means that the p-value we are looking for is .025, or 2.5 percent.

If our p-value is less than or equal to our anticipated error then we would reject our Null hypothesis.

The logic behind this is a little counter-intuitive.

The p-value is the likelihood of us getting a RANDOM result outside of our 95% confidence interval.

If the p-value is smaller than our alpha, that means it is unlikely that the result outside of our 95% was random, meaning that it was significant and shouldn’t be ignored as an error.

Whereas if the p-value is higher than our alpha, it means it is likely that the result outside of our 95% interval is random, so we shouldn't freak out and will fail to reject (a.k.a. keep) our null hypothesis.
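As a minimal sketch of that decision rule, a two-tailed p-value can be computed from a z-score using Python's standard library (the z-score of 2.3 below is a hypothetical test statistic for illustration, not a value from the FIFA data):

```python
from statistics import NormalDist

def two_tailed_p(z_score):
    """Two-tailed p-value: probability of a result at least this extreme
    in either tail of the standard normal distribution."""
    return 2 * (1 - NormalDist().cdf(abs(z_score)))

alpha = 0.05
z = 2.3  # hypothetical test statistic
p = two_tailed_p(z)
print(round(p, 4))   # ~0.0214
print(p <= alpha)    # True -> reject the null hypothesis
```

Note that the comparison is made against the full alpha of .05 because the two-tailed p-value already sums the probability in both tails.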

Conclusion:

For our example, since our p-value is effectively ZERO, we reject our null hypothesis, accept our alternative hypothesis, and conclude that there is a difference in the overall skill of players from England and Spain.

It appears, according to the FIFA 19 player data set, that Spanish players are significantly better than the players from England.

So if you are trying to become a professional soccer player, or you are looking to have your child be the next Lionel Messi, maybe Spain is where you should train them.

Some Side Notes:

Effect Size vs Power/Sample Size: Effect size and the sample size required to reach a given power have an inverse relationship.

A power analysis helps you determine the sample size needed to satisfy your power requirement.

The number of samples that you need according to your power will vary depending on the effect size.

For example, if you have a large effect size, then the number of samples needed in order to satisfy your power requirements will be less.

Like in our example above, if I took a power of .90 instead of 1, the minimum sample size would have been around 30. This is because the effect size of the two sample means was very close to 1.
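The relationship between effect size and required sample size can be sketched with the standard normal approximation for a two-sided, two-sample test. This is an illustrative approximation (the numbers are not from the tutorial's power calculation), using only Python's standard library:

```python
from statistics import NormalDist

def approx_n_per_group(effect_size, alpha=0.05, power=0.90):
    """Normal-approximation sample size per group for a two-sided,
    two-sample test: n ~ 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_power = z.inv_cdf(power)           # quantile for the desired power
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

# A larger effect size needs fewer samples to reach the same power.
print(round(approx_n_per_group(1.0)))  # ~21 per group
print(round(approx_n_per_group(0.5)))  # ~84 per group
```

Libraries like statsmodels offer more exact calculations based on the t-distribution, which is why their answers come out slightly higher than this normal approximation.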

P-Value vs Effect Size: When assessing the outcome of a hypothesis test, the p-value is a useful tool.

The effect size may be predictive of the outcome of a p-value.

If the effect size is very high, like in our test, it is logical to think that there is a significant statistical difference between the means of the two sample groups.

If there is a statistically significant difference, then the p-value will be very close to zero, meaning that you would reject the Null Hypothesis.
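For reference, the effect size being discussed here is Cohen's d: the difference in sample means divided by the pooled standard deviation. A minimal sketch with hypothetical player ratings (made-up numbers, not drawn from the FIFA data set):

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Hypothetical overall ratings for two groups of players.
spain = [78, 74, 81, 76, 79, 73, 80, 75]
england = [74, 71, 78, 72, 76, 70, 77, 73]
print(round(cohens_d(spain, england), 2))  # ~1.07, a large effect
```

An effect size near 1 like this is exactly the situation where a p-value above alpha should make you suspicious of your test setup.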

So you should keep in mind: if you run a test with an effect size between .6 and 1.0 and the p-value comes out higher than your alpha, you may want to revisit your test and make sure no mistakes were made.

(I know this because I made this exact mistake while I was writing this tutorial.)

Bootstrapping: Bootstrapping isn't always necessary; in fact, it is a method that has only been widely adopted because computers can resample data almost instantaneously.

If you have plenty of data that is normally distributed then you don’t have to bootstrap.

Bootstrapping is a tool used when data is limited or may not be normally distributed.
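A minimal sketch of bootstrapping the mean, using only Python's standard library (the ratings list is a hypothetical small sample, not from the FIFA data set):

```python
import random
from statistics import mean

def bootstrap_means(sample, n_boot=10_000, seed=0):
    """Resample the data with replacement n_boot times and
    collect the mean of each resample."""
    rng = random.Random(seed)
    k = len(sample)
    return [mean(rng.choices(sample, k=k)) for _ in range(n_boot)]

# Hypothetical small sample of player overall ratings.
ratings = [72, 68, 75, 80, 66, 74, 71, 77]
boot = bootstrap_means(ratings, n_boot=1000)
# The bootstrap distribution is roughly centered on the sample mean.
print(round(mean(boot), 1), round(mean(ratings), 1))
```

The spread of the resulting bootstrap distribution is what lets you estimate a confidence interval for the mean even when the original sample is small.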

I hope that this tutorial helped.

I covered a couple of strategies that are not strictly necessary for a hypothesis test, but I thought that covering the more complex ideas would be helpful.

If you are curious about the code and want to check out my repo, it's linked here! Connect with me on LinkedIn! Also, check out a recent project I was involved in. We used soccer data to draw some conclusions on home-field advantage, formation optimization, and team attributes by league!

References:

P-Value Definition: https://www.statsdirect.com/help/basics/p_values.htm
Z-Value Definition: https://www.statisticshowto.datasciencecentral.com/probability-and-statistics/z-score/
Power: https://www.statisticsteacher.org/2017/09/15/what-is-power/
Type Errors: https://en.wikipedia.org/wiki/Type_I_and_type_II_errors