Fat tails and the t test

The t statistic is

t = (ȳ − μ0) / (s / √n)

where ȳ is the sample average, μ0 is the mean under the null hypothesis (μ0 = 0 in our example), s is the sample standard deviation, and n is the sample size.
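In code, with ȳ, μ0, s, and n as above, the statistic looks like the following. This is a minimal sketch in Python (NumPy and SciPy assumed; the sample values and the seed are invented for illustration, not from the post). SciPy's `ttest_1samp` computes the same statistic and also returns the p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)            # arbitrary seed for reproducibility
y = rng.normal(loc=0.3, scale=1.0, size=25)    # hypothetical sample

mu0 = 0.0              # mean under the null hypothesis
ybar = y.mean()        # sample average
s = y.std(ddof=1)      # sample standard deviation
n = len(y)

# The t statistic computed directly from its definition
t_stat = (ybar - mu0) / (s / np.sqrt(n))

# SciPy's one-sample t test computes the same statistic plus the p-value
t_scipy, p_value = stats.ttest_1samp(y, popmean=mu0)
print(t_stat, t_scipy, p_value)
```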

As distributions become fatter in the tails, the sample standard deviation increases.

This means the denominator in the t statistic gets larger and so the t statistic gets smaller in absolute value.

The smaller the t statistic in absolute value, the greater the probability that the absolute value of a t random variable exceeds it, and so the larger the p-value.
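As a rough check of that chain of effects, here is a minimal simulation sketch in Python (NumPy and SciPy assumed; the sample size, degrees of freedom, and seed are arbitrary choices, not the post's). Data are drawn from t distributions with decreasing degrees of freedom, so the tails get fatter, and the average sample standard deviation and the rejection rate at the 5% level are reported.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)    # arbitrary seed
n, reps = 20, 10_000
mu0 = 0.0

# Fatter tails should inflate s, shrink |t|, and push p-values up,
# so rejections at the 5% level should become less frequent.
for df in (30, 5, 3):                  # smaller df = fatter tails
    s_mean = 0.0
    rejections = 0
    for _ in range(reps):
        y = rng.standard_t(df, size=n)     # symmetric with mean 0, so the null is true
        s_mean += y.std(ddof=1) / reps
        if stats.ttest_1samp(y, popmean=mu0).pvalue < 0.05:
            rejections += 1
    print(f"df={df:>2}  mean s={s_mean:.3f}  rejection rate={rejections/reps:.3f}")
```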

t statistic, t distribution, t test

There are a lot of t's floating around in this post.

I’ll finish by clarifying what the various t things are.

The t statistic is the thing we compute from our data, given by the expression above.

It is called a t statistic because if the hypotheses of the test are satisfied, this statistic has a t distribution with n-1 degrees of freedom.
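One way to see this empirically is a small sketch like the one below (assuming normally distributed data under the null; the sample size, number of replications, and seed are arbitrary choices): the empirical quantiles of simulated t statistics should line up with the quantiles of a t distribution with n − 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)    # arbitrary seed
n, reps = 10, 50_000

# Under the null (normal data with mean mu0 = 0), the t statistic follows t(n-1).
t_stats = []
for _ in range(reps):
    y = rng.normal(loc=0.0, scale=1.0, size=n)
    t_stats.append((y.mean() - 0.0) / (y.std(ddof=1) / np.sqrt(n)))

# Compare a few empirical quantiles with the t(n-1) quantiles.
for q in (0.9, 0.95, 0.99):
    print(q, np.quantile(t_stats, q), stats.t.ppf(q, df=n - 1))
```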

The t test is a hypothesis test based on the t statistic and its distribution.

So the t statistic, the t distribution, and the t test are all closely related.

The t family of probability distributions is a convenient example of a family of distributions whose tails get heavier or lighter depending on a parameter.

That’s why in the simulation we drew samples from a t distribution.

We didn’t need to, but it was convenient.

We would get similar results if we sampled from some other distribution whose tails get thicker, and so variance increases, as we vary some parameter.
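For the t family in particular, the variance is ν/(ν − 2) for ν > 2, so it grows without bound as ν decreases toward 2 and the tails fatten. A quick check with SciPy (the particular degrees of freedom below are arbitrary):

```python
from scipy import stats

# Variance of a t distribution with nu degrees of freedom is nu / (nu - 2) for nu > 2,
# so it blows up as nu approaches 2 and the tails get heavier.
for nu in (30, 10, 5, 3, 2.5):
    print(nu, stats.t(df=nu).var())
```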

Related posts

Robustness of the two-sample t test
Diagram of probability distributions
