A Gentle Introduction to Information Entropy

Because they are two sides of the same coin.

[…] Information theory and machine learning still belong together.

Brains are the ultimate compression and communication systems.

And the state-of-the-art algorithms for both data compression and error-correcting codes use the same tools as machine learning.

— Page v, Information Theory, Inference, and Learning Algorithms, 2003.

Quantifying information is the foundation of the field of information theory.

The intuition behind quantifying information is the idea of measuring how much surprise there is in an event.

Those events that are rare (low probability) are more surprising and therefore carry more information than those events that are common (high probability).

The basic intuition behind information theory is that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred.

— Page 73, Deep Learning, 2016.

Rare events are more uncertain or more surprising and require more information to represent them than common events.

We can calculate the amount of information there is in an event using the probability of the event.

This is called “Shannon information,” “self-information,” or simply the “information,” and can be calculated for a discrete event x as follows:

information(x) = -log( p(x) )

Where log() is the base-2 logarithm and p(x) is the probability of the event x.

The choice of the base-2 logarithm means that the units of the information measure is in bits (binary digits).

This can be directly interpreted in the information processing sense as the number of bits required to represent the event.

The calculation of information is often written as h(); for example:

h(x) = -log( p(x) )

The negative sign ensures that the result is always positive or zero.

Information will be zero when the probability of an event is 1.0, or a certainty, e.g. there is no surprise.

Let’s make this concrete with some examples.

Consider a flip of a single fair coin.

The probability of heads (and tails) is 0.5.

We can calculate the information for flipping a head in Python using the log2() function.
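A minimal sketch of that calculation is below (the probability value comes from the example above; the print formatting is just illustrative):

```python
# calculate the information for a coin flip
from math import log2

# probability of the event: heads on a fair coin
p = 0.5
# calculate the information for the event
h = -log2(p)
# print the result
print('p(x)=%.0f%%, information: %.3f bits' % (p * 100, h))
```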

Running the example prints the probability of the event as 50% and the information content for the event as 1 bit.

If the same coin was flipped n times, then the information for this sequence of flips would be n bits.
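For example, the probability of any one specific sequence of ten fair flips is 0.5^10, so the information for that sequence is -log2(0.5^10) = 10 bits.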

If the coin was not fair and the probability of a head was instead 10% (0.1), then the event would be more rare and would require more than 3 bits of information.
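A quick sketch of the same calculation for the unfair coin (only the 10% probability is taken from the text above):

```python
# calculate the information for flipping a head with an unfair coin
from math import log2

# probability of heads for the unfair coin
p = 0.1
# calculate the information for the event
h = -log2(p)
# print the result
print('p(x)=%.3f, information: %.3f bits' % (p, h))
```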

We can also explore the information in a single roll of a fair six-sided die, e.g. the information in rolling a 6.

We know the probability of rolling any number is 1/6, which is a smaller number than 1/2 for a coin flip, therefore we would expect more surprise or a larger amount of information.
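A small sketch of that calculation, again using math.log2():

```python
# calculate the information for rolling a 6 on a fair die
from math import log2

# probability of rolling any single face, including a 6
p = 1.0 / 6.0
# calculate the information for the event
h = -log2(p)
# print the result
print('p(x)=%.3f, information: %.3f bits' % (p, h))
```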

Running the example, we can see that our intuition is correct and that indeed, there is more than 2.5 bits of information in a single roll of a fair die.

Other logarithms can be used instead of the base-2.

For example, it is also common to use the natural logarithm that uses base-e (Euler’s number) in calculating the information, in which case the units are referred to as “nats.”
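As a tiny sketch of the difference in units, here is the same fair coin flip measured with the natural logarithm (math.log is base-e by default):

```python
# calculate the information for a coin flip in nats
from math import log

# probability of the event
p = 0.5
# the natural logarithm gives information in nats
h = -log(p)
# print the result
print('p(x)=%.3f, information: %.3f nats' % (p, h))
```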

We can also quantify how much information there is in a random variable.

For example, if we wanted to calculate the information for a random variable X with probability distribution p, this might be written as a function H(); for example:

H(X)

In effect, calculating the information for a random variable is the same as calculating the information for the probability distribution of the events for the random variable.

Calculating the information for a random variable is called “information entropy,” “Shannon entropy,” or simply “entropy.”

It is related to the idea of entropy from physics by analogy, in that both are concerned with uncertainty.

The intuition for entropy is that it is the average number of bits required to represent or transmit an event drawn from the probability distribution for the random variable.

… the Shannon entropy of a distribution is the expected amount of information in an event drawn from that distribution.

It gives a lower bound on the number of bits […] needed on average to encode symbols drawn from a distribution P.

— Page 74, Deep Learning, 2016.

Entropy can be calculated for a random variable X with K discrete states as follows:

H(X) = -sum(each k in K, p(k) * log(p(k)))

That is the negative of the sum of the probability of each event multiplied by the log of the probability of each event.

Like information, the log() function uses base-2 and the units are bits.

A natural logarithm can be used instead and the units will be nats.

The lowest entropy is calculated for a random variable that has a single event with a probability of 1.0, a certainty.

The largest entropy for a random variable will be if all events are equally likely.
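To make the two extremes concrete, here is a minimal sketch that uses the convention that events with zero probability contribute nothing to the sum (the four-outcome distributions are just illustrative):

```python
# compare the entropy of a certain outcome with a uniform distribution
from math import log2

def entropy_bits(probs):
    # sum contributions only from events with non-zero probability
    return sum(-p * log2(p) for p in probs if p > 0)

# a single certain event: the lowest possible entropy
print('certain: %.3f bits' % entropy_bits([1.0, 0.0, 0.0, 0.0]))
# four equally likely events: the largest possible entropy for four outcomes
print('uniform: %.3f bits' % entropy_bits([0.25, 0.25, 0.25, 0.25]))
```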

We can consider a roll of a fair die and calculate the entropy for the variable.

Each outcome has the same probability of 1/6, therefore it is a uniform probability distribution.

We therefore would expect the average information to be the same information for a single event calculated in the previous section.
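A minimal sketch of that calculation, summing p(k) * log2(p(k)) over all six outcomes:

```python
# calculate the entropy for a roll of a fair die
from math import log2

# the number of events
n = 6
# the probability of each event
p = 1.0 / n
# calculate the entropy by summing over all events
entropy = -sum([p * log2(p) for _ in range(n)])
# print the result
print('entropy: %.3f bits' % entropy)
```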

Running the example calculates the entropy as more than 2.5 bits, which is the same as the information for a single outcome.

This makes sense, as the average information is the same as the lower bound on information as all outcomes are equally likely.

If we know the probability for each event, we can use the entropy() SciPy function to calculate the entropy directly.
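For example, a sketch using scipy.stats.entropy() with base=2 so the result is reported in bits:

```python
# calculate the entropy for a roll of a fair die with SciPy
from scipy.stats import entropy

# the probability of each of the six events
p = [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]
# calculate the entropy using a base-2 logarithm
e = entropy(p, base=2)
# print the result
print('entropy: %.3f bits' % e)
```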

Running the example reports the same result that we calculated manually.

Calculating the entropy for a random variable provides the basis for other measures such as mutual information (information gain).

It also provides the basis for calculating the difference between two probability distributions with cross-entropy and the KL-divergence.


In this post, you discovered a gentle introduction to information entropy.

Specifically, you learned how information quantifies the surprise of a single event in bits and how entropy measures the average information across a random variable's probability distribution.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.
