The Bayes Optimal Classifier is a probabilistic model that makes the most probable prediction for a new example.
It is described using Bayes Theorem, which provides a principled way of calculating a conditional probability.
It is also closely related to Maximum a Posteriori (MAP), a probabilistic framework that finds the most probable hypothesis given a training dataset.
In practice, the Bayes Optimal Classifier is computationally expensive, if not intractable to calculate, and instead, simplifications such as the Gibbs algorithm and Naive Bayes can be used to approximate the outcome.
In this post, you will discover the Bayes Optimal Classifier for making the most accurate predictions for new instances of data.
After reading this post, you will know:
- What the Bayes Optimal Classifier is and why it makes the most probable prediction for a new example.
- How it relates to Bayes Theorem and the Maximum a Posteriori (MAP) framework.
- Why simplifications such as the Gibbs algorithm and Naive Bayes are used to approximate it in practice.
Let's get started.
A Gentle Introduction to the Bayes Optimal Classifier. Photo by Nate Loper, some rights reserved.
This tutorial is divided into three parts; they are: Bayes Theorem, Maximum a Posteriori (MAP), and the Bayes Optimal Classifier.
Recall that the Bayes theorem provides a principled way of calculating a conditional probability.
It involves calculating the conditional probability of one outcome given another outcome, using the inverse of this relationship, stated as follows:
P(A | B) = (P(B | A) * P(A)) / P(B)
The quantity that we are calculating is typically referred to as the posterior probability of A given B, and P(A) is referred to as the prior probability of A.
The normalizing constant of P(B) can be removed, and the posterior can be shown to be proportional to the probability of B given A multiplied by the prior.
Or, simply:
P(A | B) is proportional to P(B | A) * P(A)
This is a helpful simplification, as we are not interested in estimating a probability, but instead in optimizing a quantity.
A proportional quantity is good enough for this purpose.
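As a quick illustration, the sketch below uses made-up prior and likelihood values (they are not from the post) to show that maximizing the unnormalized quantity P(B | A) * P(A) selects the same outcome as maximizing the fully normalized posterior.

```python
# A minimal sketch with made-up numbers: the unnormalized quantity
# P(B | A) * P(A) is enough to pick the most probable outcome A.

# Hypothetical prior probabilities P(A) for two candidate outcomes.
prior = {"A1": 0.7, "A2": 0.3}

# Hypothetical likelihoods P(B | A) of the observed evidence B under each outcome.
likelihood = {"A1": 0.2, "A2": 0.9}

# Unnormalized posterior, proportional to P(A | B).
unnormalized = {a: likelihood[a] * prior[a] for a in prior}

# Full posterior: divide by the normalizing constant P(B).
p_b = sum(unnormalized.values())
posterior = {a: unnormalized[a] / p_b for a in unnormalized}

# Both quantities select the same outcome, so the constant can be dropped.
print(max(unnormalized, key=unnormalized.get))  # A2
print(max(posterior, key=posterior.get))        # A2
```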
For more on the topic of Bayes Theorem, see the related post on this site.
Now that we are up to speed on Bayes Theorem, let's also take a look at the Maximum a Posteriori framework.
Machine learning involves finding a model (hypothesis) that best explains the training data.
There are two probabilistic frameworks that underlie many different machine learning algorithms.
They are: Maximum Likelihood Estimation (MLE) and Maximum a Posteriori (MAP) estimation.
The objective of both of these frameworks in the context of machine learning is to locate the hypothesis that is most probable given the training dataset.
Specifically, they answer the question: What is the most probable hypothesis given the training data?
Both approaches frame the problem of fitting a model as optimization and involve searching for a distribution, and a set of parameters for the distribution, that best describes the observed data.
MLE is a frequentist approach, and MAP provides a Bayesian alternative.
A popular replacement for maximizing the likelihood is maximizing the Bayesian posterior probability density of the parameters instead.
— Page 306, Information Theory, Inference and Learning Algorithms, 2003.
Given the simplification of Bayes Theorem to a proportional quantity, we can use it to estimate the hypothesis and parameters (theta) that best explain our dataset (X), stated as:
P(theta | X) is proportional to P(X | theta) * P(theta)
Maximizing this quantity over a range of theta solves an optimization problem for estimating the central tendency of the posterior probability (e.g. the mode of the distribution).
As such, this technique is referred to as “maximum a posteriori estimation,” or MAP estimation for short, and sometimes simply “maximum posterior estimation.”
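As a minimal sketch of MAP estimation, the snippet below uses an assumed coin-flip dataset and a simple Beta(2, 2) prior (both illustrative assumptions, not from the post) and searches a grid of candidate theta values for the one that maximizes the likelihood times the prior.

```python
import numpy as np

# MAP sketch: estimate the bias of a coin (theta = probability of heads)
# by maximizing P(X | theta) * P(theta) over a grid of candidate values.

# Hypothetical data X: 7 heads out of 10 flips.
heads, flips = 7, 10

# Candidate parameter values theta.
thetas = np.linspace(0.01, 0.99, 99)

# Likelihood P(X | theta) for the observed flips (the binomial coefficient is
# omitted because it is constant in theta and does not change the argmax).
likelihood = thetas ** heads * (1 - thetas) ** (flips - heads)

# Prior P(theta): a Beta(2, 2) density, mildly favoring values near 0.5
# (normalizing constant again omitted).
prior = thetas * (1 - thetas)

# MAP estimate: the theta that maximizes the unnormalized posterior.
theta_map = thetas[np.argmax(likelihood * prior)]
print(theta_map)  # approximately 0.67, the mode of the Beta(9, 5) posterior
```

With a flat (uniform) prior, the same argmax reduces to the maximum likelihood estimate, which is the sense in which MAP provides a Bayesian alternative to MLE.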
For more on the topic of Maximum a Posteriori, see the related post.
Now that we are familiar with the MAP framework, we can take a closer look at the related concept of the Bayes optimal classifier.
The Bayes optimal classifier is a probabilistic model that makes the most probable prediction for a new example, given the training dataset.
This model is also referred to as the Bayes optimal learner, the Bayes classifier, Bayes optimal decision boundary, or the Bayes optimal discriminant function.
Specifically, the Bayes optimal classifier answers the question: What is the most probable classification of the new instance given the training data?
This is different from the MAP framework, which seeks the most probable hypothesis (model).
Instead, we are interested in making a specific prediction.
In general, the most probable classification of the new instance is obtained by combining the predictions of all hypotheses, weighted by their posterior probabilities.
— Page 175, Machine Learning, 1997.
The equation below demonstrates how to calculate the conditional probability for a new instance (vj) given the training data (D) and a space of hypotheses (H):
P(vj | D) = sum over all hi in H of P(vj | hi) * P(hi | D)
Here, vj is a new instance to be classified, H is the set of hypotheses for classifying the instance, hi is a given hypothesis, P(vj | hi) is the probability of classification vj given hypothesis hi, and P(hi | D) is the posterior probability of the hypothesis hi given the data D.
Selecting the outcome with the maximum probability is an example of a Bayes optimal classification.
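The sketch below makes the rule concrete using a small, made-up hypothesis space with assumed posterior probabilities (similar in spirit to the worked example in Machine Learning, 1997).

```python
# Bayes optimal classification over a tiny, illustrative hypothesis space.

# Assumed posterior probability of each hypothesis given the training data, P(h | D).
p_h_given_D = {"h1": 0.4, "h2": 0.3, "h3": 0.3}

# Each hypothesis's predicted probability for the candidate labels, P(v | h).
p_v_given_h = {
    "h1": {"positive": 1.0, "negative": 0.0},
    "h2": {"positive": 0.0, "negative": 1.0},
    "h3": {"positive": 0.0, "negative": 1.0},
}

# Combine all hypotheses, weighting each prediction by its posterior:
# P(v | D) = sum over h of P(v | h) * P(h | D)
p_v_given_D = {
    v: sum(p_v_given_h[h][v] * p_h_given_D[h] for h in p_h_given_D)
    for v in ["positive", "negative"]
}

# The Bayes optimal classification is the label with the largest combined probability.
print(p_v_given_D)                            # {'positive': 0.4, 'negative': 0.6}
print(max(p_v_given_D, key=p_v_given_D.get))  # negative
```

Note that the single most probable hypothesis (h1) predicts "positive", yet the Bayes optimal classification is "negative": combining all hypotheses, weighted by their posteriors, can disagree with simply using the MAP hypothesis.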
Any model that classifies examples using this equation is a Bayes optimal classifier and no other model can outperform this technique, on average.
Any system that classifies new instances according to [the equation] is called a Bayes optimal classifier, or Bayes optimal learner.
No other classification method using the same hypothesis space and same prior knowledge can outperform this method on average.
— Page 175, Machine Learning, 1997.
We have to let that sink in.
It is a big deal.
It means that any other algorithm that operates on the same data, the same set of hypotheses, and same prior probabilities cannot outperform this approach, on average.
Hence the name “optimal classifier.”
Although the classifier makes optimal predictions, it is not perfect given the uncertainty in the training data and incomplete coverage of the problem domain and hypothesis space.
As such, the model will make errors.
These errors are often referred to as Bayes errors.
The Bayes classifier produces the lowest possible test error rate, called the Bayes error rate.
[…] The Bayes error rate is analogous to the irreducible error …
— Page 38, An Introduction to Statistical Learning with Applications in R, 2017.
Because the Bayes classifier is optimal, the Bayes error is the minimum possible error that can be made.
Further, the model is often described in terms of classification, e.g. the Bayes Classifier.
Nevertheless, the principle applies just as well to regression: that is, predictive modeling problems where a numerical value is predicted instead of a class label.
It is a theoretical model, but it is held up as an ideal that we may wish to pursue.
In theory we would always like to predict qualitative responses using the Bayes classifier.
But for real data, we do not know the conditional distribution of Y given X, and so computing the Bayes classifier is impossible.
Therefore, the Bayes classifier serves as an unattainable gold standard against which to compare other methods.
— Page 39, An Introduction to Statistical Learning with Applications in R, 2017.
Because of the computational cost of this optimal strategy, we can instead work with direct simplifications of the approach.
Two of the most commonly used simplifications are to use a sampling algorithm for hypotheses, such as Gibbs sampling, or to use the simplifying assumptions of the Naive Bayes classifier.
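As an illustration of the second simplification, the snippet below fits a Naive Bayes model with scikit-learn on a synthetic dataset (the dataset and library choice are assumptions for illustration only).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Generate a synthetic binary classification dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Naive Bayes applies Bayes theorem under the simplifying assumption that the
# input features are conditionally independent given the class label.
model = GaussianNB()
model.fit(X_train, y_train)

# Report accuracy on the held-out test set.
print(model.score(X_test, y_test))
```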
For more on the topic of Naive Bayes, see the related post.
Nevertheless, many nonlinear machine learning algorithms are able to make predictions that are close approximations of the Bayes classifier in practice.
Despite the fact that it is a very simple approach, KNN can often produce classifiers that are surprisingly close to the optimal Bayes classifier.
— Page 39, An Introduction to Statistical Learning with Applications in R, 2017.
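To see this effect, the sketch below builds a synthetic problem where the true class-conditional distributions are known (two 1D Gaussians, an assumption made purely for illustration), so the Bayes optimal rule can be written down exactly and compared against KNN.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two classes with equal priors, drawn from 1D Gaussians with means -1 and +1
# and unit variance. For this setup the Bayes optimal rule is known exactly:
# predict class 1 whenever x > 0.
rng = np.random.default_rng(1)

def sample(n):
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=np.where(y == 1, 1.0, -1.0), scale=1.0).reshape(-1, 1)
    return x, y

X_train, y_train = sample(1000)
X_test, y_test = sample(5000)

# KNN fit on the training sample.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
knn_acc = knn.score(X_test, y_test)

# Bayes optimal predictions computed from the known generating distributions.
bayes_pred = (X_test.ravel() > 0.0).astype(int)
bayes_acc = (bayes_pred == y_test).mean()

print(f"KNN accuracy:   {knn_acc:.3f}")
print(f"Bayes accuracy: {bayes_acc:.3f}")  # roughly 0.84; one minus this is the Bayes error rate
```

On this problem, KNN's test accuracy typically lands close to the Bayes accuracy, illustrating the quote above.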
This section provides more resources on the topic if you are looking to go deeper.
Books cited in this post:
- Machine Learning, 1997.
- Information Theory, Inference and Learning Algorithms, 2003.
- An Introduction to Statistical Learning with Applications in R, 2017.
In this post, you discovered the Bayes Optimal Classifier for making the most accurate predictions for new instances of data.
Specifically, you learned:
- The Bayes optimal classifier combines the predictions of all hypotheses, weighted by their posterior probabilities, to make the most probable prediction for a new example.
- It is closely related to the Maximum a Posteriori (MAP) framework, which instead seeks the most probable hypothesis.
- Because it is computationally expensive, if not intractable, simplifications such as the Gibbs algorithm and Naive Bayes are used to approximate it in practice.
Do you have any questions? Ask your questions in the comments below and I will do my best to answer.