Is Bias in Machine Learning all Bad?

Tom M. Mitchell published a paper in 1980, The Need for Biases in Learning Generalizations, which states: "Learning involves the ability to generalize from past experience in order to deal with new situations that are 'related to' this experience."

The inductive leap needed to deal with new situations seems to be possible only under certain biases for choosing one generalization of the situation over another.

This paper defines precisely the notion of bias in generalization problems, then shows that biases are necessary for the inductive leap.
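As a minimal illustration of this point (my own toy sketch, not an example from Mitchell's paper), compare two learners on the same training data: one with no inductive bias, which can only memorize, and one biased toward linear functions, which can extrapolate to inputs it has never seen.

```python
# Toy illustration of inductive bias: the same training data, two learners.
train = [(1, 2), (2, 4), (3, 6)]  # (x, y) pairs drawn from y = 2x

# Learner 1: no bias -- pure memorization of training pairs.
rote = dict(train)

def rote_predict(x):
    # Returns None for any x not seen during training: no inductive leap.
    return rote.get(x)

# Learner 2: restriction bias -- assume y = w * x, fit w by least squares.
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

def linear_predict(x):
    return w * x

print(rote_predict(5))    # None: without a bias there is no basis to predict
print(linear_predict(5))  # 10.0: the linear bias licenses a prediction
```

The unbiased learner is perfectly accurate on the training set yet says nothing about new situations; only by committing to a hypothesis space (here, linear functions through the origin) does generalization become possible.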

Over our years of building predictive models, we have been taught that bias harms the model.

Bias control needs to be in the hands of someone who can differentiate between the right kind and wrong kind of bias.

His paper argues that certain biases help us create an appropriate model for the problem at hand. While the paper dates from 1980, its logic still prompts us to think about how to distinguish the necessary biases in our models from the unnecessary ones.

Certainly we have to eliminate biases that hinder the model, but as we practice and dig deeper into the data, we may find that some biases actually move the model closer to what we want.

In a follow-up post, I will talk about the types of biases that one should avoid in a machine learning model.
