Small questions about “bigger” issues in machine learning

Leo Breiman’s “Statistical Modeling: The Two Cultures” highlights an important distinction between two very different approaches to statistical modeling:

Stochastic techniques: a distribution is estimated from successive draws, so the model assumes some underlying distribution.

Algorithmic techniques: a function y = f(x) is fitted to the observed data.

Logistic regression, along with linear regression and its various flavors (lasso, ridge, etc.), is among the common (supervised) stochastic techniques that are not typically top of mind when people discuss “machine learning”.

Decision trees, random forests, AdaBoost, gradient-boosted trees, SVMs, etc. are common algorithmic techniques that do not assume any particular underlying data distribution.

These techniques are the most common examples of what people have in mind when they talk about “machine learning”.

It’s not surprising that these latter techniques tend to be among the most popular on Kaggle and similar platforms, in part because there is less to learn about them (in particular, no distributional assumptions to check) before they can be applied well.
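As a minimal sketch of the contrast (using scikit-learn; the dataset and model settings are illustrative assumptions, not from the original discussion), both cultures can be pointed at the same data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data, just for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Stochastic" culture: logistic regression posits a functional form for P(y | x).
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Algorithmic" culture: a random forest just fits y = f(x), with no distributional story.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", logit.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```

The interesting part is not the accuracy numbers but the modeling stance: the first model assumes a distributional form, the second only a fitted function.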

Don’t blame your linear regression!!

Something you probably don’t hear about in machine learning and data science courses, but that you meet a few times in control theory and signal processing courses: “The extended family of linear regression: the generalized, the weighted, and the ordinary.”

Question 2: When should you use GLS and WLS?

Answer 2: The generalized least squares (GLS) estimator of the coefficients of a linear regression is a generalization of the ordinary least squares (OLS) estimator.

It is used in situations where the OLS estimator is not BLUE (best linear unbiased estimator) because one of the main assumptions of the Gauss-Markov theorem, namely homoskedasticity and the absence of serial correlation, is violated.

In such situations, provided that the other assumptions of the Gauss-Markov theorem are satisfied, the GLS estimator is BLUE.
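Concretely, if X is the design matrix, y the response vector, and Ω the covariance matrix of the error terms, the GLS estimator takes the standard textbook form

```latex
\hat{\beta}_{\mathrm{GLS}} = \left( X^{\top} \Omega^{-1} X \right)^{-1} X^{\top} \Omega^{-1} y
```

with OLS recovered as the special case \Omega = \sigma^{2} I.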

When the error terms are uncorrelated (so that their covariance matrix is diagonal), the GLS estimator is called the weighted least squares (WLS) estimator.
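As a minimal sketch with statsmodels (the synthetic data and noise model below are assumptions made up for illustration), WLS amounts to weighting each observation by the inverse of its error variance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data whose noise standard deviation grows with x (heteroskedasticity).
n = 200
x = rng.uniform(1.0, 10.0, size=n)
sigma = 0.5 * x
y = 2.0 + 3.0 * x + rng.normal(scale=sigma)

X = sm.add_constant(x)  # add the intercept column

# OLS is still unbiased here, but its standard errors are unreliable.
ols = sm.OLS(y, X).fit()

# WLS weights each observation by 1 / variance of its error term.
wls = sm.WLS(y, X, weights=1.0 / sigma**2).fit()

print("OLS coefficients:", ols.params, "std errors:", ols.bse)
print("WLS coefficients:", wls.params, "std errors:", wls.bse)
```

Both fits estimate the same coefficients without bias; the difference shows up in the more efficient WLS estimates and their honest standard errors.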

P.S. They are good in practice, not only in theory.

Machine learning and control theory

Question 3: Is machine learning replacing control theory?

Answer 3: No! A confession of a “converted engineer”: control theory still has one major advantage over machine learning: proofs! Control engineering is a rigorous discipline and has nice theories of stability, robustness, and optimality.

It is true that some CS researchers have applied ML techniques to control problems that were already solved by classical control algorithms.

But one cannot prove any stability or robustness properties of the closed-loop system when there is an ML algorithm with hundreds of parameters in the loop.

In general, ML can be handy when there is no model for the problem we are trying to solve, e.g. perception problems. For control applications, we know how cars drive, how helicopters fly, and how robotic arms move; and industry prefers a PID control algorithm with a handful of parameters over a deep learning algorithm with billions of parameters.
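To make the “handful of parameters” point concrete, here is a minimal discrete-time PID sketch (the class name, gains, and time step are illustrative assumptions, not a tuned controller):

```python
class PID:
    """Minimal discrete-time PID controller: the whole design is three gains."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running integral of the error
        self.prev_error = 0.0    # previous error, for the derivative term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a controller with three tunable parameters, sampled at 100 Hz.
controller = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.01)
u = controller.update(setpoint=1.0, measurement=0.8)
```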

One important task, usually ignored by CS academics, is the verification and validation of control algorithms.

When there is a PID control algorithm in the loop, it is easy to check if the closed-loop system is stable.
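As a minimal sketch of such a check (the second-order plant and PID gains below are hypothetical), stability of the closed loop reduces to computing the roots of its characteristic polynomial:

```python
import numpy as np

# Hypothetical plant G(s) = 1 / (s^2 + 2 s + 1).
plant_num = np.array([1.0])
plant_den = np.array([1.0, 2.0, 1.0])

# PID controller C(s) = (kd s^2 + kp s + ki) / s with assumed gains.
kp, ki, kd = 2.0, 1.0, 0.5
pid_num = np.array([kd, kp, ki])
pid_den = np.array([1.0, 0.0])

# Unity-feedback characteristic polynomial: den(C) den(G) + num(C) num(G).
char_poly = np.polyadd(np.polymul(pid_den, plant_den),
                       np.polymul(pid_num, plant_num))

# The closed loop is stable iff every root lies strictly in the left half-plane.
poles = np.roots(char_poly)
print("closed-loop poles:", poles)
print("stable:", bool(np.all(poles.real < 0)))
```

For linear time-invariant loops like this one, the pole locations fully determine stability.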

However, even such a basic property cannot be checked for any known ML algorithm.

