Optimization with SciPy and application ideas to machine learning

For demonstration purposes only, we severely limit the number of iterations to 3.

result = optimize.minimize(scalar1, x0=-20, method='SLSQP',
                           constraints=cons, options={'maxiter': 3})

The result is, as expected, not favorable.

fun: -0.4155114388552631
jac: array([-0.46860977])
message: 'Iteration limit exceeded'
nfev: 12
nit: 4
njev: 4
status: 9
success: False
x: array([-2.10190632])

Note that the optimization came close to the global minimum, but did not quite reach it, simply because it was not allowed to iterate a sufficient number of times.
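If you are jumping in at this section: scalar1 and cons were defined earlier in the article. The following self-contained sketch is consistent with the numbers printed above, but the constraint here is an illustrative stand-in rather than the original one, so your exact output may differ slightly.

```python
import numpy as np
from scipy import optimize

def scalar1(x):
    # Single-variable objective reconstructed from the numbers above:
    # a sine wave damped by a Gaussian envelope centred at 0.6.
    x = np.atleast_1d(x)[0]
    return np.sin(x) * np.exp(-0.1 * (x - 0.6) ** 2)

# Illustrative inequality constraint (SLSQP enforces fun(x) >= 0),
# i.e. the solution must satisfy x >= -3. This is an assumption,
# not the article's original constraint.
cons = ({'type': 'ineq', 'fun': lambda x: x + 3},)

result = optimize.minimize(scalar1, x0=-20, method='SLSQP',
                           constraints=cons, options={'maxiter': 3})
print(result)
```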

Why is this important to consider? Because each iteration equates to a computational (and sometimes an actual physical) cost.

This is a business aspect of the optimization process.

In real life, we may not be able to run the optimization for a long period of time if the individual function evaluation costs significant resources.

This kind of scenario arises when the optimization involves not a simple mathematical evaluation but a complex, time-consuming simulation, or cost- and labor-intensive experimentation.

When every evaluation costs money or resources, not only the choice of the algorithm but also the finer details become important to consider.
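One practical habit in such settings is to instrument the objective so you can see exactly how many calls a given iteration budget translates into. Here is a minimal sketch; the counting wrapper is my own illustration, not part of the original code.

```python
import numpy as np
from scipy import optimize

n_calls = 0  # running count of objective evaluations

def expensive_objective(x):
    # Stand-in for a costly simulation or experiment.
    global n_calls
    n_calls += 1
    return np.sin(x[0]) * np.exp(-0.1 * (x[0] - 0.6) ** 2)

result = optimize.minimize(expensive_objective, x0=[-20.0],
                           method='SLSQP', options={'maxiter': 3})
print(f"{result.nit} iterations, {n_calls} objective calls")
```

Note that SLSQP estimates gradients numerically by default, so each iteration typically costs several function calls, which is exactly what hurts when every call is expensive.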

Going more complex: a multivariate function

Although we considered all the essential aspects of solving a standard optimization problem in the preceding sections, the example consisted of a simple single-variable, analytical function.

But it does not have to be the case! SciPy methods work with any Python function, not necessarily a closed-form, single-dimensional mathematical function.

Let us show an example with a multivariate function.

Maximization of a Gaussian mixture

Often in a chemical or manufacturing process, multiple stochastic sub-processes are combined to give rise to a Gaussian mixture.

It may be desirable to maximize the final resultant process output by choosing the optimum operating points in the individual sub-processes (within certain process limits).

The trick is to use a vector as the input to the objective function and to make sure the objective function still returns a single scalar value.

Also, because the optimization problem here is about maximization of the objective function, we need to change the sign and return the negative of the sum of the Gaussian functions as the result of the objective function.
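Here is a minimal sketch of that idea. The three Gaussian centers and widths below are assumptions, chosen only to be consistent with the optimum printed further down; they are not necessarily the exact values used in the original code.

```python
import numpy as np
from scipy import optimize

def gaussian(x, mu, sigma):
    # A single (unnormalized) Gaussian bump centred at mu.
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def objective(x):
    # x is a vector: one operating point per sub-process.
    # Return the NEGATIVE of the summed Gaussian outputs so that
    # minimizing this function maximizes the mixture output.
    return -(gaussian(x[0], -1.0, 0.5)
             + gaussian(x[1], 0.3, 0.7)
             + gaussian(x[2], 2.1, 1.0))

result = optimize.minimize(objective, x0=np.zeros(3), method='SLSQP')
print(result['x'])
```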

As before, result['x'] stores the optimum settings of the individual processes as a vector. The only difference between optimizing a single-variable and a multivariate function is that we get back a vector instead of a scalar.

x: array([-1.00017852, 0.29992313, 2.10102748])

Bounded inputs

Needless to say, we can change the bounds here to reflect practical constraints.

For example, if the sub-process settings can occupy only a certain range of values (some must be positive, some must be negative, etc.), then the solution will be slightly different; it may not be the global optimum.
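In SciPy, this is just a matter of passing a bounds argument: a sequence of (min, max) pairs, one per variable. The specific limits below are assumptions for illustration; the upper bound of zero on the third variable matches the solution shown next.

```python
# Continuing from the sketch above. One (min, max) pair per
# sub-process setting; the upper bound of 0.0 on the third variable
# is what forces it to zero in the solution below.
bnds = ((-2.0, 2.0), (-2.0, 2.0), (-2.0, 0.0))

result = optimize.minimize(objective, x0=np.zeros(3),
                           method='SLSQP', bounds=bnds)
print(result['x'])
```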

Here, the solution is as follows.

x: array([-1.00000644e+00, 3.00115191e-01, -8.03574200e-17])

This dictates pushing the 3rd sub-process setting to the maximum possible value (zero) while adjusting the other two suitably.

The constraints for multivariate optimization are handled in a similar way as shown for the single-variable case.
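For example, an inequality constraint tying the sub-process settings together can be passed with the same dictionary syntax used earlier. The specific constraint here is, again, only illustrative.

```python
# Continuing from the sketches above. An illustrative inequality
# constraint (SLSQP enforces fun(x) >= 0): the three settings
# must sum to at least zero.
cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2]},)

result = optimize.minimize(objective, x0=np.zeros(3), method='SLSQP',
                           bounds=bnds, constraints=cons)
print(result['x'])
```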

More detailed documentation and examples

SLSQP is not the only algorithm in the SciPy ecosystem capable of handling complex optimization tasks. For more detailed documentation and usage examples, see the following links:

Optimization and Root Finding (scipy.optimize)
SciPy optimization (TutorialsPoint)
Practical Optimization Routines

Extending the process to the machine learning domain

To be honest, there is no limit to the level of complexity to which you can push this approach, as long as you can define a proper objective function that generates a scalar value, along with suitable bounds and constraints matching the actual problem scenario.

Error minimization in machine learning

The crux of almost all machine learning (ML) algorithms is to define a suitable error function (or loss metric), iterate over the data, and find the optimum settings of the parameters of the ML model that minimize the total error.

Often, the error is a measure of some kind of distance between the model prediction and the ground truth (given data).

Therefore, it is perfectly possible to use SciPy optimization routines to solve an ML problem.

This gives you a deep insight into the actual working of the algorithm as you have to construct the loss metric yourself and not depend on some ready-made, out-of-the-box function.
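As a concrete, hypothetical illustration, here is a tiny linear regression fitted by handing a hand-written mean-squared-error loss directly to scipy.optimize.minimize; all the data below are synthetic.

```python
import numpy as np
from scipy import optimize

# Synthetic ground truth: y = 3x - 1 plus noise.
rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=100)
y = 3.0 * X - 1.0 + rng.normal(0, 0.3, size=100)

def mse_loss(params):
    # params = [slope, intercept]; the loss is the mean squared
    # distance between the model prediction and the given data.
    w, b = params
    return np.mean((w * X + b - y) ** 2)

result = optimize.minimize(mse_loss, x0=[0.0, 0.0], method='SLSQP')
print(result.x)  # should land near [3.0, -1.0]
```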

Hyperparameter optimization in ML

Tuning parameters and hyperparameters of ML models is often a cumbersome and error-prone task.

Although there are grid-search methods available for searching the best parametric combination, some degree of automation can be easily introduced by running an optimization loop over the parameter space.

The objective function, in this case, has to be some metric of the quality of the ML model's predictions (for example, mean-squared error, a complexity measure, or F1 score).
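Here is a hypothetical sketch of that idea: tuning the regularization strength of a ridge regression by minimizing the validation error with scipy.optimize.minimize_scalar. The data, the split, and the search range are all illustrative assumptions.

```python
import numpy as np
from scipy import optimize

# Synthetic train/validation split for a 5-feature linear problem.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
X_train = rng.normal(size=(60, 5))
y_train = X_train @ w_true + rng.normal(0, 0.5, size=60)
X_val = rng.normal(size=(40, 5))
y_val = X_val @ w_true + rng.normal(0, 0.5, size=40)

def val_error(log_alpha):
    # Fit ridge regression in closed form for one regularization
    # strength, then score it on the held-out validation set.
    alpha = 10.0 ** log_alpha
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(5),
                        X_train.T @ y_train)
    return np.mean((X_val @ w - y_val) ** 2)

best = optimize.minimize_scalar(val_error, bounds=(-4, 2),
                                method='bounded')
print(f"best alpha = {10.0 ** best.x:.4g}")
```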

Using machine learning as the function evaluator

In many situations, you cannot have a nice, closed-form analytical function to use as the objective of an optimization problem.

But who cares about being nice when we have deep learning? Imagine the power of an optimization model that is fed (for its objective function as well as for its constraints) by a multitude of models, different in fundamental nature but standardized with respect to their output format so that they can act in unison.

You are free to choose an analytical function, a deep learning network (perhaps as a regression model), or even a complicated simulation model, and throw them all together into the pit of optimization.

The possibilities are endless!

If you have any questions or ideas to share, please contact the author at tirthajyoti[AT]gmail.com.

Also, you can check the author’s GitHub repositories for other fun code snippets in Python, R, or MATLAB and machine learning resources.

If you are, like me, passionate about machine learning/data science, please feel free to add me on LinkedIn or follow me on Twitter.
