Machine Learning Compass

Maxim Volgin · Jan 20

Machine learning (ML) is not new at all; it has just become extremely popular among developers and their managers recently.

The main problem I had getting started with it was that the people who write meaningful things about machine learning usually have a very different background than mine and use tools, techniques, and concepts that have been around for decades but remain largely unknown to the rest of the world.

Many things they refer to are so obvious to the authors that they get no explanation whatsoever.

In this article I will try to fill the gaps in the machine learning narrative and address those obvious things that usually get overlooked.

ML on mobile platforms

One of the reasons ML became so insanely popular in the past few years is that it has found its way to mobile platforms.

There are at least two reasons for that: first, the hardware has become more than adequate for it; second, there are plenty of things to be done with ML on mobile: image and sound recognition, image and sound processing, biometrics, and a bunch of other functionality that is either time-critical, requires extra security, or is supposed to be available without an internet connection, where sending data to a server for analysis is out of the question.

Besides, there are problems that cannot be reliably solved by hand-written algorithms: think image recognition or spam filtering.

Perhaps surprisingly, there are also problems that can be solved better or more easily by ML than by a conventional algorithm.

On some platforms, ML solutions may also offer better performance and a smaller energy footprint, because they have hardware support (such as the GPU).

It is important to make a distinction between using models and training models.

The former task is usually rather straightforward and at most requires some preprocessing of the data.

The latter is often non-trivial and requires a lot of research, time, and processing power.

Training models is therefore typically not done on mobile platforms.

Trained models are in principle cross-platform, or can be converted to another platform.
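To illustrate how straightforward the “using” side can be, here is a minimal Python sketch with coremltools (prediction requires macOS); the model file name, input name, and image size are assumptions for illustration, not part of any real project:

```python
# A minimal sketch of *using* a trained model, assuming a hypothetical
# CoreML file called MyClassifier.mlmodel that takes a 224x224 image input.
import coremltools
from PIL import Image

# Load the trained model from disk (no training happens here).
model = coremltools.models.MLModel('MyClassifier.mlmodel')

# Preprocessing is often the only work needed on the "using" side:
# resize the image to whatever the model expects.
image = Image.open('cat.jpg').resize((224, 224))

# Run inference; the input/output names depend on how the model was exported.
prediction = model.predict({'image': image})
print(prediction)
```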

Types of ML

There are four principal types of machine learning: supervised (teaching the machine to make the right choice or prediction based on input by feeding it known cases), unsupervised (clustering unknown data by some common attributes), semi-supervised, and, finally, reinforcement learning (used primarily in robotics).
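To make the first two concrete, here is a small sketch using scikit-learn (mentioned below) on its built-in iris dataset: a supervised classifier that learns from labelled examples, and an unsupervised clustering that only sees the raw features. The dataset and model choices are illustrative assumptions, not a recommendation.

```python
# A small illustrative sketch: supervised vs. unsupervised learning
# with scikit-learn's built-in iris dataset (choices here are assumptions).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model is shown known cases (features + labels).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = KNeighborsClassifier().fit(X_train, y_train)
print('supervised accuracy:', classifier.score(X_test, y_test))

# Unsupervised: the model only sees the features and groups them
# into clusters by common attributes, without any labels.
clusters = KMeans(n_clusters=3, random_state=0).fit_predict(X)
print('cluster assignments:', clusters[:10])
```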

Where do I begin?

Machine learning is heavily based on data science, which is in turn heavily based on statistics.

So getting the basics right is crucial for understanding the steps and choices in model training, and the lingo used in training scripts.

Probably the most straightforward way of getting the hang of it is to follow a YouTube tutorial on RStudio and its typical usage.

Yes, it implies learning some of the R language as you go, but hey, it is the domain-specific language for statistics, and machine learning libraries and platforms imitate it in many ways, at least conceptually.

ML models consist of layers, many of which are NN (neural network) layers.

There are dozens of NN types, some more popular than others: notably FF (feed-forward), CNN (convolutional), RNN (recurrent), and LSTM (long short-term memory) networks.

There are plenty of other commonly used layers in ML models besides NN layers, such as AFs (activation functions) like ReLU, pooling layers, dense (fully connected) layers, etc.

Despite the vast variety of ML architectures, almost all of them consist of common layers, which in turn consist of common building blocks.

That is why there are specialised platforms/APIs that allow building ML models from these common blocks, such as scikit-learn, Caffe, TensorFlow (TF), Turi Create (TC), Theano, Torch, Keras (which is essentially a simplified interface to TensorFlow and Theano), etc.
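As an illustration of composing a model from such common blocks, here is a minimal Keras sketch; the input shape, layer sizes, and class count are arbitrary assumptions, not a recipe:

```python
# A minimal Keras sketch of composing a model from common building blocks.
# The input shape, layer sizes, and class count are arbitrary assumptions.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Convolutional layer with a ReLU activation function.
    Conv2D(16, kernel_size=3, activation='relu', input_shape=(224, 224, 3)),
    # Pooling layer to downsample the feature maps.
    MaxPooling2D(pool_size=2),
    Flatten(),
    # Dense (fully connected) layer producing class scores.
    Dense(10, activation='softmax'),
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```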

There are also a number of cloud-based solutions, such as Google Cloud Platform (GCP), which require a connection to a server.

There is also a rather special cloud service called IBM Watson, which can automatically distribute updated models to mobile devices via its SDK for offline use.

Show me the steps

Typically, the model training process would involve the following steps (a concrete sketch follows the list):

1. Gather, arrange, resize, pre-format, label, and batch the training data.
2. Design the ML architecture by arranging different layers, such as NN layers, pooling layers, etc.
3. Build the training pipeline that will locate, load, and feed the training data to the model.
4. Train the model.
5. Export (‘freeze’ in TF lingo) the model.
6. Convert the frozen model to the desired format (such as CoreML).
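To make the steps concrete, here is a hedged end-to-end sketch using Turi Create (introduced below) to train an image classifier; the folder layout and file names are assumptions for illustration only:

```python
# A hedged end-to-end sketch of the steps above, using Turi Create.
# The folder layout ('training_images/<label>/*.jpg') and file names
# are illustrative assumptions.
import turicreate as tc

# 1. Gather and label the training data: load images and derive a label
#    for each image from its parent folder name.
data = tc.image_analysis.load_images('training_images/', with_path=True)
data['label'] = data['path'].apply(lambda p: p.split('/')[-2])

# 2-4. Design the architecture, build the pipeline, and train the model.
#      Turi Create hides these steps behind a single task-focused call.
model = tc.image_classifier.create(data, target='label')

# 5-6. Export the trained model directly to the CoreML format.
model.export_coreml('MyImageClassifier.mlmodel')
```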

Getting your hands dirty

Machine learning model training scripts are predominantly written in Python.

The situation is changing, most notably with Apple’s CreateML, which allows coding the model training process in Swift but has rather limited functionality at the moment of writing.

So learning Python and its libraries relevant to machine learning is unavoidable.

So, where to start with coding? It depends.

If you are familiar with Swift, and the current limitations of Apple’s CreateML do not affect your particular use case, go straight to an Xcode playground.

If you are willing to use an online service, such as GCP or IBM Watson, just follow their proprietary guidelines and manuals.

If, however, you are willing to take the plunge and have it all, here’s an easy way to start.

Download and install Anaconda.

It will take care of all Python-related fuss on your computer.

At the moment of writing it supports Python 3.7, but most machine learning tools still require Python 3.6, so you will have to create a virtual environment for Python 3.6.

Assuming that you are using a Mac, go to a Terminal window and type conda create -n venv-3.6 python=3.6 anaconda, where venv-3.6 is the name of our new environment.

Now activate it by typing conda activate venv-3.6.

Now it’s time to get the toys.

Choose those you need, although I have already limited the list to the mainstream tools, with a bias towards intended use with CoreML on the iOS platform.

Here’s how you do it (make sure your Python 3.6 environment is active): pip install -U turicreate, pip install -U coremltools, pip install -U tfcoreml, pip install -U tensorflow.

This will install Apple’s Turi Create, coremltools, the TensorFlow-to-CoreML converter (tfcoreml), and TensorFlow itself, respectively.
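As a hedged sketch of what the TensorFlow-to-CoreML converter is for, converting a frozen TensorFlow graph might look roughly like this; the file names and tensor names are illustrative assumptions and depend entirely on how your graph was built and frozen:

```python
# A hedged sketch of converting a frozen TensorFlow graph to CoreML
# with tfcoreml. File names and tensor names are illustrative assumptions.
import tfcoreml

tfcoreml.convert(
    tf_model_path='frozen_model.pb',          # frozen TF graph (step 5)
    mlmodel_path='MyModel.mlmodel',           # CoreML output (step 6)
    output_feature_names=['Softmax:0'],       # output tensor name(s)
    input_name_shape_dict={'input:0': [1, 224, 224, 3]},  # input tensor shape
)
```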

Once everything you need is installed, open the Anaconda-Navigator app (from Applications on a Mac).

Choose venv-3.6 (the Python 3.6 environment) in the drop-down list on top (it is called “Applications on”).

If you are adventurous and willing to learn and practice the basic concepts and techniques of statistics and data science, install RStudio from Anaconda-Navigator (you can open it from there, if it is already installed).

Otherwise, let’s proceed to the IDE you will need in order to use the Python tools, which is called Jupyter Notebook.

Launch it, and it will open in a web browser.

From there, you can create a new Notebook on your local file system and write and interactively execute chunks of Python code in it.
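A reasonable first cell, just to check that the tools installed above import correctly (a simple sanity check, nothing more), could look like this:

```python
# A simple sanity check for a first notebook cell: verify that the tools
# installed above can be imported and print their versions.
import turicreate
import coremltools
import tensorflow

print('Turi Create:', turicreate.__version__)
print('coremltools:', coremltools.__version__)
print('TensorFlow :', tensorflow.__version__)
```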

Happy exploration!
