4 Steps for Product Managers to Get Started with Machine Learning

Frameworks and examples from 15 product experts at leading technology companies

By Brian Polidori

Background

This article is part of a larger independent study (see posts below) about how product managers are incorporating machine learning into their products.

It was conducted by Ryan Dingler and myself while MBAs at UC Berkeley with the help of Vince Law as our faculty advisor.

The research is an attempt to understand how product managers are designing, planning, and building ML-enabled Products.

To develop this understanding, we interviewed 15 product development experts at various technology companies.

Of the 15 companies represented, 14 have a market capitalization greater than $1 billion, 11 are publicly traded, 6 are B2C, and 9 are B2B.

The product manager’s guide to ML series:

1. Identifying Opportunities for Using Machine Learning as a Product Manager
2. 4 Steps for Product Managers to Get Started with Machine Learning
3. Creating a Data Strategy for Machine Learning as a Product Manager
4. Principles for Product Managers on How to Manage a Machine Learning Model

Before you get started with machine learning

After identifying a problem that machine learning (ML) might help solve, it’s essential to take some steps before jumping into model development with your team.

From our research, most product teams follow a structured process for framing and evaluating ML problems.

We share the four most important of these learnings in this post.

Step 1: Write a Machine Learning Hypothesis

Sometimes product teams start their ML projects with just a general sense of what they expect to achieve.

The problem with this unstructured approach is that you’ll learn less and won’t create frameworks for future use cases.

Without a firm hypothesis, you also run an increased risk (especially with smaller datasets) of finding a random feature that correlates with your target variable just by chance.

This correlation may lead to the false belief that the random feature is relevant in your model, which could cause the model to generalize poorly in production.

A reasonable hypothesis will have all of the following parts:

1. Change that you are testing
2. Desired outcome (high-level)
3. Success metrics
4. Model output (numbers, label, clusters, etc.)
5. Target
6. Predictors (high-level)

Example hypothesis

Improving the search ranking in Dropbox with ML [1. CHANGE] will allow users to find the correct file [2. OUTCOME] in 15% less time [3. METRIC]. The model will score each possible file [4. MODEL OUTPUT] by using files recently shared with the user and recently viewed files [6. PREDICTORS] to predict the file that the user ultimately selects [5. TARGET].
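If it helps to make the framework concrete, here is a minimal sketch of how these six parts could be captured in one place, using the Dropbox example above (the class and field names are our own, invented for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLHypothesis:
    """Six parts of an ML hypothesis; class and field names are illustrative."""
    change: str            # 1. Change that you are testing
    outcome: str           # 2. Desired outcome (high-level)
    success_metric: str    # 3. Success metrics
    model_output: str      # 4. Model output (numbers, label, clusters, etc.)
    target: str            # 5. Target
    predictors: List[str] = field(default_factory=list)  # 6. Predictors (high-level)

# The Dropbox search example, expressed in this structure:
dropbox_search = MLHypothesis(
    change="Improve search ranking in Dropbox with ML",
    outcome="Users find the correct file",
    success_metric="15% less time to find the correct file",
    model_output="A relevance score for each candidate file",
    target="The file the user ultimately selects",
    predictors=["files recently shared with the user", "recently viewed files"],
)
```

Writing the hypothesis down like this makes it easy to check that none of the six parts is missing before development starts.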

Note: Be cautious about skipping the hypothesis step.

We spoke with some teams that didn’t create a solid hypothesis beforehand.

Their hypothesis was often: “X competitor has done this. Why can’t we?” Even if your competitor has done it, we’d still recommend creating a simple hypothesis using the above framework.

Step 2: What Data Do I Need?

Generally speaking, your data needs to be informative and pertain to the problem being solved, but for ML, data also needs to be abundant.

A simple rule of thumb is to have at least thousands of rows of data for linear models and hundreds of thousands for a neural network.
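As a rough illustration only, here is that rule of thumb in code; the thresholds come straight from the guideline above and should be treated as heuristics, not hard limits:

```python
# Rough data-volume sanity check before committing to a model family.
# Thresholds follow the rule of thumb above; heuristics, not hard limits.
def suggest_approach(num_rows: int) -> str:
    if num_rows < 1_000:
        return "heuristics"      # likely too little data for ML
    elif num_rows < 100_000:
        return "linear model"    # thousands of rows can support simple models
    else:
        return "neural network"  # hundreds of thousands of rows or more

print(suggest_approach(50_000))  # -> linear model
```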

If you don’t have the data, consider ways to acquire the right data or stick to a heuristic-based approach.

Assuming that there’s enough data, the data still has to have some pattern.

Algorithms learn from these patterns.

You don’t have to know the precise pattern before you get started, but you should be able to articulate it qualitatively or have a gut feeling.

In the Dropbox search example, the product team probably had a gut feeling that files recently shared with the user were related to the file that the user ultimately selected.

However, the team doesn’t need to know the exact relationship or pattern before starting the project.

Lastly, remember that even with abundant data, not all problems are solvable.

As one social media PM we interviewed put it: “Having the data and getting value from it are not the same thing.”

Key points

- The data needs to help answer your hypothesis and have some assumed pattern.
- Even with the right data, you may not end up with a working model.

Step 3: Start with Heuristics

ML models require a lot of data, are complex, and can take a long time to develop and test before becoming production-ready.

Therefore, it’s often best to start with a simple set of heuristics before trying ML.

A heuristic is a rule that provides a shortcut to solving difficult problems.

Heuristics are fast to build, relatively simple to implement, and easy to understand.

These rules can also act as a shortcut to testing your hypothesis without spending exorbitant amounts of time perfecting models.

In a way, heuristics can serve as a prototyping tool before building out the full-featured set of ML models.

In curating a simple newsfeed, if the desired outcome of your hypothesis is to increase user engagement, this might be accomplished through heuristics that surface essential content to users.

For instance, any post with more than five likes an hour will appear on the newsfeed.

Of course, this would not be a very accurate proxy for virality, so you might want to include additional heuristics for the number of comments or shares.
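A minimal sketch of what such a rule set might look like in code (the thresholds and field names are invented for illustration):

```python
# Newsfeed surfacing heuristics; thresholds and field names are invented
# for illustration.
def should_surface(post: dict) -> bool:
    likes_per_hour = post["likes"] / max(post["hours_since_posted"], 1)
    # Rule 1: the "more than five likes an hour" heuristic from above.
    if likes_per_hour > 5:
        return True
    # Rules 2 and 3: additional rough proxies for virality.
    if post["comments"] > 10 or post["shares"] > 3:
        return True
    return False

post = {"likes": 42, "hours_since_posted": 4, "comments": 2, "shares": 0}
print(should_surface(post))  # True: 10.5 likes/hour clears the threshold
```

Rules like these are easy to read, test, and tweak, which is exactly what makes heuristics useful as a prototyping tool.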

In many cases, the project may stop with a simple set of heuristics (no need for ML).

Usually, this is the case if data isn’t abundant enough for ML or if the heuristics are simple, perform well, and are easy to maintain.

Many of the companies we spoke with started to consider ML only after their heuristics became too complicated.

Overcomplication can happen as you try to build more and more heuristics that are interdependent, overlapping, and personalized.

Key points

- Heuristics can be a great shortcut to testing your hypothesis.
- Consider ML if your heuristics become too complicated to maintain or if performance lags.

Step 4: Risk Management Strategies

From an image recognition algorithm labeling something inappropriately to a chatbot learning to become culturally offensive, there are plenty of examples of ML models gone wrong.

Many companies we spoke with are keenly aware of these issues, and they use risk management techniques to help mitigate (not solve) them.

Blacklists

Many companies maintain blacklists for words, phrases, groups, or organizations that might be considered bad actors.

If you start typing “What the fu” into Google search, the autocomplete results might be: “What The Funk! Band,” “what the fudge,” and “what the future.” No four-letter words in this ML model.

Google will not complete words or phrases deemed inappropriate as part of its company autocomplete policies.

Another example is when Pinterest made news by blacklisting anti-vaccination related posts from its search.
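A minimal sketch of how a blacklist can act as a guardrail in front of a model’s output (the terms, suggestions, and function name are invented for illustration):

```python
# Blacklist guardrail applied to model output before it reaches users.
# Terms, suggestions, and the function name are invented for illustration.
BLACKLISTED_TERMS = {"badword", "bannedgroup"}

def filter_suggestions(suggestions):
    """Drop any model suggestion containing a blacklisted term."""
    return [
        s for s in suggestions
        if not any(term in s.lower() for term in BLACKLISTED_TERMS)
    ]

raw = ["what the fudge", "what the badword", "what the future"]
print(filter_suggestions(raw))  # ['what the fudge', 'what the future']
```

Note that the filter runs after the model, so the model itself is unchanged; the blacklist simply mitigates what users see.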

Bias in machine learning

The process of data collection and interpretation is often biased.

Many companies train their initial models with internal data.

This approach is convenient, but it presents problems.

For instance, in expense reporting, receipts collected by people at your own company may not represent receipts that your customers collect all across the world.

This data collection bias will decrease your model’s ability to generalize and create a bias towards companies similar to yours.

Excluding variables associated with bias is not the answer.

Many product teams assume that it is best to exclude race, gender, or other variables from their models.

Amazon tried this with its automated resume-screening ML model, and the algorithm still ended up biased against women.

In many datasets, there are other proxies for race or gender.

“With sufficiently rich data, class memberships will be unavoidably encoded across other features.” — Prof. Moritz Hardt of UC Berkeley

There is no silver bullet for dealing with this type of bias.

Start by including features like class membership (gender, ethnicity, etc.) in your model to help you measure the bias that does exist.

Once the bias is measured, your solution may depend on the use case.

It may make sense to use mitigation techniques when retraining the model, to counteract the bias after training, or even to change your user interface.
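A minimal sketch of the measurement step, assuming a binary classifier and a recorded group label for each prediction (the data and the choice of metric are invented for illustration):

```python
import numpy as np

# Compare a binary classifier's positive-prediction rate across groups.
# The data and metric choice are invented for illustration; pick a
# fairness metric that matches your use case.
groups = np.array(["A", "A", "B", "B", "B", "A"])
preds = np.array([1, 0, 0, 0, 1, 1])  # the model's binary predictions

rates = {str(g): float(preds[groups == g].mean()) for g in np.unique(groups)}
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}

# A large gap (here, the demographic-parity difference) is one signal
# that retraining with mitigation or post-hoc correction is needed.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.2f}")
```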

Key points

- Consider the risks of your ML use case and come up with a policy to address them.
- Blacklists can act as guardrails for ML models.
- Be aware of what types of bias might impact your model.
- To mitigate biases in your models, measure them first and then take steps to counteract them.
