A Gentle Introduction to Customer Segmentation

Well, in this case you can use their behaviour (specifically, which offers they did or did not go for) as a way of grouping them with like-minded customers.

You can then look into each of those groups to unearth trends and patterns, and use them to shape future offers.

On a slightly more technical note, it’s important to mention that there are many K-Means algorithms available (Hartigan-Wong, Lloyd, and MacQueen, to name but a few), but they all share the same basic concept: each element is assigned to a cluster so as to minimise the sum of squared Euclidean distances to the cluster centres – a process also referred to as minimising the total within-cluster sum of squares (tot.withinss).
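In symbols, if the data points $x_i$ are partitioned into clusters $C_1, \dots, C_K$ with centres $\mu_1, \dots, \mu_K$, the quantity being minimised is:

$$\text{tot.withinss} = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$$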

Loading the Packages

```r
library(tidyverse)
library(lubridate)
library(knitr)
library(readxl)
library(broom)
library(umap)
library(ggrepel)
```

The Data

The dataset comes from John Foreman’s book, Data Smart.

It contains sales promotion data for a fictional wine retailer: details of 32 promotions (including wine variety, minimum purchase quantity, percentage discount, and country of origin) and a list of 100 customers with the promotions they responded to.

```r
offers_tbl <- read_excel('./00_data/WineKMC.xlsx', sheet = 'OfferInformation')

offers_tbl <- offers_tbl %>%
    set_names(c('offer', 'campaign', 'varietal', 'min_qty_kg',
                'disc_pct', 'origin', 'past_peak'))

head(offers_tbl)
## # A tibble: 6 x 7
##   offer campaign varietal   min_qty_kg disc_pct origin     past_peak
##   <dbl> <chr>    <chr>           <dbl>    <dbl> <chr>      <chr>
## 1     1 January  Malbec             72       56 France     FALSE
## 2     2 January  Pinot Noir         72       17 France     FALSE
## 3     3 February Espumante         144       32 Oregon     TRUE
## 4     4 February Champagne          72       48 France     TRUE
## 5     5 February Cabernet~         144       44 New Zeala~ TRUE
## 6     6 March    Prosecco          144       86 Chile      FALSE
```

```r
transac_tbl <- read_excel('./00_data/WineKMC.xlsx', sheet = 'Transactions')

transac_tbl <- transac_tbl %>%
    set_names(c('customer', 'offer'))

head(transac_tbl)
## # A tibble: 6 x 2
##   customer offer
##   <chr>    <dbl>
## 1 Smith        2
## 2 Smith       24
## 3 Johnson     17
## 4 Johnson     24
## 5 Johnson     26
## 6 Williams    18
```

The data needs to be converted to a User-Item format (a.k.a. a Customer-Product matrix), featuring customers across the top and offers down the side.

The cells are populated with 0s and 1s, where a 1 indicates that a customer responded to a specific offer.

This type of matrix is also known as a binary rating matrix and does NOT require normalisation, as all entries are already on the same 0/1 scale.

```r
wine_tbl <- transac_tbl %>%
    left_join(offers_tbl) %>%
    mutate(value = 1) %>%
    spread(customer, value, fill = 0)

head(wine_tbl)
## # A tibble: 6 x 107
##   offer campaign varietal   min_qty_kg disc_pct origin   past_peak
##   <dbl> <chr>    <chr>           <dbl>    <dbl> <chr>    <chr>
## 1     1 January  Malbec             72       56 France   FALSE
## 2     2 January  Pinot Noir         72       17 France   FALSE
## 3     3 February Espumante         144       32 Oregon   TRUE
## 4     4 February Champagne          72       48 France   TRUE
## 5     5 February Cabernet~         144       44 New Zea~ TRUE
## 6     6 March    Prosecco          144       86 Chile    FALSE
## # ... with 100 more variables: Adams <dbl>, Allen <dbl>, Anderson <dbl>,
## #   Bailey <dbl>, Baker <dbl>, Barnes <dbl>, ...
```

Clustering the Customers

The K-Means algorithm comes with the stats package, one of the core system libraries in R, and is fairly straightforward to use. I just need to pass a few parameters to the kmeans() function.

```r
user_item_tbl <- wine_tbl[, 8:107]

set.seed(196)

kmeans_obj <- user_item_tbl %>%
    kmeans(centers  = 5,   # number of clusters
           nstart   = 100, # number of random sets to be chosen
           iter.max = 50)  # max number of iterations allowed
```

I can quickly inspect the model with glance() from the broom package, which provides a summary of model-level statistics.

```r
glance(kmeans_obj) %>% glimpse()
## Observations: 1
## Variables: 4
## $ totss        <dbl> 283.1875
## $ tot.withinss <dbl> 189.7255
## $ betweenss    <dbl> 93.46201
## $ iter         <int> 3
```
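As a quick aside (my own check, not from the original post), these three sums of squares decompose additively for any fitted k-means model, which is easy to verify from the glance() output: here 189.7255 + 93.46201 = 283.18751, matching totss above up to rounding.

```r
# totss = tot.withinss + betweenss for any k-means fit
g <- glance(kmeans_obj)
all.equal(g$totss, g$tot.withinss + g$betweenss)
```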

The one metric to really keep an eye on is the total within-cluster sum of squares (tot.withinss): although it always decreases as clusters are added, the optimal number of clusters is the one beyond which further reductions become marginal. So I want to fit the K-Means model for different numbers of clusters and see how tot.withinss behaves.

First, I build a function that fits the model for a set number of centers (4 by default in this case) and check that it works with glance().

```r
kmeans_map <- function(centers = 4) {
    user_item_tbl %>%
        kmeans(centers = centers, nstart = 100, iter.max = 50)
}

4 %>% kmeans_map() %>% glance()
## # A tibble: 1 x 4
##   totss tot.withinss betweenss  iter
##   <dbl>        <dbl>     <dbl> <int>
## 1  283.         203.      80.0     2
```

Then, I create a nested tibble, which is a way of “nesting” columns inside a data frame.

The great thing about nested data frames is that you can put essentially anything you want in them: lists, models, data frames, plots, etc.!

```r
kmeans_map_tbl <- tibble(centers = 1:15) %>%           # create column with centres
    mutate(k_means = centers %>% map(kmeans_map)) %>%  # iterate kmeans_map row-wise to
                                                       # gather kmeans models for each centre
    mutate(glance = k_means %>% map(glance))           # apply glance() row-wise to gather
                                                       # each model's summary metrics

kmeans_map_tbl %>% glimpse()
## Observations: 15
## Variables: 3
## $ centers <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...
## $ k_means <list> [<1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ glance  <list> [<tbl_df[1 x 4]>, <tbl_df[1 x 4]>, ...
```

Last, I can build a scree plot and look for the “elbow” on the graph: the point where the gains from additional clusters start to level off.

In this case, 5 appears to be an optimal number, as the drop in tot.withinss from 5 to 6 clusters is not as pronounced as the previous ones.

```r
kmeans_map_tbl %>%
    unnest(glance) %>%                     # unnest the glance column
    select(centers, tot.withinss) %>%      # select centers and tot.withinss
    ggplot(aes(x = centers, y = tot.withinss)) +
    geom_line(colour = 'grey30', size = .8) +
    geom_point(colour = 'green4', size = 3) +
    geom_label_repel(aes(label = centers), colour = 'grey30') +
    theme_light() +
    labs(title = 'Scree Plot')
```
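To back up the visual read of the elbow, here is a small sketch (my addition, not in the original post) that computes the successive drops in tot.withinss directly from the nested tibble; the elbow is where the drop column shrinks sharply.

```r
# For each number of centres, how much did tot.withinss fall
# relative to the previous model?
kmeans_map_tbl %>%
    unnest(glance) %>%
    select(centers, tot.withinss) %>%
    mutate(drop = lag(tot.withinss) - tot.withinss)
```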

Visualising the Segments

Now that I have identified the optimal number of clusters, I want to visualise them.

To do so, I use UMAP (Uniform Manifold Approximation and Projection), a dimensionality reduction technique that can be used for cluster visualisation in a similar way to Principal Component Analysis and t-SNE.
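One caveat worth flagging (my addition): UMAP is a stochastic algorithm, so the 2D coordinates change from run to run unless the seed is fixed. A minimal sketch, assuming the CRAN umap package used here (which exposes a umap.defaults configuration object); custom_config and umap_obj_repro are illustrative names:

```r
# Fix UMAP's random state so the 2D projections are reproducible.
# (umap.defaults and its random_state field come from the 'umap' package.)
custom_config <- umap::umap.defaults
custom_config$random_state <- 196

umap_obj_repro <- umap::umap(as.matrix(user_item_tbl), config = custom_config)
```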

First, I create a umap object and pull out the layout element (which contains the coordinates that can be used to visualise the dataset), convert it to a tibble, and attach the offer column from wine_tbl.

```r
umap_obj <- user_item_tbl %>% umap()

umap_tbl <- umap_obj$layout %>%
    as_tibble() %>%                        # change to a tibble
    set_names(c('x', 'y')) %>%             # rename columns
    bind_cols(wine_tbl %>% select(offer))  # attach offer reference
```

Then, I pluck the 5th k-means model from the nested tibble, attach the cluster assignments to the output with broom::augment(), and join offer and cluster to the umap_tbl.

```r
umap_kmeans_5_tbl <- kmeans_map_tbl %>%
    pull(k_means) %>%
    pluck(5) %>%                       # pluck element 5
    broom::augment(wine_tbl) %>%       # attach .cluster to the tibble
    select(offer, .cluster) %>%
    left_join(umap_tbl, by = 'offer')  # join umap_tbl by offer
```

At last, I can visualise the UMAP’ed projections of the clusters.

plotly adds some nice interactivity, which brings the chart to life!

```r
umap_kmeans_5_tbl %>%
    mutate(label_text = str_glue('Offer: {offer}
                                  Cluster: {.cluster}')) %>%
    ggplot(aes(x, y, colour = .cluster)) +
    geom_point() +
    geom_label_repel(aes(label = label_text), size = 3) +
    theme_light() +
    labs(title = 'UMAP 2D Projections of K-Means Clusters', caption = "") +
    theme(legend.position = 'none')
```
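Since the chart above is a static ggplot, here is a minimal sketch (my addition, assuming the plotly package) of how it could be made interactive. plotly does not render geom_label_repel, so the label text is passed as a text aesthetic for hover tooltips instead:

```r
library(plotly)

p <- umap_kmeans_5_tbl %>%
    mutate(label_text = str_glue('Offer: {offer} - Cluster: {.cluster}')) %>%
    ggplot(aes(x, y, colour = .cluster, text = label_text)) +  # 'text' feeds the tooltip
    geom_point() +
    theme_light() +
    theme(legend.position = 'none')

ggplotly(p, tooltip = 'text')  # hover over a point to see offer and cluster
```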

Evaluating the Clusters

Now we can finally have a closer look at the individual clusters to see what K-Means has identified.

But first, let’s bring all the information together in one data frame.

```r
cluster_trends_tbl <- wine_tbl %>%
    left_join(umap_kmeans_5_tbl) %>%
    arrange(.cluster) %>%
    select(.cluster, offer:past_peak)
```
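Before drilling into each segment, a quick aside (my addition, not from the original post): counting how many offers landed in each cluster gives a first feel for how balanced the segmentation is.

```r
# Number of offers assigned to each of the five clusters
cluster_trends_tbl %>% count(.cluster)
```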

Clusters 1 & 2

Customers in cluster 1 purchase high volumes of sparkling wines (Champagne and Prosecco), whilst those in the second segment favour low-volume purchases across different varieties.

```r
cluster_trends_tbl %>%
    filter(.cluster == 1 | .cluster == 2) %>%
    count(.cluster, varietal, origin, min_qty_kg, disc_pct) %>%
    select(-n)
## # A tibble: 9 x 5
##   .cluster varietal     origin       min_qty_kg disc_pct
##   <fct>    <chr>        <chr>             <dbl>    <dbl>
## 1 1        Champagne    France               72       48
## 2 1        Champagne    New Zealand          72       88
## 3 1        Prosecco     Chile               144       86
## 4 2        Espumante    Oregon                6       50
## 5 2        Espumante    South Africa          6       45
## 6 2        Malbec       France                6       54
## 7 2        Merlot       Chile                 6       43
## 8 2        Pinot Grigio France                6       87
## 9 2        Prosecco     Australia             6       40
```

Clusters 3 & 4

Customers in these groups have very specific taste when it comes to wine: those in the third segment have a penchant for Pinot Noir, whereas group 4 only buys French Champagne in high volumes.

```r
cluster_trends_tbl %>%
    filter(.cluster == 3 | .cluster == 4) %>%
    count(.cluster, varietal, origin, min_qty_kg, disc_pct) %>%
    select(-n)
## # A tibble: 6 x 5
##   .cluster varietal   origin    min_qty_kg disc_pct
##   <fct>    <chr>      <chr>          <dbl>    <dbl>
## 1 3        Pinot Noir Australia        144       83
## 2 3        Pinot Noir France            72       17
## 3 3        Pinot Noir Germany           12       47
## 4 3        Pinot Noir Italy              6       34
## 5 4        Champagne  France            72       63
## 6 4        Champagne  France            72       89
```

Cluster 5

The fifth segment is a little more difficult to categorise as it encompasses many different attributes.

The only clear trend is that customers in this segment picked up all available Cabernet Sauvignon offers.

```r
cluster_trends_tbl %>%
    filter(.cluster == 5) %>%
    count(.cluster, varietal, origin, min_qty_kg, disc_pct) %>%
    select(-n)
## # A tibble: 17 x 5
##    .cluster varietal           origin       min_qty_kg disc_pct
##    <fct>    <chr>              <chr>             <dbl>    <dbl>
##  1 5        Cabernet Sauvignon France               12       56
##  2 5        Cabernet Sauvignon Germany              72       45
##  3 5        Cabernet Sauvignon Italy                72       82
##  4 5        Cabernet Sauvignon Italy               144       19
##  5 5        Cabernet Sauvignon New Zealand         144       44
##  6 5        Cabernet Sauvignon Oregon               72       59
##  7 5        Champagne          California           12       50
##  8 5        Champagne          France               72       85
##  9 5        Champagne          Germany              12       66
## 10 5        Chardonnay         Chile               144       57
## 11 5        Chardonnay         South Africa        144       39
## 12 5        Espumante          Oregon              144       32
## 13 5        Malbec             France               72       56
## 14 5        Merlot             California           72       88
## 15 5        Merlot             Chile                72       64
## 16 5        Prosecco           Australia            72       83
## 17 5        Prosecco           California           72       52
```
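To double-check the Cabernet Sauvignon claim, here is a short sketch (my addition, not from the original post) that counts which clusters the Cabernet Sauvignon offers ended up in.

```r
# If all Cabernet Sauvignon offers sit in cluster 5, this returns a single row
cluster_trends_tbl %>%
    filter(varietal == 'Cabernet Sauvignon') %>%
    count(.cluster)
```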

Final Thoughts

Although it’s not going to give you all the answers, clustering is a powerful exploratory exercise that can help you reveal patterns in your consumer base, especially when you have a brand-new market to explore and do not have any prior knowledge of it.

It’s very easy to implement, and even on a small dataset like the one I used here you can unearth interesting patterns of behaviour.

With little effort we have learned that some of our customers favour certain varieties of wine, whereas others prefer to buy in high or low quantities.

Such information can be used to tailor your pricing strategies and marketing campaigns towards those customers who are more likely to respond.

Moreover, customer segmentation allows for a more efficient allocation of marketing resources and the maximisation of cross- and up-selling opportunities.

Segmentation can also be enriched by overlaying information such as customers’ demographics (age, race, religion, gender, family size, ethnicity, income, education level), geography (where they live and work), and psychographics (social class, lifestyle, and personality characteristics), but these go beyond the scope of this mini-project.

Code Repository

The full R code can be found on my GitHub profile.

References

- For a broader view on the potential of Customer Segmentation
- For a critique of some of k-means’ drawbacks
- Customer Segmentation at Bain & Company
