Predicting NBA Rookie Stats with Machine Learning

Siddhesvar Kannan, Jun 29

The 2019 NBA Draft featured many big names such as (from left to right): Jarrett Culver, Zion Williamson, and Ja Morant

Every year, millions of basketball fans from around the world tune in to the NBA Draft with the hope that their favorite team strikes gold and discovers the next big NBA star.

The people in the front offices of these NBA teams spend thousands of hours scouting and evaluating college and international talent trying to find players that can succeed at the pro level and contribute to the team.

Following the growth of the field of data science, it makes sense to try and evaluate talent beyond traditional methods.

This article documents a project that attempted to do just that by predicting the stat-lines for the newest batch of NBA rookies.

Data Preparation

The overall objective of this project was to predict how certain players would do in their first year in the NBA in terms of points, assists, rebounds, steals, and blocks, and the first step to achieving that was to create the right dataset.

There are a lot of variables that contribute to the success of an NBA player, but for this project I decided to focus on how well these various players performed at the college level.

In order to create this dataset, BeautifulSoup was used to scrape the NBA rookie stats of players drafted between 2000 and 2018 from www.basketball-reference.com. After that, the average college stats of all of those drafted players were scraped from www.sports-reference.com/cbb, and everything was formatted into a Pandas DataFrame in Python.

All of the datasets that were created for the purposes of this project are now available here as a collection of .csv files.
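As a rough illustration of this scraping step, here is a minimal sketch using requests, BeautifulSoup, and pandas. The URL pattern, table id, and output file name are assumptions for illustration, not necessarily the exact ones used in the project.

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup

def scrape_rookie_table(year):
    """Scrape one season's rookie stats table (hypothetical table id 'rookies')."""
    url = f"https://www.basketball-reference.com/leagues/NBA_{year}_rookies.html"
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    table = soup.find("table", {"id": "rookies"})
    # pandas can parse an HTML <table> element directly into a DataFrame
    return pd.read_html(str(table))[0]

# Stack the 2000-2018 draft classes into a single DataFrame and save it as a csv
rookies = pd.concat([scrape_rookie_table(y) for y in range(2000, 2019)], ignore_index=True)
rookies.to_csv("nba_rookie_stats_2000_2018.csv", index=False)
```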

Analysis

Before jumping into the Machine Learning models, it is good to first go over the dataset and look out for any basic/interesting patterns and anomalies.

Statistical Trends

The evolutions of college basketball and professional basketball were visualized by creating box plots of various statistics across different draft years.

Box plot distribution of the average points scored in college of players drafted by Year.

This year’s draft class (represented by the box plot for the year 2020) doesn’t stand out significantly in any statistical category.

That suggests this year’s draft class will be a very typical one, following the pattern set by previous years of a few superstars and a plethora of average to below-average role players.

Box plot distribution of the average 3-pointers attempted by NBA rookies by year

The NBA rookie box plot diagrams proved to be a lot more interesting, though, with more significant trends and patterns sticking out.

The most fascinating pattern here is the evolution of the 3-point shot and how much more popular it has become in recent years.

What is just as interesting as the uptick in average 3-point attempts is how recently the trend pivoted.

Before 2010, no rookie class appears to have averaged even one 3-point attempt per game, whereas after 2010, almost every rookie class exceeded that mark.
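As a sketch of how a box plot like this can be produced with pandas and matplotlib, assuming the rookie stats live in a csv with 'Draft Year' and '3PA' columns (illustrative file and column names):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Assumed columns: 'Draft Year' and '3PA' (3-point attempts per game as a rookie)
rookies = pd.read_csv("nba_rookie_stats_2000_2018.csv")

# One box per draft year, showing the distribution of rookie 3-point attempts
rookies.boxplot(column="3PA", by="Draft Year", figsize=(12, 6))
plt.ylabel("3-point attempts per game")
plt.title("Rookie 3-point attempts by draft year")
plt.suptitle("")  # remove the automatic pandas group-by title
plt.show()
```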

Clusters

Besides looking for historical statistical trends, the data was also analyzed from a cluster analysis perspective with two main objectives.

The generated clusters put into perspective how players in this draft class stack up against each other and how they stack up against rookies from previous years.

Agglomerative Clustering on college stats of the 2019 draft class

Three different clustering algorithms (K-Means clustering, Agglomerative clustering, and Affinity Propagation) were run on the dataset of the college stats of this year’s draft class.

Zion Williamson is one player who has received a lot of hype from the sports media world as the next big superstar, and all of the clustering algorithms compare his college performance to that of Brandon Clarke and Bol Bol.
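A minimal sketch of running those three scikit-learn algorithms on a college-stats matrix follows; the file name, feature columns, cluster counts, and 'Player' column are assumptions for illustration:

```python
import pandas as pd
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering, KMeans
from sklearn.preprocessing import StandardScaler

draft_2019 = pd.read_csv("college_stats_2019_draft.csv")  # hypothetical file name
features = ["PTS", "TRB", "AST", "STL", "BLK"]             # assumed per-game columns
X = StandardScaler().fit_transform(draft_2019[features])

draft_2019["kmeans"] = KMeans(n_clusters=6, random_state=0).fit_predict(X)
draft_2019["agglomerative"] = AgglomerativeClustering(n_clusters=6).fit_predict(X)
draft_2019["affinity"] = AffinityPropagation(random_state=0).fit_predict(X)

# Players sharing a label were grouped together by that algorithm
print(draft_2019[["Player", "kmeans", "agglomerative", "affinity"]].head())
```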

Affinity Propagation Clustering on college stats of the 2000-2019 draft classes

The same three clustering algorithms were run on the dataset comprising the college stats of players from the last 20 NBA Draft classes, and some interesting results were obtained.

The Affinity Propagation model here describes Zion as a hybrid of Blake Griffin and Deandre Ayton, and it correspondingly estimates that he will put up an impressive stat line of 19.4 points, 11.2 rebounds, 2.8 assists, 0.85 steals, and 0.7 blocks per game.

Feature Engineering

The three main steps in creating powerful machine learning models are selecting/manipulating the input features, choosing the most successful algorithm, and fine-tuning that algorithm’s hyper-parameters.

That is why, before running the data through all of the ML algorithms, some adjustments need to be made to the dataset.

Categorical variables, such as the name of the college and the name of the team that drafted the player, were originally broken down into a series of dummy variables uniquely representing each college/team.

This technique was ultimately unsuccessful, though, as the algorithms run on this modified dataset tended to yield lower metrics than algorithms run on the original dataset without the team or college features.
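For reference, this kind of one-hot encoding can be done with a single pandas call; the file and column names below are assumptions:

```python
import pandas as pd

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset

# Expand 'College' and 'Team' into one 0/1 indicator column per unique value
df_dummies = pd.get_dummies(df, columns=["College", "Team"])
print(df_dummies.shape)  # feature count grows by roughly the number of unique colleges/teams
```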

The team variable wasn’t very strong because teams fundamentally change from year to year.

For example, it doesn’t seem quite right to equate the 2010 Cleveland Cavaliers team that won 74% of their games to the 2011 Cleveland Cavaliers team that ended up winning just 23% of their games.

That is why the team variable was replaced with some metadata features regarding the success of said team the year before the player got drafted (e.g. wins, point differential per game, etc.).

This feature expansion was validated to a degree by the results, as the algorithms run on the modified dataset yielded better metrics than the algorithms run on the raw dataset with just team.
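One way such prior-season team features could be attached is with a pandas merge; the file names and column names below are hypothetical:

```python
import pandas as pd

players = pd.read_csv("rookie_training_data.csv")     # assumed 'Team' and 'Draft Year' columns
team_history = pd.read_csv("team_season_stats.csv")   # assumed 'Team', 'Season', 'Wins', 'PointDiff'

# Join each player to his drafting team's record from the season before the draft
players["PriorSeason"] = players["Draft Year"] - 1
enriched = players.merge(
    team_history,
    left_on=["Team", "PriorSeason"],
    right_on=["Team", "Season"],
    how="left",
).drop(columns=["Season"])
```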

Correlation matrix between various features

Besides experimentation with dummy variables, a correlation matrix was constructed to better understand the strength of the relationships between the input variables and the target variables.

For example, as seen in the diagram above, there seems to be a strong correlation between field goals attempted per game in college and actual points scored per game in the NBA.
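A short sketch of how such a correlation matrix can be built and visualized, with seaborn used here for the heatmap; the file and column names are assumptions:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset

# Pearson correlations between college inputs and NBA rookie targets
corr = df[["college_FGA", "college_PTS", "college_TRB", "nba_PTS", "nba_TRB"]].corr()
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between college stats and rookie NBA stats")
plt.show()
```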

Recursive feature elimination (RFE) was also used to determine the best subset of variables to consider.

This method works by repeatedly fitting a linear regression model, retrieving the feature importances, and removing the feature with the lowest importance.

Upon experimentation, it was found that reducing the input variables from 37 to 30 using RFE produced the best results.
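A minimal sketch of this RFE step with scikit-learn, using linear regression as the base estimator and keeping 30 of the inputs; the file name and target column are assumptions, and all remaining columns are assumed to be numeric inputs:

```python
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X = df.drop(columns=["nba_PTS"])              # assumed: the 37 numeric input columns
y = df["nba_PTS"]                             # one of the five target stats

# Repeatedly drop the least important feature until 30 remain
selector = RFE(estimator=LinearRegression(), n_features_to_select=30, step=1)
selector.fit(X, y)
print(list(X.columns[selector.support_]))     # the surviving feature subset
```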

Models

A lot of different algorithms were run throughout this experiment, and the raw code for all of the algorithms described below can be found here.

Linear Regression

Linear regression via the method of least squares in two dimensions

Before jumping into all the fancy algorithms, a basic regression model was run to set some baseline benchmarks.

Linear regression was selected as this benchmark model; it works by attempting to fit a straight line (a hyperplane in N dimensions, where N is the number of features in the dataset) through all of the points provided in the training set.

The equation for this line is calculated via the method of least squares, where the objective is to minimize the sum of the squared errors.
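A baseline of this form takes only a few lines with scikit-learn; the file name, target column, and train/test split are assumptions:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

baseline = LinearRegression().fit(X_train, y_train)
print("baseline r² on the test set:", r2_score(y_test, baseline.predict(X_test)))
```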

Random Forest

A simple decision tree example

The first major algorithm used was the Random Forest regressor. This algorithm works by randomly extracting various subsets from the original training dataset, picking out the data that lies at the intersection of M random rows (samples) and N random input features (columns).

Next, the basic decision tree algorithm illustrated above is run on all of these different subsets.

Once all the trees are created, the prediction of an element in the test set is calculated by taking the mean of the results produced by running the input features through each and every decision tree.
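A hedged sketch of fitting such a regressor with scikit-learn; the hyper-parameter values are illustrative rather than the tuned ones, and the file and column names are assumptions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Each tree sees a bootstrap sample of rows and a random subset of features at each split
forest = RandomForestRegressor(n_estimators=500, max_features="sqrt", random_state=42)
forest.fit(X_train, y_train)
points_pred = forest.predict(X_test)  # prediction = mean of the individual tree outputs
```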

Extra Trees

The process of creating multiple decision trees to make a prediction

The second algorithm run was the Extra Trees regressor, which acts in a very similar manner to the Random Forest regressor.

Just like Random Forest, Extra Trees runs a decision tree algorithm on various random subsets generated from the training dataset to create predictions.

The big difference between these two algorithms comes from the way the decision tree is run on the subsets.

The Random Forest algorithm uses the traditional decision tree approach, where the feature and the value used at a split point are determined based on the information gain at that step.

The Extra Trees algorithm uses a more lenient decision tree approach where the feature and the value used at a split point are chosen randomly.
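The same setup works for Extra Trees; a sketch, again with illustrative hyper-parameters and assumed file and column names:

```python
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Unlike Random Forest, split thresholds are drawn at random rather than optimized
extra = ExtraTreesRegressor(n_estimators=500, random_state=42)
extra.fit(X_train, y_train)
points_pred = extra.predict(X_test)
```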

XGBoost

Process behind the gradient boosting technique

The next algorithm run was XGBoost, and this algorithm uses a technique known as gradient boosting to create a powerful and accurate model.

Gradient boosting works by iteratively building new models on top of previous ones, with each new model trained to correct the remaining error.

Since the whole objective of XGBoost is to minimize the error found on the training set, this algorithm has an occasional tendency to overfit the data and perform subpar on the testing set.
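A sketch of fitting an XGBoost regressor; the shallow trees and small learning rate shown here are illustrative choices that help curb overfitting, not the project's tuned values:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Shallow trees and a small learning rate limit how aggressively each round fits the residuals
booster = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05, random_state=42)
booster.fit(X_train, y_train)
points_pred = booster.predict(X_test)
```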

Neural Nets

Example of a simple feed-forward Neural Network

Next up came the challenge of designing effective and appropriate Neural Networks to understand the data provided.

Neural Networks work quite differently from all of the algorithms mentioned above, but the core makeup of a Neural Network can be described as a series of layers made up of nodes that connect to the nodes of the next layer via weights and activation functions.

More specifically, the value of a node in some hidden layer is defined by taking a linear combination of the values of the nodes in the previous layer (using initially randomized weights) and running the result through an activation function.

The training algorithm behind the Neural Network continuously modifies these initially random weights with the goal of producing outputs close to the provided target outputs.
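The article does not specify the framework used; as one possibility, here is a sketch of a small feed-forward network in Keras with illustrative layer sizes, again using the hypothetical dataset and target column from the earlier sketches:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow import keras

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Two hidden layers; weights start random and are adjusted to reduce the squared error
model = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),                      # single output: predicted points per game
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train.values.astype("float32"), y_train.values,
          epochs=100, batch_size=16, validation_split=0.2, verbose=0)
points_pred = model.predict(X_test.values.astype("float32"))
```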

TPOT

The automated pipeline process behind TPOT

The final algorithm run was TPOT, which is intrinsically quite different from the aforementioned algorithms in the sense that it is really a tool used to find good algorithms and models.

In essence, it uses genetic programming to continuously eliminate models with poor results so that the most successful model is returned.
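A sketch of how TPOT is typically invoked; the generation and population sizes are illustrative, and the file and column names are the same assumptions as above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor

df = pd.read_csv("rookie_training_data.csv")  # hypothetical combined dataset
X, y = df.drop(columns=["nba_PTS"]), df["nba_PTS"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Evolve candidate pipelines for 5 generations, keeping the best-scoring survivors
tpot = TPOTRegressor(generations=5, population_size=50, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline as plain scikit-learn code
```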

Results

At the end of the day, machine learning is really a results-driven game where models that produce higher metrics are significantly more valuable than models that don't.

For the purposes of this project, there were 5 main metrics used to compare the success of the different models built.

Adjusted Testing r²: This statistic measures the adjusted r² value on the testing set. Its value ranges from -inf to 1, with higher values indicating better results.

Cross Validation Score: This statistic is derived by multiplying 100 by the average of the raw r² values produced by running the algorithms on different train-test splits within the dataset. Its value ranges from -inf to 100, with higher values indicating better results.

Percent Very Accurate: A prediction is considered “Very Accurate” if it is within 20% of the actual result. This statistic looks at what percent of the testing set was labeled as “Very Accurate”.

Percent Accurate: A prediction is considered “Accurate” if it lies between 20% and 50% away from the actual result. This statistic looks at what percent of the testing set was labeled as “Accurate”.

Point Differential Error: This statistic looks at what percent of the predictions in the testing set lay fewer than 2 points away from the actual results.
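For reference, here is a sketch of how these metrics could be computed from arrays of true and predicted values; the adjusted r² follows the standard formula, and the thresholds match the definitions above:

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score

def adjusted_r2(y_true, y_pred, n_features):
    """Adjusted r² = 1 - (1 - r²) * (n - 1) / (n - p - 1)."""
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

def accuracy_buckets(y_true, y_pred):
    """Percent of predictions within 20% ('Very Accurate') and 20-50% ('Accurate') of the truth."""
    rel_error = np.abs(y_pred - y_true) / np.abs(y_true)
    very_accurate = np.mean(rel_error <= 0.20) * 100
    accurate = np.mean((rel_error > 0.20) & (rel_error <= 0.50)) * 100
    return very_accurate, accurate

def point_differential(y_true, y_pred, threshold=2.0):
    """Percent of predictions lying fewer than `threshold` points away from the actual value."""
    return np.mean(np.abs(y_pred - y_true) < threshold) * 100

# Cross Validation Score: 100 * the mean r² across several train-test splits, e.g.
# cv_score = 100 * cross_val_score(model, X, y, cv=5, scoring="r2").mean()
```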

Result metrics from the different algorithms

A basic website was created to display the results from the table above in a more in-depth and interactive manner.

Predictions

The final step in this project was to complete the initial objective by using the above insights to predict the stats of the upcoming rookie class.

Out of all the prospective algorithms, the Random Forest regressor was selected to analyze the college data of the incoming rookie class and predict what stats these players will put up once they enter the NBA.

Random Forest was selected because it had the highest Point Differential Error score and the second highest CV score.

Extra Trees was arguably the better algorithm because it had a higher CV score and a higher adjusted r² than Random Forest, but upon closer inspection, Extra Trees didn’t perform as well as Random Forest when it came to analyzing “elite” players.

Defining “elite” players as players in the test set who averaged more than 10 points per game in their rookie year, Random Forest very accurately identified 5 out of 14 players and accurately identified 2 out of 14 players.

Extra Trees, on the other hand, only managed to very accurately identify 3 out of 14 players and accurately identify 2 out of 14 players.

The tables below show the incoming rookies sorted by where they were picked on the left, and the incoming rookies with their predicted stats sorted by overall value on the right.

Overall value is the predicted fantasy point output per game in accordance with these guidelines.

The tables were also broken up by position to allow the reader to easily understand how a player stacks up against other similar players in this particular draft class.

Tables by position: Point Guard, Shooting Guard, Small Forward, Power Forward, Center.

The full table of results generated can be found here.

A simple website was created to display the results from the tables above in a more interactive manner.

This module displays the predicted stats of individual prospective players.

The full website is available here.

Future Work

“It is the innate nature of a data scientist to never truly be fully satisfied with one’s work.” — Neelabh Pant, PhD

This project has a lot of potential in regards to assisting NBA scouts in finding players in the NCAA who can thrive and succeed at the next level.

With that said, though, this project is far from over.

All of the models above can still be further tuned and experimented with to provide better and more accurate results.

For more information about this project, you can refer to my GitHub, and if you have any comments or questions, feel free to post below or send me an email at siddhesvark@gmail.com.
