How to bring your Data Science project into production

Using Azure Databricks with Spark, Azure Machine Learning Service and Azure DevOps

René Bremer · Jan 20

1. Introduction

A lot of companies struggle to bring their data science projects into production. A common issue is that the closer a model gets to production, the harder it is to answer the following question: why did the model predict this? Explainability is essential for trust in a model and prevents situations in which nobody understands why a prediction was made (e.g. why was this mortgage rejected for this person?). Explainability of a model is also required by auditors in many industries (e.g. finance).

Having a build/release pipeline for these data science projects helps to answer this question and builds trust in the model. It enables you to trace back that version N of a model was deployed into production at time T and was trained on dataset D with algorithm A by person P.

2. Objective

In this tutorial, a build/release pipeline for a machine learning project is created. In it, an HTTP endpoint is created that predicts whether the income of a person is higher or lower than 50k per year, using features such as age, hours of work per week and education. Azure Databricks with Spark, Azure Machine Learning Service and Azure DevOps will be used as tooling.

This tutorial is based on the following tutorial: Enable CI/CD for Data Science Projects ("This tutorial will show you how you can create an end to end CI/CD project for data science", medium.com), and extends on it as follows:
- Use SparkML instead of Scikit-learn as machine learning library
- Use Azure Databricks as remote compute

In the remainder of this blog, the following steps will be executed:
3. Prerequisites
4. Create machine learning model in Azure Databricks
5. Manage model in Azure Machine Learning Service
6. Build and release model in Azure DevOps
7. Conclusion

3. Prerequisites

The following is required for this tutorial:
- IDE; in this case Visual Studio Code is used
- Azure account (needed in parts 4, 5 and 6)
- Azure Databricks (needed in parts 4, 5 and 6)
- Azure Machine Learning Service (needed in parts 5 and 6)
- Azure DevOps account (needed in part 6)
- Git (needed in part 6)
- Python (optional, only needed for debugging code locally)

4. Create machine learning model in Azure Databricks

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform.

It can be used for many analytical workloads, among them machine learning.

In this step, the following is done in Azure Databricks:
4a. Create new cluster
4b. Import notebook
4c. Run notebook

4a. Create new cluster

Start your Azure Databricks workspace and go to Cluster. Create a new cluster with the following settings:

4a1. Create new cluster

4b. Import notebook

Go to your Azure Databricks workspace, right-click and then select Import. In the radio button, select to import the following notebook using URL:

https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/1_IncomeNotebookExploration.py

See also the picture below:

4b1. Import notebook

4c. Run notebook

Select the notebook you imported in 4b and attach it to the cluster you created in 4a.

Make sure that the cluster is running and otherwise start it.

Read the steps in the notebook, in which the data is explored and several settings and algorithms are tried to create a model that predicts the income class of a person.

In case you want to run the entire notebook, select run all.
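For reference, the kind of model the notebook builds looks roughly like the sketch below: a minimal SparkML logistic regression pipeline on the income data. The column names (age, hours_per_week, education, income) and the file path are assumptions for illustration only; the imported notebook in the repository is the authoritative version.

```python
# Minimal sketch of a SparkML income classifier (illustrative only).
# 'spark' is the SparkSession provided by the Databricks notebook.
# Column names and the csv path are assumptions; see the imported
# notebook (1_IncomeNotebookExploration.py) for the real code.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StringIndexer, VectorAssembler

df = spark.read.csv("dbfs:/tmp/income.csv", header=True, inferSchema=True)

label_indexer = StringIndexer(inputCol="income", outputCol="label")
edu_indexer = StringIndexer(inputCol="education", outputCol="education_idx")
assembler = VectorAssembler(
    inputCols=["age", "hours_per_week", "education_idx"],
    outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", regParam=0.0)

pipeline = Pipeline(stages=[label_indexer, edu_indexer, assembler, lr])
train, test = df.randomSplit([0.7, 0.3], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)
```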

5. Manage model in Azure Machine Learning Service

Azure Machine Learning Service is a cloud service that you use to train, deploy, automate, and manage machine learning models. In this context, the model that was created in the previous part will be added to your AMLS instance. The following steps will be executed:
5a. Look up tenant Id
5b. Create Service Principal
5c. Attach Service Principal to Azure Machine Learning Service
5d. Import new notebook to Azure Databricks
5e. Review results in Azure Machine Learning Service

5a. Look up your tenant Id

Go to the Azure Portal and click on Azure Active Directory as shown below:

5a1. Copy tenant Id

5b. Create Service Principal

Click on Azure Active Directory as shown below:

5b1. Register app

Select App Registrations and create a new Service Principal. After this is done, create a new key, which will be used in the script to connect to Azure ML Service.

5b2. Create key

Select the Keys tab and copy the key that is generated.

5b3. Copy key

Close this tab and copy the application Id from this page; it will be used in the Databricks notebook and later in the Azure DevOps project.

5b4. Copy application Id

5c. Attach Service Principal to Azure Machine Learning Service

You need to give this Service Principal contributor rights on the resource group that contains the AML Service. Therefore, go to the resource group in the Azure Portal in which you created your Azure Machine Learning Service.

5c1. Resource group with Azure Machine Learning Service

Click the IAM tab, add the Service Principal created in the previous step and give it contributor rights on this resource group.

5c2. Add contributor rights for SP to AMLS instance

5d. Import notebook with AMLS attached to Azure Databricks

In the previous part of this tutorial, a model was created in Azure Databricks. In this part you are going to add the created model to Azure Machine Learning Service. Go to your Databricks workspace again, right-click, select import and import a notebook using the following URL:

https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/2_IncomeNotebookAMLS.py

Again, make sure it is attached to a cluster and that the cluster is running.

5d1. Import AMLS notebook

Replace the variables in the notebook with the values generated in the previous steps.

5d2. Add variables to notebook

tenant_id="<Enter Your Tenant Id>"
app_id="<Application Id of the SPN you Create>"
app_key="<Key for the SPN>"
workspace="<Name of your workspace>"
subscription_id="<Subscription id>"
resource_grp="<Name of your resource group where aml service is>"

Notice that in a production situation, keys must never be added to a notebook; a secret scope backed by a key vault shall be used instead (see here). This is out of scope for this tutorial and will be dealt with in a next version of it.
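For context, the notebook uses these values roughly as in the sketch below to authenticate against the AMLS workspace with the Service Principal. This is a minimal illustration based on the azureml-sdk, not the exact code of the notebook; the variable names match the placeholders above.

```python
# Minimal sketch: connect to the AMLS workspace with the Service Principal.
# The real code lives in 2_IncomeNotebookAMLS.py in the repository.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

sp_auth = ServicePrincipalAuthentication(
    tenant_id=tenant_id,
    service_principal_id=app_id,
    service_principal_password=app_key)

ws = Workspace.get(
    name=workspace,
    auth=sp_auth,
    subscription_id=subscription_id,
    resource_group=resource_grp)

print(ws.name, ws.location)
```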

Now run the notebook, either by clicking the Run All button or cell by cell using Shift+Enter.

5e. Review results in Azure Machine Learning Service

In step 5d, a notebook was run in which the results were written to Azure Machine Learning Service. In it, the following was done:
- A new experiment was created in your Azure Machine Learning Service.
- Within this experiment, a root run with 6 child runs was created, in which the different attempts can be found.
- A child run contains a description of the model (e.g. Logistic Regression with regularization 0) and the most important logging of the attempt (e.g. accuracy, number of false positives).
- The model artifact (.mml) is also part of a child run; the artifact of the best child run can be taken and deployed into production.
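The logging itself happens inside the notebook; conceptually it looks like the hedged sketch below, which uses the azureml-sdk Run API. The metric values and file names are illustrative, not real results; the exact code is in 2_IncomeNotebookAMLS.py.

```python
# Minimal sketch: log a child run, its metrics and the model artifact to AMLS.
# 'ws' is the Workspace object from the authentication step above.
from azureml.core import Experiment

experiment = Experiment(workspace=ws, name="experiment_model_int")
root_run = experiment.start_logging()

child_run = root_run.child_run(name="logreg_reg_0")
child_run.log("regularization", 0.0)
child_run.log("accuracy", 0.85)          # illustrative value
child_run.log("false_positives", 42)     # illustrative value
child_run.upload_file("outputs/model.mml", "model.mml")  # local artifact file
child_run.complete()

root_run.complete()
```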

Go to your Azure Machine Learning Service instance. Select the experiment name that was used in the notebook (e.g. experiment_model_int).

5e1. Find experiment in Azure Machine Learning Service

Now click on the experiment, then click on the run and child run you want to see and find the metrics.

5e2. Find metrics of a child run in Azure Machine Learning Service

When you go to Output, you will find the model artifact, which you can also download. The model artifact of the best run will be used as the basis of the container that is deployed using Azure DevOps in the next part of this tutorial.

5e3. Model artifact

6. Build and release model in Azure DevOps

Azure DevOps is the tool to continuously build, test, and deploy your code to any platform and cloud. In this project, Azure DevOps will be used to deploy the project. The following needs to be done:
6a. Create Personal Access Token in Databricks
6b. Create Azure DevOps project and add repository
6c. Clone repository to your local pc
6d. Add variables to code
6e. Create build pipeline
6f. Create release pipeline that creates an HTTP endpoint

6a. Create Personal Access Token in Databricks

To run notebooks in Azure Databricks triggered from Azure DevOps (using REST APIs), a Databricks Personal Access Token (PAT) is required for authentication.

Go to Azure Databricks and click the person icon in the upper right corner.

Select User Settings and then generate a new token.

6a1. Generate Databricks Access Token

Make sure to copy the token now; you won't be able to see it again. The token is needed to access Databricks from the Azure DevOps build pipeline later.
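As a hedged illustration of how such a token is used: the scripts in the repository call the Databricks REST API with the PAT as a bearer token, roughly as below. The endpoint shown here (listing clusters) is only an example of the authentication pattern; the project's Python scripts call other endpoints.

```python
# Minimal sketch: call the Databricks REST API with a Personal Access Token.
import requests

domain = "westeurope.azuredatabricks.net"   # adjust to your Databricks region
dbr_pat_token = "<<your Databricks Personal Access Token>>"

response = requests.get(
    f"https://{domain}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {dbr_pat_token}"})
response.raise_for_status()
print(response.json())
```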

6b. Create Azure DevOps project and add repository

Create a new project in Azure DevOps by following the tutorial below: Create a project – Azure DevOps ("Create a project where developers and teams can plan, track progress, and collaborate on building software solutions", docs.microsoft.com).

Click on the repository folder and select to import the following repository:

https://github.com/rebremer/devopsai_databricks.git

See also the picture below:

6b1. Add repository to your Azure DevOps project

6c. Clone repository to your local pc

Select Repository again and choose to clone the repository by clicking the Clone button in the upper right corner.

6c1. Clone repository

Create a directory on your local pc and use the following git command to clone the repository:

git clone <<your Azure DevOps repository, something like https://rebremer.visualstudio.com/devopsai%20databricks%20final/_git/devopsai%20databricks%20final>>

You can also decide to clone the repository directly in Visual Studio Code, the tool that is used to modify the code in the next steps.

6d. Add variables to code

Open the project in the IDE of your choice; in this tutorial Visual Studio Code is used. Select the option to open an entire folder. Then the following files shall be changed:
- project/dataprep/prepDatabricks.py
- project/deploy/deploy.py
- project/deploy/test.py
- project/services/triggerDatabricks.py

Use the same variables that were also added to the notebook in step 5d. Import notebook with AMLS attached to Azure Databricks:

tenant_id="<Enter Your Tenant Id>"
app_id="<Application Id of the SPN you Create>"
app_key="<Key for the SPN>"
workspace="<Name of your workspace>"
subscription_id="<Subscription id>"
resource_grp="<Name of your resource group where aml service is>"
domain = "westeurope.azuredatabricks.net" # change location in case databricks instance is not in westeurope
DBR_PAT_TOKEN = bytes("<<your Databricks Personal Access Token>>", encoding='utf-8') # adding b'

Also here, notice that in a production situation keys must never be added to code. Instead, secret variables in an Azure DevOps pipeline shall be used (see here); this is out of scope for this tutorial and will be dealt with in a next version of it.
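As a hedged pointer in that direction: an Azure DevOps secret variable can be mapped to an environment variable of a pipeline task, so the Python scripts read it at run time instead of hard-coding it. The variable name below is only an example.

```python
# Minimal sketch: read a secret from an environment variable instead of
# hard-coding it. In Azure DevOps, map the secret pipeline variable to an
# environment variable of the task; the name used here is illustrative.
import os

DBR_PAT_TOKEN = bytes(os.environ["DBR_PAT_TOKEN"], encoding="utf-8")
```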

After you added the variables to the code, the python files shall be committed back to your repository.

This can be done directly in Visual Studio Code by opening the terminal as follows:

6d1. Open terminal in Visual Studio Code for git commands

Then execute the following git commands:

git status
git add .
git commit -m "changed variables"
git push

Finally, if you want to run and debug the code locally, you need to have Python installed. The following library needs to be installed (this can also be done in the same terminal):

pip install --upgrade azureml-sdk[notebooks,automl]

6e. Create build pipeline

In this step, you are going to create a build pipeline. Go to https://visualstudio.microsoft.com/ and then click on the project which you have created.

Click on Pipelines.

6e1. View pipelines

Click Builds and then select New build pipeline:

6e2. Create new pipeline

As repository, select the repository you have just created. As a template, select an empty template, see below.

6e3. Select empty template

Give a name to your build pipeline.

6e4. Name build pipeline

Click on the Agent Job, change the name to something meaningful and, for this demo, choose Hosted VS2017 as the agent pool.

6e5. Agent job

Search for Create Conda in the search box and click Add.

6e6. Add Conda Environment to build pipeline

Click on the task that was added, change the display name, check Create a custom environment and give it a name. Also specify the Python version you want to use, in this case 3.6.

6e7. Modify Conda environment

Next, click on the + on the agent again to add a new task and this time search for a Bash script task. This script is going to install the dependencies required by our data preparation and training scripts.

6e8. Add bash script to build pipeline

Select the script to be executed and, under Advanced, set the working directory to the setup folder.

6e9. Modify bash

Add the next task: again select + and then search for the Python script task. This task adds the notebook to Azure Databricks and can also be used to upload data to Azure Databricks DBFS or Azure Data Lake Store (ADLS), which is not done in this tutorial.

6e10. Add prep Databricks script

Under the script path, select the script project/dataprep/prepDatabricks.py and set the working directory to project/dataprep as advanced option.
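For context, a script that copies a notebook to the Databricks workspace typically uses the Workspace API with the PAT, roughly as in the hedged sketch below. The notebook path and target folder are illustrative; prepDatabricks.py in the repository is the authoritative version.

```python
# Minimal sketch: upload a local notebook to the Databricks workspace via
# the Workspace API (workspace/import). Paths are illustrative; see
# project/dataprep/prepDatabricks.py for the project's own script.
import base64
import requests

domain = "westeurope.azuredatabricks.net"
dbr_pat_token = "<<your Databricks Personal Access Token>>"

with open("../modelling/2_IncomeNotebookAMLS.py", "rb") as f:
    notebook_content = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    f"https://{domain}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {dbr_pat_token}"},
    json={
        "path": "/2_IncomeNotebookAMLS",
        "format": "SOURCE",
        "language": "PYTHON",
        "overwrite": True,
        "content": notebook_content,
    })
response.raise_for_status()
```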

6e11. Script to copy notebook to Azure Databricks

Add a new Python script task; this time select the script that runs the notebook, project/services/triggerDatabricks.py, and set the working directory to project/services as advanced option.

6e12. Script to run notebook in Azure Databricks

This script triggers the model training in Azure Databricks; when it has been executed successfully, the model artifact is stored in Azure ML Service and the best model is registered there.
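A hedged sketch of how such a script can trigger a notebook run via the Databricks Jobs API is shown below; the cluster specification, notebook path and polling logic are simplified compared to the project's triggerDatabricks.py.

```python
# Minimal sketch: trigger a notebook run in Databricks via the Jobs API
# (runs/submit) and wait for the result. Notebook path and cluster spec
# are illustrative; the project's own script may differ.
import time
import requests

domain = "westeurope.azuredatabricks.net"
dbr_pat_token = "<<your Databricks Personal Access Token>>"
headers = {"Authorization": f"Bearer {dbr_pat_token}"}

submit = requests.post(
    f"https://{domain}/api/2.0/jobs/runs/submit",
    headers=headers,
    json={
        "run_name": "train_income_model",
        "new_cluster": {
            "spark_version": "5.1.x-scala2.11",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 1,
        },
        "notebook_task": {"notebook_path": "/2_IncomeNotebookAMLS"},
    })
submit.raise_for_status()
run_id = submit.json()["run_id"]

# Poll until the run reaches a terminal state.
while True:
    state = requests.get(
        f"https://{domain}/api/2.0/jobs/runs/get",
        headers=headers, params={"run_id": run_id}).json()["state"]
    if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        print(state.get("result_state"))
        break
    time.sleep(30)
```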

After this is done, add the task to copy the artifacts. This step packages our artifacts so they can be used for deployment. Add a new task and search for Copy Files.

6e13. Copy files as artifact

Add this task and set the properties as shown:
Source Folder: $(Build.SourcesDirectory)
Target Folder: $(Build.ArtifactStagingDirectory)

6e14. Publish artifact

Next, click on Save & queue to trigger this build pipeline.

6e15. Save and queue

Once this is done you should be able to see the build job; click on Builds to see the list of builds running and completed.

6e16. Build history

You can also enable Continuous Integration by going back to the build definition, editing it and, under Triggers, checking the continuous integration tick box.

6e17. Continuous Build history

6f. Create release pipeline that creates an HTTP endpoint

The next part is about creating a release pipeline. It deploys the machine learning model as a container on an Azure Container Instance. Click on the Releases tab and then on New release pipeline.

6f1. Create new Release pipeline

Select an empty job and click next.

6f2. Select template as release pipeline

Enter a descriptive stage name.

6f3. Add stage to release pipeline

Add an artifact which you would like to release in this pipeline. In this case, select the artifact created by our previous build pipeline.

6f4. Add Artifact to release pipeline

Now click on Tasks. Add a Conda environment, just like you did before in the build pipeline.

6f5. Add Conda environment to release pipeline

The next step is to add the dependencies, just like step 2 of the build pipeline. Make sure to set the script to be triggered and also the working directory under Advanced.

6f6. Add Dependencies to release pipeline

The next step is to add a new Python script task. Point the script path to the file named deploy.py and also set the working directory in the advanced options.
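Conceptually, a deployment script like deploy.py retrieves the registered model and deploys it as an ACI web service with the azureml-sdk, roughly as in the hedged sketch below. The model, image and service names, as well as the score.py and conda file names, are illustrative assumptions; project/deploy/deploy.py is the authoritative script.

```python
# Minimal sketch: deploy a registered model as an Azure Container Instance
# web service with the azureml-sdk. Names and the scoring/conda files are
# illustrative; tenant_id, app_id, app_key etc. are the variables from 6d.
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
from azureml.core.image import ContainerImage
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice, Webservice

sp_auth = ServicePrincipalAuthentication(tenant_id, app_id, app_key)
ws = Workspace.get(name=workspace, auth=sp_auth,
                   subscription_id=subscription_id, resource_group=resource_grp)

model = Model(ws, name="income_model")  # the model registered from the best run

image_config = ContainerImage.image_configuration(
    execution_script="score.py",          # scoring entry script (assumed name)
    runtime="spark-py",
    conda_file="conda_dependencies.yml")  # assumed conda file name

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Webservice.deploy_from_model(
    workspace=ws, name="income-aci-service",
    models=[model], image_config=image_config, deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```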

6f6. Add deploy script to release pipeline

The previous step deploys the model on an Azure Container Instance; next, you need to test whether the endpoint responds to a test call. That is what you do in the last step.

6f6. Add test script to release pipeline

Save and close the release pipeline, then release the pipeline.

After a successful run check the logs to see the response of our newly created web service in Azure DevOps.

You can also view the experiment, model and images in Azure Machine Learning Service, as was done in part 5 (Manage model in Azure Machine Learning Service). Notice that the experiment name “experiment_model_release” was used here.

Finally, you can view the endpoint as an Azure Container Instance in the portal.

Go to your resource group and find the Azure Container Instance.

6f7. Azure Container Instance in Resource Group

It has a public IP address assigned, so you can also call the endpoint using a tool like Postman (the payload for calling the endpoint can be found in deploy/test.py, which was used to test the web service).
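A hedged sketch of such a test call is shown below; the exact JSON payload depends on the features the deployed model expects and is defined in deploy/test.py, so the URI and field names here are illustrative only.

```python
# Minimal sketch: call the scoring endpoint of the ACI web service.
# The URI and the payload fields are illustrative; deploy/test.py in the
# repository contains the payload that matches the deployed model.
import json
import requests

scoring_uri = "http://<aci-public-ip>:80/score"   # from the ACI instance
payload = {"data": [{"age": 39, "hours_per_week": 40, "education": "Bachelors"}]}

response = requests.post(
    scoring_uri,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"})
print(response.status_code, response.text)
```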

7. Conclusion

In this tutorial, an end to end pipeline for a machine learning project was created. In it:
- Azure Databricks with Spark was used to explore the data and create the machine learning models.
- Azure Machine Learning Service was used to log the models and their metrics.
- Azure DevOps was used to build an image of the best model and to release it as an endpoint.

This way you can orchestrate and monitor the entire pipeline from idea to the moment that the model is brought into production, which enables you to answer the question: why did the model predict this?
