How to bring your Data Science Project into production

Using Azure Databricks with Spark, Azure Machine Learning Service and Azure DevOps. By René Bremer, Jan 201.

1. Introduction

A lot of companies struggle to bring their data science projects into production.

A common issue is that the closer the model is to production, the harder it is to answer the following question: why did the model predict this? Explainability is essential for trust in a model and prevents situations in which nobody understands why a prediction was made (e.g. why was this mortgage rejected for this person?).

Explainability of a model is also required by auditors in many industries.



Having a build/release pipeline for these data science projects can help to answer this question and build trust in the model.

It enables you to trace back that version N of the model was deployed to production at time T and was trained on dataset D with algorithm A by person P.


2. Objective

In this tutorial, a build/release pipeline for a machine learning project is created.

In this pipeline, an HTTP endpoint is created that predicts whether the income of a person is higher or lower than 50k per year, using features such as age, hours worked per week and education.

Azure Databricks with Spark, Azure Machine Learning service and Azure DevOps will be used as tooling.

This tutorial is based on the following tutorial, Enable CI/CD for Data Science Projects, which shows how to create an end-to-end CI/CD project for data science, and extends it as follows:

Use SparkML instead of Scikit-learn as the machine learning library
Use Azure Databricks as remote compute

In the remainder of this blog, the following steps will be executed:

3. Prerequisites
4. Create machine learning model in Azure Databricks
5. Manage model in Azure Machine Learning Service
6. Build and release model in Azure DevOps
7. Conclusion


3. Prerequisites

The following is required for this tutorial:

IDE, in this case Visual Studio Code is used
Azure account (needed in parts 4, 5 and 6)
Azure Databricks (needed in parts 4, 5 and 6)
Azure Machine Learning Service (needed in parts 5 and 6)
Azure DevOps account (needed in part 6)
Git (needed in part 6)
Python (optional, only needed for debugging code locally)

4. Create machine learning model in Azure Databricks

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform.

It can be used for many analytical workloads, machine learning among them.

In this step, the following is done in Azure Databricks:

4a. Create new cluster
4b. Import notebook
4c. Run notebook

4a. Create new cluster

Start your Azure Databricks workspace and go to Cluster. Create a new cluster with the following settings:

4a1. Create new cluster

4b. Import notebook

Go to your Azure Databricks workspace, right-click and then select Import. In the radio button, select to import the following notebook using URL: https://raw.….py

See also the picture below:

4b1. Import notebook

4c. Run notebook

Select the notebook you imported in 4b and attach the notebook to the cluster you created in 4a.

Make sure that the cluster is running; if not, start it.

Read the steps in the notebook, in which the data is explored and several settings and algorithms are tried to create a model that predicts the income class of a person.

In case you want to run the entire notebook, select run all.


5. Manage model in Azure Machine Learning Service

Azure Machine Learning Service is a cloud service that you use to train, deploy, automate, and manage machine learning models.

In this context, the model that was created in the previous part will be added to your AMLS instance.

The following steps will be executed:

5a. Look up tenant Id
5b. Create Service Principal
5c. Attach service principal to Azure Machine Learning Service
5d. Import new notebook to Azure Databricks
5e. Review results in Azure Machine Learning Service

5a. Look up your tenant Id

Go to the Azure Portal and click on Azure Active Directory as shown below:

5a1. Copy tenant Id

5b. Create Service Principal

Click on Azure Active Directory as shown below:

5b1. Register app

Select App Registrations and create a new Service Principal.

After this is done, create a new key, which will be used in this script to connect to Azure ML Service.


Create key

Select the Keys tab and copy the key that is generated.


Copy key

Close this tab and copy the application Id from this page; this will be used in the Databricks notebook and later in the Azure DevOps project.


Copy application Id

5c. Attach service principal to Azure Machine Learning Service

You need to provide contributor rights for this Service Principal on the resource group that contains the AML Service.

Therefore, go to the resource group in the Azure Portal in which you created your Azure Machine Learning Service.


Resource group with Azure Machine Learning Service

Click the IAM tab, add the service principal created in the previous step and give it contributor rights on this resource group.


Add contributor rights for SP to AMLS instance

5d. Import notebook with AMLS attached to Azure Databricks

In the previous part of this tutorial, a model was created in Azure Databricks.

In this part you are going to add the created model to Azure Machine Learning Service.

Go to your Databricks workspace again, right-click, select Import and import a notebook using the following URL: https://raw.….py

Again, make sure the notebook is attached to a cluster and that the cluster is running.

5d1. Import AMLS notebook

Replace the variables in the notebook with the values generated in the previous steps.


Add variables notebook

tenant_id="<Enter Your Tenant Id>"
app_id="<Application Id of the SPN you Create>"
app_key="<Key for the SPN>"
workspace="<Name of your workspace>"
subscription_id="<Subscription id>"
resource_grp="<Name of your resource group where aml service is>"

Notice that in a production situation, keys must never be added to a notebook; a secret scope backed by a key vault shall be used instead (see here). This is out of scope for this tutorial and will be dealt with in a next version.

Now run the notebook (either by clicking the Run All button or cell by cell using Shift+Enter).

5e. Review results in Azure Machine Learning Service

In step 5d, a notebook was run in which the results were written to Azure Machine Learning Service.

In this notebook, the following was done:

A new experiment was created in your Azure Machine Learning Service
Within this experiment, a root run with 6 child runs was created, in which the different attempts can be found
A child run contains a description of the model (e.g. Logistic Regression with regularization 0) and the most important logging of the attempt (e.g. accuracy, number of false positives)
The model artifact (.mml) is also part of a child run

The artifact of the best child run can be taken and deployed into production.
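Picking the best child run comes down to comparing the logged metrics. Below is a minimal sketch in plain Python, assuming each child run is summarized as a dictionary with its model description and logged accuracy; the field names here are illustrative, not the exact AMLS API:

```python
# Hypothetical summary of the child runs and the metric logged for each.
child_runs = [
    {"description": "Logistic Regression with regularization 0",   "accuracy": 0.82},
    {"description": "Logistic Regression with regularization 0.5", "accuracy": 0.84},
    {"description": "Decision Tree, depth 5",                      "accuracy": 0.79},
]

def best_child_run(runs, metric="accuracy"):
    """Return the child run with the highest value for the given metric."""
    return max(runs, key=lambda run: run[metric])

best = best_child_run(child_runs)
print(best["description"])  # the attempt whose artifact would be deployed
```

In the real pipeline, the same comparison is done over the metrics logged to AMLS, and the `.mml` artifact of the winning child run is the one registered for deployment.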

Go to your Azure Machine Learning Service instance. Select the experiment name that was used in the notebook.

Find experiment in Azure Machine Learning Service

Now click on the experiment, click on the run and child run you want to see, and find the metrics.


Find metrics of a child run in Azure Machine Learning Service

When you go to Output, you will find the model artifact, which you can also download.

The model artifact of the best run will be used as the basis of the container that is deployed using Azure DevOps in the next part of this tutorial.


Model artifact

6. Build and release model in Azure DevOps

Azure DevOps is the tool to continuously build, test, and deploy your code to any platform and cloud.

In this project, Azure DevOps will be used to deploy the project.

The following needs to be done:

6a. Create Personal Access Token in Databricks
6b. Create Azure DevOps project and add repository
6c. Clone repository to your local pc
6d. Add variables to code
6e. Create build pipeline
6f. Create release pipeline that creates an HTTP endpoint

6a. Create Personal Access Token in Databricks

To run notebooks in Azure Databricks triggered from Azure DevOps (using its REST APIs), a Databricks Personal Access Token (PAT) is required for authentication.

Go to Azure Databricks and click on the person icon in the upper right corner.

Select User Settings and then generate a new token.


Generate Databricks Access Token

Make sure to copy the token now; you won't be able to see it again. The token is needed later to access Databricks from the Azure DevOps build pipeline.
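The pipeline scripts pass this PAT as a Bearer token to the Databricks REST API. A minimal sketch of how such a request could be built with the standard library is shown below; the domain, cluster id and notebook path are illustrative placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

def databricks_request(domain, path, pat_token, payload):
    """Build an authenticated request for the Databricks REST API.

    The Personal Access Token goes into the Authorization header as a
    Bearer token; this is how scripts triggered from Azure DevOps
    authenticate against Databricks.
    """
    url = "https://%s/api/2.0/%s" % (domain, path)
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer %s" % pat_token,
            "Content-Type": "application/json",
        },
    )

# Example: a one-off notebook run via the Jobs API (illustrative values).
req = databricks_request(
    "westeurope.azuredatabricks.net",
    "jobs/runs/submit",
    "<<your PAT>>",
    {
        "run_name": "train income model",
        "existing_cluster_id": "<<cluster id>>",
        "notebook_task": {"notebook_path": "/Shared/trainModel"},
    },
)
# urllib.request.urlopen(req) would submit the run.
```

The actual scripts in this project (e.g. triggerDatabricks.py) follow the same pattern: PAT in the header, JSON payload describing the notebook to run.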

6b. Create Azure DevOps project and add repository

Create a new project in Azure DevOps by following the tutorial Create a project – Azure DevOps.

Click on the repository folder and select to import the following repository: https://github.….git

See also the picture below:

6b1. Add repository to your Azure DevOps project

6c. Clone repository to your local pc

Select Repository again and choose to clone the repository by clicking the Clone button in the upper right corner.


Clone repository

Create a directory on your local pc. Use the following git command to clone the repository:

git clone <<your Azure DevOps repository, something like https://rebremer.….com/devopsai%20databricks%20final/_git/devopsai%20databricks%20final>>

You can also decide to clone the repository directly in Visual Studio Code, the tool that is used to modify the code in the next steps.


6d. Add variables to code

Open the project in the IDE of your choice; in this tutorial, Visual Studio Code is used.

Select the option to open an entire folder.

Then the following files shall be changed:

project/dataprep/prepDatabricks.py
project/deploy/test.py
project/services/triggerDatabricks.py

with the same variables that were also added to the notebook in step 5d (Import notebook with AMLS attached to Azure Databricks).

tenant_id="<Enter Your Tenant Id>"
app_id="<Application Id of the SPN you Create>"
app_key="<Key for the SPN>"
workspace="<Name of your workspace>"
subscription_id="<Subscription id>"
resource_grp="<Name of your resource group where aml service is>"
domain = "westeurope.….net" # change location in case databricks instance is not in westeurope
DBR_PAT_TOKEN = bytes("<<your Databricks Personal Access Token>>", encoding='utf-8')

Also here, notice that in a production situation, keys must never be added to the code. Instead, secret variables in an Azure DevOps pipeline shall be used (see here); this is out of scope for this tutorial and will be dealt with in a next version.
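Secret pipeline variables are typically exposed to the scripts as environment variables. A minimal sketch of reading the configuration from the environment instead of hardcoding it is shown below; the variable names mirror those in the notebook, but this exact mapping is an assumption, not the project's current code:

```python
import os

def load_config(environ=os.environ):
    """Read the AMLS/Databricks configuration from environment variables.

    In an Azure DevOps pipeline, secret variables can be mapped to
    environment variables so that no key ever appears in the code.
    The names used here are illustrative.
    """
    required = ["TENANT_ID", "APP_ID", "APP_KEY",
                "WORKSPACE", "SUBSCRIPTION_ID", "RESOURCE_GRP"]
    missing = [name for name in required if name not in environ]
    if missing:
        raise KeyError("missing configuration: %s" % ", ".join(missing))
    return {name.lower(): environ[name] for name in required}

# Usage: config = load_config(); config["tenant_id"], config["app_key"], ...
```

This keeps the repository free of secrets: the values live only in the pipeline definition (marked as secret) and in the key vault.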

After you have added the variables to the code, the Python files shall be committed back to your repository. This can be done directly in Visual Studio Code by opening the terminal as follows:

6d1. Open terminal in Visual Studio Code for git commands

Then execute the following git commands:

git status
git add .
git commit -m "changed variables"
git push

Finally, if you want to run and debug the code locally, you need to have Python installed. The following library needs to be installed (this can also be done in the same terminal):

pip install --upgrade azureml-sdk[notebooks,automl]

6e. Create build pipeline

In this step, you are going to create a build pipeline. Go to https://visualstudio.com/ and then click on the project you created. Click on Pipelines.

6e1. View pipelines

Click Builds and then select New build pipeline:

6e2. Create new pipeline

As repository, select the repository you have just created. As a template, select an empty template; see below.

6e3. Select empty template

Give a name to your build pipeline.


Name build pipeline

Click on the Agent job and change the name to something meaningful; in this demo, choose Hosted VS2017 as the agent pool.


Agent job

Search for Create Conda in the search box and click Add.


Add Conda Environment to build pipeline

Click on the task that was added, change the display name, check Create a custom environment and give it a name. Also specify the Python version you want to use, in this case 3.



Modify Conda environment

Next, click on the + again on the agent job to add a new task, and this time search for a Bash script.

This script is going to install dependencies required by our data preparation and training scripts.


Add bash script to build pipeline

Select the script to be executed and, under Advanced, set the working directory to the setup folder.


Modify bash

Add the next task: again select + and then search for the Python script task.

This task adds the notebook to Azure Databricks and can also be used to upload data to Azure Databricks DBFS or Azure Data Lake Store (ADLS), which is not done in this tutorial.


Add prep Databricks script

Under Script path, select the script project/dataprep/prepDatabricks.py and set the working directory to project/dataprep as an advanced option.


Script to copy notebook to Azure Databricks

Add a new Python task, select the script project/services/triggerDatabricks.py and set the working directory to project/services as an advanced option.


Script to run notebook in Azure Databricks

This script triggers the model training in Azure Databricks and, when it has been executed successfully, stores the model artifact in Azure ML Service; the best model is also registered in Azure ML Service.
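Conceptually, triggering a notebook run and waiting for it to finish is a polling loop over the run's life-cycle state. A sketch of that logic is shown below, with the state lookup injected as a callable so it can be demonstrated without a live Databricks workspace; the state names follow the documented Databricks run life cycle, but the helper itself is hypothetical, not the code of triggerDatabricks.py:

```python
import time

def wait_for_run(get_state, poll_interval=0.0, max_polls=100):
    """Poll a Databricks run until it reaches a terminal life-cycle state.

    get_state is a callable returning the current life-cycle state
    (e.g. "PENDING", "RUNNING", "TERMINATED"); in a real script it
    would call the Databricks REST API to fetch the run status.
    """
    terminal = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}
    for _ in range(max_polls):
        state = get_state()
        if state in terminal:
            return state
        time.sleep(poll_interval)
    raise TimeoutError("run did not finish within %d polls" % max_polls)

# Simulated run that finishes on the third poll:
states = iter(["PENDING", "RUNNING", "TERMINATED"])
print(wait_for_run(lambda: next(states)))  # TERMINATED
```

Only after the run reaches a terminal state does the pipeline proceed to registering the model and copying the artifacts.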

After this is done, add the task to copy the artifacts.

This step packages our artifacts which can be used for deployment.

Add a new task and then search for copy files.


Copy files as artifact

Add this task and then set the properties as shown:

Source Folder: $(Build.SourcesDirectory)
Target Folder: $(Build.…)

Publish artifact

Next, click on Save and queue to trigger this build pipeline.


Save and queue

Once this is done, you should be able to see the build job; click on Builds to see the list of running and completed builds.


Build history

You can also enable Continuous Integration by going back to the build definition, editing it and, under Triggers, checking the Continuous integration tick box.


Continuous Build history

6f. Create release pipeline that creates an HTTP endpoint

The next part is about creating a release pipeline, which deploys the machine learning model as a container on an Azure Container Instance.

Click on the Releases tab and then on New release pipeline.

6f1. Create new Release pipeline

Select an empty job and click next.

6f2. Select template as release pipeline

Enter a descriptive stage name.


Add stage to release pipeline

Add an artifact that you would like to release in this pipeline.

In this case select the artifact created from our previous build pipeline.


Add Artifact to release pipeline

Now click on Tasks. Add a Conda environment, just like you did before in the build pipeline.


Add Conda environment to release pipeline

The next step is to add the dependencies, just like step 2 of the build pipeline.

Make sure to set the script to be triggered and also the working directory under advanced.


Add Dependencies to release pipeline

The next step is to add a new Python script task. Point the script path to the file deploy.py and set the working directory in the advanced options.


Add deploy script to release pipeline

The previous step deploys the model on an Azure Container Instance; then you need to test whether the endpoint responds to a test call.

That is what you do in the last step.


Add deploy script to release pipeline

Save and close the release pipeline, then create a release.

After a successful run, check the logs in Azure DevOps to see the response of our newly created web service.

You can also view the experiment, model and images in Azure Machine Learning Service, like was done in part 5 (Manage model in Azure Machine Learning Service). Notice that the experiment name "experiment_model_release" was used here.

Finally, you can view the endpoint as an Azure Container Instance in the portal.

Go to your resource group and find the Azure Container Instance.


Azure Container Instance in Resource Group

It has a public IP address assigned, so you can also call the endpoint using a tool like Postman (the payload for calling the endpoint can be found in deploy/test.py, which was used to test the web service).
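Instead of Postman, the endpoint can also be called from Python. Below is a minimal sketch; the scoring URI, the payload layout and the feature names are placeholders of my own choosing, so check deploy/test.py for the exact format the deployed service expects:

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, records):
    """Build a POST request for the deployed scoring endpoint.

    The payload layout (a "data" list of feature records) is an
    assumption; the real payload used by this project is defined
    in deploy/test.py.
    """
    body = json.dumps({"data": records}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Hypothetical record with features from the income dataset:
req = build_scoring_request(
    "http://<aci-public-ip>/score",
    [{"age": 39, "hours_per_week": 40, "education": "Bachelors"}],
)
# urllib.request.urlopen(req).read() would return the prediction.
```

The response then contains the predicted income class for each record sent.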


7. Conclusion

In this tutorial, an end-to-end pipeline for a machine learning project was created. In this pipeline:

Azure Databricks with Spark was used to explore the data and create the machine learning models.

Azure Machine Learning Service was used to log the models and their metrics.

Azure DevOps was used to build an image of the best model and to release it as an endpoint.

This way you can orchestrate and monitor the entire pipeline from idea to the moment that the model is brought into production.

This enables you to answer the question: why did the model predict this?
