Simple way to deploy machine learning models to cloud

It’s easy to see that having the web-service running only on a local machine would be a very bad idea: nobody else could reach it.

So, we wish to host the web-service somewhere on the internet to fulfil the requirements we listed.

For this blog, we choose to host our service on an AWS ec2 instance.

As a prerequisite, one needs to have an AWS account for using the ec2 instance.

For new users, there are several AWS resources that are available for free for a period of 1 year (usually up to some limit).

In this blog, I will be using a ‘t2.micro’ EC2 instance type, which is free tier eligible.

For users who have exhausted their AWS free-tier period, this instance costs around 1 cent (USD) per hour at the time of writing; a negligible amount to pay.

Let’s start with the process.

Log into the AWS management console and search for ‘ec2’ in the search bar to navigate to the EC2 dashboard.

Search for ec2 on the aws management console

Look for the pane below, select ‘Key Pairs’ and create one.

Select Key Pairs for looking at existing key pairs and creating new ones

This will download a ‘.pem’ file that is the key.

Save this file somewhere safe.

Now navigate to the location of this file on your system and issue the command below, with the key file name replaced by yours:

chmod 400 key-file-name.pem

This command changes the permissions on your key pair file to private. The use of key pairs will be explained later.

Next, click ‘Launch Instance’ on the EC2 dashboard:

Launch ec2 instance

Choose an Amazon Machine Image (AMI) from the list of options.

An AMI determines the OS that the VM will be running (plus some other stuff we don’t care about at this point).

For this blog, I chose ‘Amazon Linux 2 AMI’ which was the default selection.

Choosing AMI

The next screen allows you to select the instance type.

This is where the hardware part of the VM can be selected.

As mentioned previously, we will work with the ‘t2.micro’ instance.

Selecting instance type

You can select ‘Review and Launch’, which takes you to the ‘Step 7: Review Instance Launch’ screen.

Here, you need to click the ‘Edit Security Groups’ link:

Security Groups

You now have to modify the security group to allow HTTP traffic on port 80 of your instance, so that it is accessible to the outside world.

This can be done by creating a rule.

At the end, you should end up with a screen like this:

Adding HTTP rule to security group

In the absence of this rule, your web-service will never be reachable.

For more on security groups and their configuration, refer to the AWS documentation.
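As a side note, the same HTTP rule can also be created from the command line with the AWS CLI; a sketch, assuming the security group ID shown is a placeholder for yours:

```shell
# Allow inbound HTTP (port 80) from anywhere on the given security group.
# Replace sg-0123456789abcdef0 with your instance's security group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
```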

Clicking the ‘Launch’ button will lead to a pop-up asking you to confirm that you have a key pair.

Use the name of the key pair that was generated earlier and launch the VM.

You will be redirected to a launch screen:

Launch Status of ec2 instance

Use the ‘View Instances’ button to navigate to a screen that displays the EC2 instance being launched.

When the instance state turns to ‘running’, it is ready to be used.

We will now ssh into the EC2 machine from our local terminal using the command below, with public-dns-name replaced by your instance’s public DNS name (of the form ec2-x-x-x-x.compute-1.amazonaws.com) and the path replaced by that of the key pair ‘.pem’ file you saved earlier:

ssh -i /path/my-key-pair.pem ec2-user@public-dns-name

This will get us into the prompt of our instance, where we’ll first install docker.

This is required for our workflow, since we will build the docker image within the ec2 instance (there are better, but slightly more complicated, alternatives to this step).

For the AMI we selected, the following commands can be used:

sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user

For an explanation of the commands, check the documentation.

Log out of the ec2 instance using the ‘exit’ command and log back in again using the ssh command.

Check if docker works by issuing the ‘docker info’ command.

Log out again or open another terminal window.

Now let’s copy the files we need to build the docker image within the ec2 instance.

Issue the command from your local terminal (not from within ec2):

scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user

We need to copy requirements.txt, app.py, the trained model file, and the Dockerfile in order to build the docker image as was done earlier.
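Put concretely, all four files can be copied in one go; a sketch, where model.pkl is an assumed name for the trained model file:

```shell
# Copy the four files needed for the docker build to the instance's home directory.
scp -i /path/my-key-pair.pem requirements.txt app.py model.pkl Dockerfile \
    ec2-user@public-dns-name:/home/ec2-user
```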

Log back into the ec2 instance and issue ‘ls’ command to see if the copied files exist.
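For reference, a minimal Dockerfile for a service like this might look roughly as follows. This is a sketch, not the exact file built earlier in the series; model.pkl and the Python entry point are assumptions:

```dockerfile
FROM python:3.8-slim
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the Flask app and the trained model into the image
COPY app.py model.pkl ./
# The service listens on port 80, per the security group rule above
EXPOSE 80
CMD ["python", "app.py"]
```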

Next, build and run the docker image using the exact same commands that were used on the local system (use port 80 at all locations in the code/commands this time).
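The build-and-run step might look like the following sketch; the image name ml-app is an assumption, so use whatever name you used locally, and note that the app must listen on port 80 inside the container for this port mapping to work:

```shell
# Build the image from the Dockerfile in the current directory, then run it
# detached, mapping port 80 of the instance to port 80 of the container.
docker build -t ml-app .
docker run -d -p 80:80 ml-app
```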

Hit the home endpoint from your browser using the public DNS name to see the familiar ‘Hello World!’ message:

Home endpoint works from the browser (I used my ec2 public-dns-name in the address bar)

Now send a curl request to your web-service from the local terminal with your test sample data, after replacing public-dns-name with yours:

curl -X POST public-dns-name:80/predict -H 'Content-Type: application/json' -d '[5.9,3.0,5.1,1.8]'

This should get you the same predicted class label as the one you got locally. And you are done! You can now share this curl request with anyone who wishes to consume your web-service with their test samples.
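If your consumers prefer Python over curl, a small client sketch using only the standard library could look like this; predict is a hypothetical helper name, not part of the service:

```python
import json
import urllib.request

def predict(host, sample):
    """POST a feature vector to the /predict endpoint and return the raw response body."""
    req = urllib.request.Request(
        f"http://{host}/predict",
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Example usage (replace with your instance's public DNS name):
# print(predict("public-dns-name:80", [5.9, 3.0, 5.1, 1.8]))
```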

When you no longer need the web-service, do not forget to stop or terminate the ec2 instance:

Stop or terminate the ec2 instance to avoid getting charged

Some additional thoughts

This is a super basic workflow intended for ML practitioners itching to go beyond model development.

A huge number of things would need to change to turn this into a real production system.

Some suggestions (far from complete):

- Use a Web Server Gateway Interface (WSGI) server such as gunicorn for the flask app. Bonus points for using an nginx reverse proxy and async workers.

- Improve the security of the ec2 instance: the service is currently open to the entire world. Suggestion: restrict access to a set of IPs.

- Write test cases for the app: software without testing = shooting yourself in the foot and then throwing yourself into a cage full of hungry lions, all the while being pelted with stones. (Moral: do not ship production software without thoroughly testing it first.)

A lot more could be added to the above list, but maybe that’s something for another blog post.
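On the testing point, here is a minimal sketch of what a unit test could look like, with the prediction logic factored out of app.py so it can be exercised without a running server, and the trained model replaced by a stub (all names here are assumptions, not the actual code from this series):

```python
def predict_label(features, model):
    """Return the class label the model assigns to a single feature vector."""
    return model.predict([features])[0]

class StubModel:
    """Stands in for the trained model that app.py would load from disk."""
    def predict(self, batch):
        # Always predicts class 2, regardless of input
        return [2 for _ in batch]

def test_predict_label_returns_single_class():
    assert predict_label([5.9, 3.0, 5.1, 1.8], StubModel()) == 2
```

Tests like this run in milliseconds and catch breakage in the request-handling logic before it ever reaches the EC2 instance.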

Github repo: https://github.com/tanujjain/deploy-ml-model

It would be great to hear your feedback!
