Getting Started with Microsoft’s Custom Vision API

Hopefully this tutorial will help you use this awesome time-saving API at your next hackathon! You can find the completed code at https://github.com/Cheryl-Lao/Brainihax

Step 1: Set up the project

Visit https://www.customvision.ai/ and sign in with your Azure account. Accept the terms that you definitely read, then click to create a new project. You can name your project anything you like, but the Project Type should be Object Detection and the Domain should be General.

This is because we want to detect whether or not we see eyes with leukocoria in the images.

Step 2: Add and tag images

Click on the "Add images" button and add the images that you want to put in your dataset.

For this project, I just googled “Leukocoria” and went through the image results.

If you’re having trouble finding a large enough dataset, you can mirror and rotate existing images to grow your collection of images.
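For instance, with the Pillow library (pip install Pillow), a quick augmentation helper might look like this; the function name and the file-naming scheme are just illustrative:

```python
import os
from PIL import Image, ImageOps

def augment(path):
    """Save mirrored and rotated copies of the image at `path`
    alongside the original, to grow a small dataset."""
    root, ext = os.path.splitext(path)
    img = Image.open(path)
    # Horizontal mirror
    ImageOps.mirror(img).save(root + '_mirror' + ext)
    # Rotations by 90, 180 and 270 degrees (expand=True keeps the whole image)
    for angle in (90, 180, 270):
        img.rotate(angle, expand=True).save(root + '_rot{}'.format(angle) + ext)
```

Remember that the bounding boxes have to be re-drawn (or transformed to match) on the augmented copies.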

Now, highlight the eyes with leukocoria by clicking and dragging to create a bounding box.

Here, I’ve labelled eyes as L (leukocoria) and N (non-leukocoria).

Repeat this process until you have at least 15 of each kind of eye (15 is the bare minimum that the Custom Vision API will accept; you will probably need over 100 tagged eyes of each type to get a decently accurate model).

Step 3: Train your model

Once you have a sizable training set, click on the "Train" button to train your model.

This might take a few minutes, depending on the number of training images you have.

After training, you should see a page (under the "Performance" tab) that tells you the details of how your model is performing.
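That page reports metrics such as precision and recall. As a quick refresher on what those numbers mean (this is the textbook definition, not necessarily the service's exact computation):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision: of the boxes the model predicted, the fraction that were correct.
    Recall: of the tagged eyes actually present, the fraction the model found."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall
```

A model that draws only a few very confident boxes tends to score high precision but low recall, and vice versa.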

Step 4: Try your predictions

Now that you've trained your model, you can test out its performance on test images, straight from the Custom Vision interface.

Click on "Quick Test" to pull up the testing window, then enter an image URL or upload an image to see the magic!

Step 5: Write a script to access your predictions

For this section, you will need to have Python installed on your computer.

5a) Get your Keys and IDs

Now that you have your model trained, it's time to start writing a Python script to access this data! Let's start off by setting our access keys and query information. You will need the endpoint, the prediction key, and the project and iteration IDs.

To get your keys and IDs, click on “Prediction URL”.

Keep a note of all the information in the “If you have an image URL” section.

In the code below, I extracted the project ID and iteration ID to make it easier to replace later, but you could just keep the endpoint as it is displayed in the window.
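Going the other way, if you have the full URL copied from the window, you could pull the two IDs out of it. parse_prediction_url below is a hypothetical helper, not part of any SDK, assuming the v2.0 URL shape shown in this tutorial:

```python
import re

def parse_prediction_url(url):
    """Extract (project_id, iteration_id) from a Custom Vision v2.0
    prediction URL of the form .../Prediction/<project>/url?iterationId=<iteration>."""
    match = re.search(r'/Prediction/([0-9a-fA-F-]+)/url\?iterationId=([0-9a-fA-F-]+)', url)
    if match is None:
        raise ValueError('Not a recognizable prediction URL: ' + url)
    return match.group(1), match.group(2)
```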

project_id = '922a5f49-caba-4765-9c93-f477802076ef'
iteration_id = '319f3179-6b76-4c44-bfee-2703f527d34d'
prediction_key = '0cda307794064acca38bb0f860932935'
endpoint = 'https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{0}/url?iterationId={1}'.format(project_id, iteration_id)

5b) Set up your headers and body for the HTTP request

# HTTP request to send to the API
headers = {
    # Request headers
    'Prediction-Key': prediction_key,
    'Content-Type': 'application/json'
}

# Gets the first argument as the URL of the picture to process
body = {'url': sys.argv[1]}

5c) Send the request

The code below has a try-except block in case the HTTP request fails for any reason.

It also has a sleep statement so that we don’t try to access the data before it’s ready (there are other ways to do that, but this was the shortest way).
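One of those other ways is a small retry loop: instead of sleeping a fixed three seconds, retry the request until it succeeds. request_with_retries below is a hypothetical helper, not part of the Custom Vision SDK; it takes any zero-argument function that returns a response object with a status_code:

```python
import time

def request_with_retries(send, attempts=3, delay=1.0):
    """Call `send()` up to `attempts` times, waiting `delay` seconds between
    tries, until it returns a 2xx response; re-raise the last failure."""
    last_error = None
    for _ in range(attempts):
        try:
            response = send()
            if str(response.status_code).startswith('2'):
                return response
            last_error = Exception('HTTP ' + str(response.status_code))
        except Exception as exc:
            last_error = exc
        time.sleep(delay)
    raise last_error
```

You would call it with something like request_with_retries(lambda: requests.request('POST', endpoint, json=body, headers=headers)).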

# Try sending the image to the CV API
try:
    print('Getting response.')
    response = requests.request('POST', endpoint, json=body, data=None, headers=headers)

    # 2xx is the success status code
    if not str(response.status_code).startswith("2"):
        parsed = json.loads(response.text)
        print("Error: " + str(response.status_code))
        exit()

    # It will take a little bit of time to load, so just make the user wait
    time.sleep(3)

    # response.text contains the JSON data; the following parses it for use
    parsed = json.loads(response.text)
    predictions = parsed['predictions']
    eye1, eye2 = categorize_eyes(predictions)
    print(report_diagnosis(eye1, eye2))

# Catch any exceptions that might happen
except Exception as e:
    print('Error:')
    print(e)

I wrote some code to take the bounding boxes with the highest confidence ratings and return which eye might have Leukocoria, based on the tags and their confidence levels (categorize_eyes(predictions)).

report_diagnosis(eye1, eye2) tells you if the left or right eye is affected.
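The real implementation lives in the repo, but as a rough sketch of what such helpers could look like (assuming the prediction JSON shape returned by the v2.0 endpoint, where each prediction has a probability, a tagName, and a boundingBox with a left coordinate, and the L/N tags from Step 2):

```python
def categorize_eyes(predictions):
    """Keep the two highest-confidence detections and order them
    left-to-right by the bounding box's horizontal position.
    Assumes at least two detections came back."""
    top_two = sorted(predictions, key=lambda p: p['probability'], reverse=True)[:2]
    eye1, eye2 = sorted(top_two, key=lambda p: p['boundingBox']['left'])
    return eye1, eye2

def report_diagnosis(eye1, eye2):
    """Report which eye (as positioned in the image) was tagged L (leukocoria)."""
    affected = []
    if eye1['tagName'] == 'L':
        affected.append('left')
    if eye2['tagName'] == 'L':
        affected.append('right')
    if not affected:
        return 'No leukocoria detected.'
    return 'Possible leukocoria in the ' + ' and '.join(affected) + ' eye.'
```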

I won't be going into detail on that code because it's not specific to the Custom Vision API, but you can find it at https://github.com/Cheryl-Lao/Brainihax/blob/master/image_identifier.py

5d) Run the script!

Now all you have to do is open up the command line and run the script, giving it an image URL (for example, python image_identifier.py <image-url>).

(Example of script output)

Hope that tutorial helped make computer vision a little bit more accessible!
