Tutorial: Build your own custom real-time object classifier

We will be using BeautifulSoup and Selenium to scrape training images from Shutterstock, Amazon's Mechanical Turk (or BBox Label Tool) to label images with bounding boxes, and YOLOv3 to train our custom detection model.

Step 1: Scraping training images

Go grab a cup of tea ☕ while waiting… oh wait, it's already done!

After Step 1, you should have your raw training images ready to be labeled.

Step 2: Labeling images with bounding boxes

Option A: Amazon Mechanical Turk

Creating an image list

Open Terminal in the directory where you have placed all your photos and create a list of all the filenames of your images:

ls > image.list

Generating HITs

In generate.py:

- Change C:/Users/David/autoturk/image.list in line 9 to the local path of your list of image filenames.
- Change drone-net of https://s3.us-east-2.amazonaws.com/drone-net/ in line 12 to the name of the Amazon S3 bucket where you have uploaded your images.
- Change [Your_access_key_ID] in line 14 to your access key ID.
- Change [Your_secret_access_key] in line 15 to your secret access key.
- Change drone of LayoutParameter("objects_to_find", "drone") in line 19 to your object name.
- Change [Your_hit_layout] in line 22 to your HIT's Layout ID.
- Change [Your_hit_type] in line 24 to your HIT's HITType ID.
- (Optional) If you are using Sandbox mode, change mechanicalturk.amazonaws.com in line 16 to http://mechanicalturk.sandbox.amazonaws.com, and change https://www.mturk.com/mturk/preview?groupId= in lines 30 and 31 to https://workersandbox.mturk.com/mturk/preview?groupId=.
- If you are using the normal (non-sandbox) mode, remember to charge up your account balance to pay your hardworking workers!

Then open Terminal in the directory of generate.py and run:

python generate.py

Retrieving HITs

In retrieve.py:

- Change C:/Users/David/autoturk/hit-id.list in line 16 to the local path of your generated list of HIT IDs.
- Change C:/Users/David/autoturk/image.list in line 17 to the local path of your list of image filenames.
- Change [Your_access_key_ID] in line 21 to your access key ID.
- Change [Your_secret_access_key] in line 22 to your secret access key.
- Change C:/Users/David/autoturk/labels/ in line 34 to the local path of the directory in which you plan to save the labels (.txt files) for each image.
- Change drone-net of https://s3.us-east-2.amazonaws.com/drone-net/ in line 48 to the name of the Amazon S3 bucket where you have uploaded your images.
- (Optional) If you are using Sandbox mode, change mechanicalturk.amazonaws.com in line 23 to http://mechanicalturk.sandbox.amazonaws.com.
- (Optional) If you would like to retrieve all annotation .txt files at once without visualization, comment out lines 48 to 61.

Then open Terminal in the directory of retrieve.py and run:

python retrieve.py

Converting for YOLO

You need to convert the generated labels (.txt files) into a format compatible with YOLO. In format.py:

- Change C:/Users/David/autoturk/labels/labels.list in line 4 to the local path of your list of label filenames.
- Change C:/Users/David/autoturk/images/ in line 7 to the local path of the directory where you have placed your images.
- Change C:/Users/David/autoturk/yolo-labels/ in line 11 to the local path of the directory where you will store your reformatted labels.
- Change C:/Users/David/autoturk/labels/ in line 12 to the local path of the directory where you have placed your labels.

Then open Terminal in the directory of format.py and run:

python format.py

Option B: BBox Label Tool

Annotating

First, move all your training images to under /Images/001 in the directory of BBox Label Tool. Then, open Terminal in the directory of main.py and run:

python main.py

For more information, please follow the BBox Label Tool usage guidelines.

Converting for YOLO

As with the Mechanical Turk labels, you need to convert the generated labels (.txt files) into a format compatible with YOLO: edit and run format.py exactly as described in the Converting for YOLO step above. A minimal sketch of what this conversion boils down to is shown below.
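Whichever labeling route you take, the target format is the same: YOLO expects one .txt file per image, with one line per object of the form <class_id> <x_center> <y_center> <width> <height>, where the center and size are normalized by the image width and height. The snippet below is only an illustrative sketch of that conversion, not the tutorial's actual format.py; it assumes each raw label line stores pixel coordinates as "xmin ymin xmax ymax", that images are .jpg files, and that there is a single class with ID 0.

```python
# yolo_convert_sketch.py -- illustrative only; the directory names and the
# raw label format ("xmin ymin xmax ymax" in pixels, one box per line) are assumptions.
import os
from PIL import Image  # used only to read each image's width and height

IMAGE_DIR = "images/"       # hypothetical: directory with the .jpg training images
LABEL_DIR = "labels/"       # hypothetical: raw labels, one .txt file per image
OUT_DIR = "yolo-labels/"    # hypothetical: where the YOLO-format labels are written

os.makedirs(OUT_DIR, exist_ok=True)

for name in os.listdir(LABEL_DIR):
    if not name.endswith(".txt"):
        continue
    stem = os.path.splitext(name)[0]
    # read the image dimensions so the box coordinates can be normalized
    with Image.open(os.path.join(IMAGE_DIR, stem + ".jpg")) as im:
        img_w, img_h = im.size

    yolo_lines = []
    with open(os.path.join(LABEL_DIR, name)) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4:
                continue  # skip headers or malformed lines
            xmin, ymin, xmax, ymax = map(float, parts)
            # YOLO wants the box center and size, each normalized to [0, 1]
            x_center = (xmin + xmax) / 2.0 / img_w
            y_center = (ymin + ymax) / 2.0 / img_h
            box_w = (xmax - xmin) / img_w
            box_h = (ymax - ymin) / img_h
            # class ID 0 because this sketch assumes a single-class (e.g. drone) detector
            yolo_lines.append(f"0 {x_center:.6f} {y_center:.6f} {box_w:.6f} {box_h:.6f}")

    with open(os.path.join(OUT_DIR, name), "w") as f:
        f.write("\n".join(yolo_lines) + "\n")
```

Whatever tool you use, make sure each image ends up with a label file of the same base name, since that is how Darknet pairs images with their labels.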
After Step 2, you should have your images labeled and ready to be trained on YOLO.

Step 3: Training a custom YOLOv3 model

This tutorial assumes that you already have labeled images for training or have completed Step 2.

The reason we use YOLO: it is even faster than conventional detection approaches. Instead of applying the model to an image at multiple locations and scales, like conventional approaches, YOLO applies a single neural network to the full image for both classification and localization.

(Image credit: Ayoosh Kathuria)

YOLOv3 uses a custom variant of the Darknet architecture, Darknet-53, a 53-layer network trained on ImageNet, a large-scale database of images labeled with Mechanical Turk (which is what we used for labeling our images in Step 2!).

Installing Darknet

Open Terminal in your working directory, clone the Darknet repository, and build it:

git clone https://github.com/pjreddie/darknet
cd darknet
make

Then run:

./darknet

and you should see the output:

usage: ./darknet <function>

If you are using your GPU (instead of your CPU) for training and have CUDA configured correctly, open the Makefile with a text editor:

gedit Makefile

(or "vi Makefile" if you are a l337 h4x0r)

Then set GPU=1, save the file, and run make again. Remember to run make every time you make changes to files!

Creating train.txt and test.txt

These files list the paths of the images, split in a 9:1 ratio of training images to testing images (a minimal sketch of one way to generate them appears at the end of this section). Finally, move train.txt and test.txt to the Darknet directory.

Downloading pre-trained weights

Save the weights for the convolutional layers to the Darknet directory:

wget https://pjreddie.com/media/files/darknet53.conv.74

Before training on our labeled images, we need to first define three files for YOLO: .data, .names, and .cfg.
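To make the first two of those concrete, here is a rough illustration of what they typically look like for a single-class detector such as the drone example used earlier; the file names (obj.data, obj.names) and paths below are placeholders rather than anything this tutorial's scripts prescribe.

```
# obj.data -- points Darknet at the class count, the image lists, and a backup folder
classes = 1
train  = train.txt
valid  = test.txt
names  = obj.names
backup = backup/
```

obj.names simply lists one class name per line, in the order of the class IDs used in the labels; for a single-class drone detector it contains just the line "drone". The .cfg file is usually a copy of Darknet's cfg/yolov3.cfg in which, for one class, classes is set to 1 in each [yolo] layer and filters is set to (classes + 5) * 3 = 18 in the [convolutional] layer immediately preceding each [yolo] layer.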

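Referring back to the train.txt and test.txt step above: the exact helper used to build those lists is not shown here, but the idea is a straightforward 9:1 split of image paths. The sketch below is illustrative only; the image directory name and the .jpg extension are assumptions.

```python
# split_sketch.py -- write image paths into train.txt / test.txt at a 9:1 ratio
import glob
import os

IMAGE_DIR = "images"  # hypothetical: folder containing the labeled training images

# collect the images in a stable order so the split is reproducible
images = sorted(glob.glob(os.path.join(IMAGE_DIR, "*.jpg")))

with open("train.txt", "w") as train_f, open("test.txt", "w") as test_f:
    for i, path in enumerate(images):
        # every 10th image goes to the test list, the other nine to the train list
        target = test_f if i % 10 == 0 else train_f
        target.write(os.path.abspath(path) + "\n")
```

Move the two resulting files into the Darknet directory, as the step above notes; they are also what the train and valid entries of the .data file point to.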