Python plays Counter Strike GO (Part 1)

Hrishikesh Saikia · Jun 11

Counter Strike is one of the most popular first-person shooter games.

The game pits two teams against each other: the Terrorists and the Counter-Terrorists.

Both sides are tasked with eliminating the other while also completing separate objectives.

After spending several hundred hours killing Terrorists and defusing bombs, I thought: why not create a bot and see how well it fares against human players?

(PS: We don't encourage cheating or the use of aimbots and other hacks. Keep the game clean.)

In this article we will train our own convolutional neural network object detection classifier from scratch that can detect the players in the game.

In the subsequent articles, we will use OpenCV and PyautoGUI to automate the bot.

Prerequisites:

1. TensorFlow-GPU: We will not go into detail on how to install this. Follow this beautifully explained video and you are good to go.
2. Set up the TensorFlow directory and an Anaconda virtual environment.

The Process:

Part 1: Detection

We will first create an object classifier that can detect the Counter-Terrorist and Terrorist players in the game.

For this, we created a dataset of players by combining photos from a dataset we found online with photos we gathered ourselves (in-game screenshots and images from Google).

The dataset is available here: https://github.com/Hrishi321/Python-Plays-CS.

Download the TensorFlow Object Detection API GitHub repository: https://github.com/tensorflow/models. Create a folder directly in C:\ and name it “tensorflow1”.

This working directory will contain the full TensorFlow object detection framework, as well as our training images, training data, trained classifier, configuration files, and everything else needed for the object detection classifier.

Download an object detection model according to your needs from the tensorflow/models repository.

TensorFlow provides several object detection models (pre-trained classifiers with specific neural network architectures) in its model zoo.

Some models (such as the SSD-MobileNet model) have an architecture that allows for faster detection but with less accuracy, while some models (such as the Faster-RCNN model) give slower detection but with more accuracy.

We are using the Faster-RCNN-Inception-V2 model.

Extract the downloaded faster_rcnn_inception_v2_coco_2018_01_28.tar.gz file (in our case) into the C:\tensorflow1\models\research\object_detection folder.

Download this wonderful repository and extract all of its contents directly into C:\tensorflow1\models\research\object_detection: https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10

Many of our object detection scripts are taken from here, with slight modifications to suit our requirements.

Since we want to train our own object detector, delete the following files (do not delete the folders):

1. All files in object_detection\images\train and object_detection\images\test
2. The “test_labels.csv” and “train_labels.csv” files in object_detection\images
3. All files in object_detection\training
4. All files in object_detection\inference_graph

Annotate the images using LabelImg.

This process is basically drawing boxes around your objects in an image.

LabelImg GitHub link & LabelImg download link

LabelImg in action.

LabelImg saves a .xml file containing the label data for each image. These .xml files will be used to generate TFRecords, which are one of the inputs to the TensorFlow trainer. Once you have labeled and saved each image, there will be one .xml file for each image in the \test and \train directories.
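Each of these .xml files is a small Pascal VOC-style document. As a minimal sketch, here is how one can be read back with the standard library; the annotation below is a made-up example (the label 'c' follows the class names used later in this article):

```python
import xml.etree.ElementTree as ET

# A hypothetical LabelImg annotation for one screenshot; real files in
# images\train and images\test follow this Pascal VOC layout.
SAMPLE_XML = """<annotation>
    <filename>ct_screenshot_001.jpg</filename>
    <size><width>1920</width><height>1080</height><depth>3</depth></size>
    <object>
        <name>c</name>
        <bndbox>
            <xmin>640</xmin><ymin>300</ymin><xmax>760</xmax><ymax>520</ymax>
        </bndbox>
    </object>
</annotation>"""

def parse_annotation(xml_text):
    """Return (filename, [(label, xmin, ymin, xmax, ymax), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return filename, boxes

print(parse_annotation(SAMPLE_XML))
```

One .xml file can contain several `<object>` blocks when an image has more than one player in it, which is why the parser returns a list of boxes.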

We now generate the TFRecords that serve as input data to the TensorFlow training model. We use the xml_to_csv.py and generate_tfrecord.py scripts from Dat Tran’s Raccoon Detector dataset, with some slight modifications to work with our directory structure.

First, the image .xml data will be used to create .csv files containing all the data for the train and test images. From the object_detection folder, issue the following command in the Anaconda command prompt:

(tensorflow1) C:\tensorflow1\models\research\object_detection> python xml_to_csv.py
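Under the hood, xml_to_csv.py walks the annotation folder and flattens every bounding box into one CSV row. A simplified stdlib sketch of that conversion (the function names here are my own, not the script's):

```python
import csv
import glob
import os
import xml.etree.ElementTree as ET

def xml_to_csv_rows(xml_dir):
    """Collect one CSV row per bounding box from every .xml file in xml_dir."""
    rows = []
    for path in sorted(glob.glob(os.path.join(xml_dir, "*.xml"))):
        root = ET.parse(path).getroot()
        filename = root.findtext("filename")
        width = int(root.findtext("size/width"))
        height = int(root.findtext("size/height"))
        for obj in root.iter("object"):
            bb = obj.find("bndbox")
            rows.append([filename, width, height, obj.findtext("name"),
                         int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                         int(bb.findtext("xmax")), int(bb.findtext("ymax"))])
    return rows

def write_labels_csv(xml_dir, csv_path):
    """Write the rows to a train_labels.csv / test_labels.csv style file."""
    header = ["filename", "width", "height", "class",
              "xmin", "ymin", "xmax", "ymax"]
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(xml_to_csv_rows(xml_dir))
```

The real script produces exactly this kind of table, which generate_tfrecord.py then consumes in the next step.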

Next, open the generate_tfrecord.py file in a text editor.

Replace the label map starting at line 31 with our own label map, where each object is assigned an ID number.

# TO-DO replace this with label map
def class_text_to_int(row_label):
    if row_label == 'c':
        return 1
    elif row_label == 'ch':
        return 2
    elif row_label == 't':
        return 3
    elif row_label == 'th':
        return 4
    else:
        return None

Then, generate the TFRecord files by issuing these commands from the object_detection folder. These will be used to train the new object detection classifier.

python generate_tfrecord.py --csv_input=images\train_labels.csv --image_dir=images\train --output_path=train.record
python generate_tfrecord.py --csv_input=images\test_labels.csv --image_dir=images\test --output_path=test.record

Create Label Map and Configure Training:

The label map tells the trainer what each object is by defining a mapping of class names to class ID numbers.

item {
  id: 1
  name: 'c'
}
item {
  id: 2
  name: 'ch'
}
item {
  id: 3
  name: 't'
}
item {
  id: 4
  name: 'th'
}

See, this was so easy! EZ!

Now it's time to run the training. From the object_detection directory, issue the following command to begin training:

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

If everything has been set up correctly, TensorFlow will initialize the training.
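Before starting a long run, it is worth checking that the label map file and the class_text_to_int ids in generate_tfrecord.py agree, since a silent mismatch here mislabels every training example. A minimal stdlib sketch of such a check (the label map text is inlined here for illustration):

```python
import re

# Inlined copy of the label map defined above.
LABEL_MAP = """
item { id: 1 name: 'c'}
item { id: 2 name: 'ch'}
item { id: 3 name: 't'}
item { id: 4 name: 'th'}
"""

# The same ids returned by class_text_to_int in generate_tfrecord.py.
TFRECORD_IDS = {'c': 1, 'ch': 2, 't': 3, 'th': 4}

def parse_label_map(text):
    """Crude parse of the pbtxt text: return {name: id} per item block."""
    items = re.findall(r"item\s*\{\s*id:\s*(\d+)\s*name:\s*'([^']*)'\s*\}", text)
    return {name: int(num) for num, name in items}

# The two sources of truth must agree before training starts.
assert parse_label_map(LABEL_MAP) == TFRECORD_IDS
print("label map and generate_tfrecord.py agree")
```

This is only a sanity check for this simple four-class layout, not a general pbtxt parser.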

Training the model.

The training routine periodically saves checkpoints about every five minutes.

You can terminate the training by pressing Ctrl+C while in the command prompt window.

I typically wait until just after a checkpoint has been saved to terminate the training.

You can terminate training and start it later, and it will restart from the last saved checkpoint.

The checkpoint at the highest number of steps will be used to generate the frozen inference graph.
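The XXXX placeholder in the export command below is that highest step number, which you can read off the checkpoint filenames in the training folder. A small helper, assuming TensorFlow's default model.ckpt-STEP.* naming:

```python
import os
import re

def latest_checkpoint_step(training_dir):
    """Scan training_dir for model.ckpt-XXXX.* files and return the
    highest step number, or None if no checkpoint has been saved yet."""
    steps = []
    for name in os.listdir(training_dir):
        m = re.match(r"model\.ckpt-(\d+)\.", name)
        if m:
            steps.append(int(m.group(1)))
    return max(steps, default=None)
```

For example, latest_checkpoint_step("training") on a folder containing model.ckpt-2500.index and model.ckpt-1000.index returns 2500, the value to substitute for XXXX.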

Export Inference Graph:

python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory inference_graph

This creates a frozen_inference_graph.pb file in the object_detection\inference_graph folder.

The .pb file contains the object detection classifier.

Using the Newly Trained Object Detection Classifier:

To test our object detector, move a picture of the object or objects into the object_detection folder and run Object_detection_image.py with the suitable image name.

Counter Terrorist Detection.

Terrorist Detection.

Conclusion:

We are able to detect the Counter-Terrorist and Terrorist players successfully.

In the next article we will try to automate the shooting process.

Goodbye!