Self Driving Anki Vector

We mask.

Masking Of Pac-Man Sprites

You can think of the masking we're going to do here as just cropping the image.

As we just mentioned, there’s a lot of the image we don’t need and masking allows us to make a new image without all of that unnecessary stuff.

First, we need to choose the 4 corners that we’re going to “crop” around.

I looked at the image above and decided that the bottom left corner should be (125,200), top left should be (175,100), bottom right should be (500,225), and top right should be (450,100).

Putting this into a numpy array, we have the four vertices of our mask.

Next, we need to create a numpy array of the same size as our original image and fill the values that lie inside of our mask with the value 255. Intuitively, you can think of this as making a copy of the image that contains a 1 in every pixel inside of our mask and a 0 in every other pixel.

Finally, we apply the mask. When we apply the mask, we overlap our original image with the new image containing 1's and 0's: the area that overlaps with a 1 stays the same, while the area that overlaps with a 0 turns black.

Here's an example of our original image, but masked.

Edge Detection

Now that we have our image masked, we can move on to edge detection.

We’ll be running Canny edge detection since it gives the most consistent results.

Canny edge detection deserves a post by itself because of its multi-step process and widespread use so I’ll only be giving a quick overview.

In Canny edge detection, there are 4 main steps:

1. Use a Gaussian filter to smooth the image
2. Compute the horizontal and vertical Sobel derivatives to find a list of possible edges
3. Set non-edge candidate pixels to black
4. Set threshold values for filtering the list of edges to obtain the final edges

If you want a more thorough overview, you can find it here.

Here's an example of what an image looks like when Canny edge detection is applied.

The best soccer player that's ever lived

Now we will truly see why the masking was necessary.

Take a look at the image below, which identifies all the lines without the image being masked.

Edge Detection On Unmasked Image

Now compare that to a similar image, but masked.

Edge Detection On Masked Image

See the difference? As we see here, fewer lines and less information can actually be a good thing, and masking will definitely help Vector "focus" on the parts of the image that are necessary.

Now that we have a practical understanding of why we need Canny edge detection, we can go ahead and add it to our original code.

Centering Vector With The Hough Transform

Now we can move on to the last part of our pipeline before we start driving: the Hough Transform.

Here's the source code.

At the core of the Hough Transform is the idea that we can convert our image into Hough space, and this will make it easier for us to identify straight lines.

I plan on doing a post on the Hough Transform in the future where I implement it from scratch so stay tuned for that.

For now, I’ll leave you with the OpenCV explanation, which is phenomenal.

Finally, we need to use our Hough lines to center ourselves on the line.

The above code is basic, and if I'd had more time I'm certain that I could have come up with something better.

All it does is pick the first edge, which is the left-most edge, and then checks to see if that edge is centered.

If it isn't, Vector moves to center that edge, and vice-versa for the second edge.

However, note that the code above works under the assumption that the two lines detected are the edges of the line.

Of course, this isn't always the case, and again this is something that I'm going to come back and fix soon, likely with a post on tuning and perfecting what we've done in this post.
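The centering logic described above can be sketched as a small helper. `steer_from_edge`, its tolerance, and the returned direction strings are all hypothetical names for illustration, not the post's actual code; a real run would translate the result into Vector's wheel speeds:

```python
def steer_from_edge(line, frame_width, tolerance=20):
    """Decide which way to turn so a Hough edge sits near the frame centre.

    line is an (x1, y1, x2, y2) segment from the Hough transform;
    tolerance is an assumed pixel band that counts as "centered".
    """
    x1, y1, x2, y2 = line
    edge_x = (x1 + x2) / 2              # horizontal position of the edge
    offset = edge_x - frame_width / 2   # signed distance from centre
    if abs(offset) <= tolerance:
        return "straight"
    # Edge right of centre -> turn right toward it, and vice-versa.
    return "right" if offset > 0 else "left"
```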

Testing

Let's test out our code. Here's a video of how my Vector did.

I realize the video's sideways, but it was late at night, my bad.

Besides that, there’s plenty of room for improvement, and the time constraint shows.

However, the next step is to do some tuning on our Hough parameters along with the masking and Canny parameters.

Unfortunately, I ran out of time, but this is a chance for you! How well can you optimize our pipeline? Feel free to let me know below.

Recap And Full Source Code

Let's recap what we've done:

1. Take a picture with Vector's camera
2. Turn the image into grayscale
3. Mask that image so that we only see the "important parts"
4. Run Canny edge detection on the image
5. Run the Hough Line Transform to give us the edges of our line
6. Steer using our list of possible edges

And that's it! Whew! That wasn't too bad for a first rough pass at it.

Soon, I’ll be expanding on this post by tuning our parameters and adding corner detection into our pipeline which will allow us to make sharp turns and traverse any closed path imaginable.

I'll also add lane detection so that Vector can drive around the same way autonomous cars do.

See you then!

The Full Source Code

Github Repo
