Switch To Autopilot

Evolution of the Automobile: Part Two

Patrina Bailey

tl;dr: This blog is part two of a series that explores the present, potential, and prospects of the automobile.

The first blog touched upon key events in the history of the automobile, explaining their impact on the present.

This blog examines a present trend in automobiles, self-driving cars, and outlines the basic framework for understanding how they operate.

How It Works

Autonomous cars are vehicles that drive without a human driver.

At its core, it is a machine that takes in input from various sensors and cameras, interprets that input, and responds in a manner that simulates a human driver.

No, better than a human, because one of the goals of autonomous vehicles is to reduce or eliminate the human error behind accidents.

There is much to know about self-driving cars.

So much that entire curricula have been built to explore the subject comprehensively.

Nevertheless, allow this blog to serve as an overview so that you may have a high-level understanding of the operations of an autonomous vehicle (AV).

Let’s start by simply looking at the different scenarios an AV might find itself in.

The graph illustrates different scenarios and their corresponding speed and complexity.

As you can imagine, driving in a city is far more complex than highway lane-keeping, since there are more variations in input the AV has to take into account.

Each of these scenarios is a challenge to program on its own, but building an AV capable of maneuvering in any scenario is the ultimate challenge.

This framework illustrates how the environment interacts with the core competencies of an AV: perception, planning, and control.

First, input from the environment passes through the sensors (camera, radar, lidar, etc.), then the AV’s computer perceives the characteristics of that input, e.g. the white lines that mark a lane, the car in front of the AV, and the person about to step into the AV’s path.

The perception stage is where data are classified by semantic meaning, ultimately producing the dataset that feeds into the planning stage.

Perception also includes localization, in which the AV determines its position with respect to the environment.

The computer then plans its path while predicting each detected object’s trajectory with respect to the AV.

The computer then makes a decision and executes it through actuators, which translate the decision back into the environment as a left turn, braking, speeding up, etc.
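To make that perceive-plan-control loop concrete, here is a toy Python sketch. Everything in it (the function names, the safe-gap rule, the numbers) is an invented illustration of the loop described above, not code from any real AV stack.

```python
# Toy perceive -> plan -> control cycle; all values and rules are invented for illustration.

def perceive(sensor_readings):
    """Turn raw readings into labeled detections (here, just object type and distance)."""
    return [{"type": r["type"], "distance_m": r["distance_m"]} for r in sensor_readings]

def plan(detections, cruise_speed_mps=13.0, safe_gap_m=20.0):
    """Pick a target speed: slow to a stop if anything is closer than the safe gap."""
    nearest = min((d["distance_m"] for d in detections), default=float("inf"))
    if nearest < safe_gap_m:
        return {"action": "brake", "target_speed_mps": 0.0}
    return {"action": "cruise", "target_speed_mps": cruise_speed_mps}

def control(decision, current_speed_mps):
    """Translate the decision into throttle/brake commands with a crude proportional rule."""
    error = decision["target_speed_mps"] - current_speed_mps
    return {"throttle": max(0.0, 0.1 * error), "brake": max(0.0, -0.1 * error)}

# One cycle of the loop with fake sensor data: a pedestrian 15 m ahead triggers braking.
readings = [{"type": "pedestrian", "distance_m": 15.0}, {"type": "car", "distance_m": 40.0}]
decision = plan(perceive(readings))
print(decision, control(decision, current_speed_mps=10.0))
```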

V2V, or Vehicle-to-Vehicle, is communication from one AV to another.

With V2V, AVs can have prior knowledge of other AVs’ intentions, which improves the modeling and prediction of their courses of action.

Perception

Perception is the fundamental stage for an AV and can be equated to our five senses.

Just as our five senses give us different types of information, so does perception.

This is possible through the different types of sensors in the AV.

The AV industry uses and studies various sensors, testing for accuracy, affordability, complexity, etc., but the main types of sensors used for perception are cameras, radar, and lidar.

Just as we combine information from all five senses to build up our knowledge of the environment, combining the input from these sensors accomplishes the same thing; we call this sensor fusion.
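As a taste of what sensor fusion can look like at its simplest, here is a toy inverse-variance weighting of two distance estimates. The measurement values and noise figures are made up, and real systems typically use Kalman filters or richer probabilistic models rather than this one-shot average.

```python
# Toy sensor fusion: inverse-variance weighting of two distance estimates.
# The measurement values and variances are made-up assumptions.

def fuse(estimates):
    """Combine (value, variance) pairs; lower-variance (more trusted) sensors count more."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * val for (val, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

radar_estimate = (31.8, 0.25)   # metres, variance: radar ranges accurately
camera_estimate = (30.2, 4.0)   # camera range is noisier
print(fuse([radar_estimate, camera_estimate]))  # lands near the radar value, with lower variance
```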

Cameras are just like our eyes.

The AV takes in streams of images from the environment and determines the types of objects present around it.

Cameras are great for color processing, which adds features such as scenery and object appearance.

Cameras are also used for localization, for example to ensure the car stays in its lane.

Studying input from cameras falls under the field of Computer Vision.

Radar, or radio detection and ranging, uses radio waves to detect object location and velocity.

Radio waves, which travel at the speed of light, are emitted from the AV to an object.

The wave bounces back to the AV once it hits an object.

The radar does this continuously, using the change in distance over time to calculate velocity.

Radar is great for getting a general sense of the environment, detecting objects in drivers’ blind spots, and is sometimes superior to the camera (e.g. during bad weather conditions).

However, its accuracy in pinpointing an object’s location and identifying its type is far inferior to lidar’s.
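As a rough numerical illustration of the ranging idea just described, the sketch below computes distance from the round-trip time of flight and velocity from two successive ranges; the timings are invented.

```python
# Toy radar ranging: distance from round-trip time of flight, velocity from successive ranges.
# The timings below are invented for illustration.

SPEED_OF_LIGHT_MPS = 299_792_458.0

def range_from_time_of_flight(round_trip_s):
    """The wave travels out and back, so halve the round-trip distance."""
    return SPEED_OF_LIGHT_MPS * round_trip_s / 2.0

def radial_velocity(range_t0_m, range_t1_m, dt_s):
    """Change in range over time; negative means the object is getting closer."""
    return (range_t1_m - range_t0_m) / dt_s

r0 = range_from_time_of_flight(2.0e-7)   # about 30 m away
r1 = range_from_time_of_flight(1.9e-7)   # about 28.5 m away, measured 0.1 s later
print(round(r0, 1), round(r1, 1), round(radial_velocity(r0, r1, dt_s=0.1), 1))  # closing at ~15 m/s
```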

Lidar (Light Detection and Ranging) emits pulses of laser light to create 3D maps of its surroundings.

It can send over a million laser pulses per second to build this 3D map, or point cloud.
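To give a feel for what a point cloud is, the sketch below converts a handful of invented lidar returns (range plus two beam angles) into 3D points; real sensors produce hundreds of thousands of such points per sweep.

```python
# Toy point cloud: convert lidar returns (range, azimuth, elevation) into x, y, z points.
# The sample returns are invented for illustration.
import math

def to_cartesian(range_m, azimuth_rad, elevation_rad):
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

returns = [(12.0, 0.00, 0.02), (12.1, 0.01, 0.02), (35.5, -0.30, 0.00)]
point_cloud = [to_cartesian(*r) for r in returns]
for point in point_cloud:
    print(tuple(round(c, 2) for c in point))
```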

AVs use lidar because it increases awareness of the surroundings beyond what human senses can provide.

Below is an example of a visualization from lidar.

You can imagine that combining lidar information with information from radar and cameras could inform an AV incredibly well.

Planning

The planning stage is where the AV makes decisions based on data from the perception stage.

This is like our brain interpreting what our five senses gave us, and deciding what to do with this information.

There are different types of planning frameworks, which can be hierarchical or intertwined depending on the AV schema (a toy motion-planning sketch follows this list):

Mission planner: responsible for high-level goals like pickup/drop-off locations and which route to take.

Behavioral planner: “makes ad hoc decisions to properly interact with other agents and follow rules restrictions, and thereby generates local objectives, e.g., change lanes, overtake, or proceed through an intersection” (Scott Drew Pendleton et al. 16). Since safety is the prominent concern with AVs, the accuracy and speed of perception data intake are essential for this part of the planning framework, for instances like staying in lane, stopping at a stop sign, etc.

Motion planner: “generates appropriate paths and/or sets of actions to achieve local objectives, with the most typical objective being to reach a region while avoiding obstacle collision” (Scott Drew Pendleton et al. 16). Perception accuracy and speed are also imperative here, since this is a process of making decisions that reach the set destination while avoiding obstacles (e.g. humans).

Because this process is exhaustive in finding the best course of action, it is afflicted with the “curse of dimensionality” (i.e. analyzing high-dimensional data takes a huge amount of time).

Computational efficiency is a notable metric for this part of planning, since high-dimensional exhaustive searches have long run times and consume an immense amount of computational power.
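The toy sketch promised above: a breadth-first search over a small occupancy grid, standing in very loosely for the search-based motion planners surveyed in the cited paper. The grid, start, and goal are invented; real planners search far higher-dimensional spaces, which is exactly where the curse of dimensionality bites.

```python
# Toy motion planner: breadth-first search on an occupancy grid, avoiding obstacles.
# Grid, start, and goal are invented for illustration.
from collections import deque

GRID = [  # 0 = free cell, 1 = obstacle
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan_path(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no collision-free path exists

print(plan_path(start=(0, 0), goal=(3, 3)))  # shortest obstacle-free route through the grid
```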

Control

This is where machine meets software.

Decisions made during planning are translated into quantities like the AV’s velocities and positions.

These are then fed into the control core competency so that the machine can translate them into force, energy, and overall movement.

Furthermore, the control process supervises the system’s performance.

“Measurements inside the control system can be used to determine how well the system is behaving, and therefore the controller can react to reject disturbances and alter the dynamics of the system to the desired state” (Scott Drew Pendleton et al. 24).

There are different control systems: classical control, model predictive control, trajectory generation and tracking, etc.; all of which are explained in the cited paper (refer to the sources below).
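For a small taste of the classical-control flavor, here is a bare-bones PID speed controller driving a crude one-line vehicle model. The gains, target speed, and model are all invented; a real controller would be carefully tuned against real vehicle dynamics.

```python
# Toy PID speed controller (classical control). Gains, target, and the one-line
# "vehicle model" are made-up assumptions for illustration.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.5, ki=0.1, kd=0.05)
speed, dt = 0.0, 0.1                     # start at rest, 10 Hz control loop
for _ in range(200):                     # simulate 20 seconds
    command = controller.update(target=13.0, measured=speed, dt=dt)  # 13 m/s target
    speed += command * dt                # crude stand-in for vehicle dynamics
print(round(speed, 2))                   # ends close to the 13 m/s target
```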

Reading through research on AVs, I wondered: can a car be truly autonomous? According to some in this article, no.

Consider scenarios like sinkholes, sudden natural forces, or any other situation where a car might get stuck: what does the car do then? In machine learning, there is supervised, unsupervised, and reinforcement learning.

These different approaches may be able to handle most real-world scenarios, but what about the rare but deadly occurrences? The underlying question of accounting for all scenarios is not restricted to AVs; it is the burden of artificial intelligence as a whole.

Preliminary Dive into Perception

Perception is the first core competency in the AV framework.

Because your system is only as good as the data you feed it, improvements in this stage are of paramount importance, right? The field of computer vision began in the late 1960s with the goal of having the computer infer 3D objects from a 2D view.

That founding goal is very much still the same today: imitate the human eye.

The field has made progress since its start, but it still has a long way to go to achieve human-level accuracy.

Below is an example of one computer vision technique used in AVs: edge detection.

Edge detection is really important for staying in lanes, finding drivable terrain, and differentiating objects in general.
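For the curious, a minimal edge-detection sketch using OpenCV’s Canny detector is below. The image path and thresholds are placeholders, and a real lane-finding pipeline would add region masking, color filtering, and line fitting on top of this.

```python
# Minimal Canny edge detection with OpenCV. The image path and thresholds are
# placeholders; lane-finding pipelines typically add masking and line fitting.
import cv2

image = cv2.imread("road.jpg")                   # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # edges only need intensity
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # smooth noise before edge detection
edges = cv2.Canny(blurred, 50, 150)              # low/high hysteresis thresholds (placeholders)

cv2.imwrite("road_edges.jpg", edges)             # white pixels mark detected edges
```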

This is just one example of how computer vision is implemented in AVs.

Edge Detection using OpenCV

This blog served as a high-level overview of autonomous vehicles.

The next blog in the series will look into the current climate of AV, its market and key players, and its most likely future.

Thank you for reading; I hope you enjoyed it.

Comments and fact-checking are most welcome.

Sources

Pendleton, S. D.; Andersen, H.; Du, X.; Shen, X.; Meghjani, M.; Eng, Y. H.; Rus, D.; Ang, M. H. Perception, Planning, Control, and Coordination for Autonomous Vehicles. Machines 2017, 5, 6.

Three Sensors That "Drive" Autonomous Vehicles. www.ecnmag.com

Perception, Planning, Control, and Coordination for Autonomous Vehicles. www.mdpi.com

AI… And the vehicle went autonomous. towardsdatascience.com

Video tutorial on edge detection for AV lanes.

