
The Technology Behind Autonomous Vehicles: A Deep Dive

However you feel about autonomous vehicles, they exist. Personally, I think they have a long way to go before we can say that they’re “ready for prime time,” but they’re on some roads now. The technology that goes into making them is fascinating, and that’s what we’re looking at today. We’ll get into the cameras, LiDAR, radar, ultrasonic sensors, and the sensor fusion and control systems that all have to work together seamlessly to allow the vehicles to do their thing without being directed by a human at every step. Specifically, we’ll examine the lane-keeping function.

THE COMPONENTS

The cameras detect lane markings on the road. They capture images of the lane and process them using computer vision algorithms, identifying lane boundaries like the solid and dashed lines and those reflective dots embedded in the road surface. Unfortunately, cameras may have limited performance in poor weather or low light.
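
To make that concrete, here’s a rough sketch of the classic computer-vision approach to finding lane lines: edge detection plus a Hough transform. Production systems mostly use trained neural networks instead, and the image file name and thresholds below are just placeholders:

```python
# A minimal sketch of camera-based lane-line detection with OpenCV.
# Classic Canny + Hough pipeline for illustration only; thresholds
# and the input file name are placeholders.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower portion of the image, where lanes appear.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Fit line segments to the remaining edge pixels.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=100)
    return lines  # each entry is (x1, y1, x2, y2)

frame = cv2.imread("road.jpg")  # placeholder: supply any road photo
if frame is not None:
    print(detect_lane_lines(frame))
```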

Light Detection and Ranging (LiDAR) sensors provide depth information. They create a 3D point cloud of the surroundings, including the road surface and nearby objects. By analyzing the collected data, the car can determine the position of the lane relative to the vehicle, or the position of the vehicle relative to the lane. LiDAR is pretty expensive, and it may have reduced effectiveness in bad weather.
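
Here’s a toy illustration of one way LiDAR data can help with lane position: lane paint is retro-reflective, so high-intensity returns near the road surface tend to be markings. The coordinate convention, thresholds, and intensity scale below are all assumptions for the sake of the sketch:

```python
# A toy sketch of estimating lateral lane position from a LiDAR point
# cloud. Assumes rows of (x, y, z, intensity) in vehicle coordinates:
# x forward, y left, z up, intensity in [0, 1]. All values invented.
import numpy as np

def lateral_lane_offsets(points, ground_z=-1.5, tol=0.15, intensity_min=0.8):
    # Keep near-ground points (roughly at road height).
    ground = points[np.abs(points[:, 2] - ground_z) < tol]
    # Lane paint reflects strongly; keep only high-intensity returns.
    markings = ground[ground[:, 3] > intensity_min]
    if len(markings) == 0:
        return None, None
    # Report the median lateral distance to the boundary on each side.
    left = markings[markings[:, 1] > 0][:, 1]
    right = markings[markings[:, 1] < 0][:, 1]
    return (np.median(left) if len(left) else None,
            np.median(right) if len(right) else None)
```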

Radio Detection and Ranging (RADAR – did you know that’s what RADAR stood for?) uses radio waves to complement the functions of the camera and LiDAR. It detects moving vehicles in the adjacent lanes. If a vehicle approaches too closely, the RADAR triggers a warning to the driver, or the system adjusts the car’s position autonomously within the lane. However, RADAR has limited resolution and may suffer from interference issues.
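
Since radar natively measures range and range-rate (via the Doppler effect), a time-to-contact check is a natural trigger for that warning. This little sketch uses invented thresholds:

```python
# A hedged sketch of a radar-based proximity check for an adjacent lane.
# Radar reports range and range-rate, so a simple time-to-contact (TTC)
# estimate makes a reasonable trigger. Thresholds here are made up.
def adjacent_lane_alert(range_m, range_rate_mps,
                        min_gap_m=1.0, min_ttc_s=2.0):
    if range_m < min_gap_m:
        return "warn"                      # vehicle already too close
    if range_rate_mps < 0:                 # negative rate = closing on us
        ttc = range_m / -range_rate_mps    # seconds until contact
        if ttc < min_ttc_s:
            return "warn"
    return "ok"
```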

Ultrasonic sensors monitor the car’s distance from the lane markings. If the car drifts too close to the edge, the sensors provide feedback to the driver or to the autonomous driving system. The car’s control system can make steering adjustments to stay centered in the lane. They have limited range and resolution, so they can only be used for close-range actions.
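
In code, that close-range feedback can be as simple as a threshold check on the gap each sensor reports. The values here are made up:

```python
# A minimal sketch of an ultrasonic drift check. Each sensor reports a
# single distance reading; the threshold is invented for illustration.
def drift_warning(left_gap_m, right_gap_m, min_gap_m=0.4):
    if left_gap_m < min_gap_m:
        return "drifting left"
    if right_gap_m < min_gap_m:
        return "drifting right"
    return "centered"
```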

Sensor fusion and control components combine the data from all the car’s sensors. Algorithms analyze the aggregated information to determine the best steering angle, and the car makes the necessary adjustments to stay within the lane boundaries.
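
A bare-bones way to picture this: weight each sensor’s lateral-offset estimate by its confidence, average them, and feed the result to a simple proportional steering correction. The sensor list, weights, and gain below are purely illustrative:

```python
# A toy sketch of sensor fusion for lane keeping: combine lateral-offset
# estimates from several sensors, weighted by confidence, then turn the
# fused offset into a steering correction with a simple proportional
# rule. Sensors, weights, and gain are illustrative, not from any real car.
def fuse_lateral_offset(estimates):
    # estimates: list of (offset_m, confidence) pairs, one per sensor.
    total = sum(conf for _, conf in estimates)
    return sum(off * conf for off, conf in estimates) / total

def steering_correction(offset_m, gain_deg_per_m=2.0, max_deg=5.0):
    # Steer back toward center, clamped to a small, smooth angle.
    angle = -gain_deg_per_m * offset_m
    return max(-max_deg, min(max_deg, angle))

estimates = [(0.30, 0.9),   # camera: 0.30 m right of center, trusted most
             (0.25, 0.6),   # LiDAR
             (0.40, 0.3)]   # ultrasonic, noisier for this task
print(steering_correction(fuse_lateral_offset(estimates)))
```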

Maps provide detailed information about the locale – road layouts, landmarks, and traffic rules. Because road situations change, they require frequent updates, and it takes a lot of space to store this data.
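
As a rough sketch, a high-definition map tile might look something like the structure below, with a version number that lets the car know when roadwork has made its copy stale. The field names and update check are my own invention:

```python
# A rough, invented sketch of how an HD-map tile might be represented
# and kept fresh. Real map formats are far richer than this.
from dataclasses import dataclass, field

@dataclass
class MapTile:
    tile_id: str
    version: int
    lane_geometries: list = field(default_factory=list)  # polylines
    speed_limit_kph: int = 50

def needs_update(local: MapTile, server_version: int) -> bool:
    # Roadwork and re-striping make tiles go stale quickly, which is
    # why frequent over-the-air updates (and lots of storage) are needed.
    return local.version < server_version
```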

Global Positioning System (GPS) and Global Navigation Satellite System (GNSS) receivers provide positioning and navigation information. They can be subject to signal degradation in urban canyons and tunnels.
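
One common way to ride out those dropouts is to blend GNSS fixes with dead reckoning from wheel odometry and inertial sensors. Here’s a heavily simplified sketch of that idea, with an invented quality weight:

```python
# A toy sketch of blending GPS with dead-reckoned position, one way to
# survive the urban-canyon dropouts mentioned above. The quality weight
# and the simple linear blend are illustrative simplifications.
def blended_position(gps_xy, odom_xy, gps_quality):
    # gps_quality in [0, 1]: 0 = no fix (tunnel), 1 = clean open-sky fix.
    w = gps_quality
    return (w * gps_xy[0] + (1 - w) * odom_xy[0],
            w * gps_xy[1] + (1 - w) * odom_xy[1])
```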

THE FUNCTIONS

Autonomous vehicles rely on machine learning and AI to process and interpret visual data from the cameras. However, to be effective, these systems require a lot of training data and a lot of computational power.

Neural networks loosely imitate human brain functionality to make decisions based on sensor data. We need to understand and remember, though, that these neural networks aren’t really “deciding.” They’re applying statistical patterns learned from training data, mapping sensor inputs to outputs. The system can’t weigh the best course of action among several choices the way a person would, because that’s a subjective process. It can only produce the output that its training associates with a particular input.
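
To see what that input-to-output mapping looks like, here’s a bare-bones forward pass through a tiny network. The random weights below are stand-ins; a real network’s weights come from training on enormous datasets:

```python
# A bare-bones sketch of a neural network's forward pass, to show that
# "deciding" here is just arithmetic on learned weights. These weights
# are random stand-ins; real ones come from extensive training.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)  # outputs: left/straight/right

def forward(sensor_features):
    h = np.maximum(0, W1 @ sensor_features + b1)  # ReLU activation
    logits = W2 @ h + b2
    return int(np.argmax(logits))                 # index of chosen action

print(forward(np.array([0.2, -0.1, 0.9, 0.4])))   # four toy input features
```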

Simultaneous Localization and Mapping (SLAM) helps vehicles map their environment and track their position within it at the same time. These processes are computationally intensive (that’s a great phrase!), and the systems can struggle in dynamic environments. “Dynamic environment” is shorthand for any environment that undergoes frequent changes, and that level of change can occur in a lot of different places. In cities, for example, pedestrians and constantly moving traffic can make it hard for the vehicle to maintain accurate localization (determining its position) and mapping. Construction sites, crowded public spaces like airports and shopping malls, and even forests with swaying trees and bodies of water with waves can cause the same type of problems.
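
The SLAM loop itself boils down to a predict/correct rhythm: predict the pose from odometry, then correct it against re-observed landmarks. This sketch strips out all the probabilistic machinery that real SLAM systems (EKF-based, graph-based, and so on) depend on:

```python
# A drastically simplified flavor of the SLAM loop: predict the pose
# from odometry, then correct it using a re-observed landmark. Real
# SLAM handles uncertainty properly; this only shows the rhythm.
import math

def predict(pose, v, omega, dt):
    # Dead-reckon forward from speed v and turn rate omega.
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

def correct(pose, landmark_xy, measured_range, alpha=0.3):
    # Nudge the pose so the predicted range to a known landmark
    # matches what the sensor actually measured.
    x, y, theta = pose
    dx, dy = landmark_xy[0] - x, landmark_xy[1] - y
    predicted = math.hypot(dx, dy)
    err = predicted - measured_range
    # Move along the bearing to the landmark by a fraction of the error.
    return (x + alpha * err * dx / predicted,
            y + alpha * err * dy / predicted,
            theta)
```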

Path planning algorithms seek out the optimal route. Notice I didn’t say “best” route, because, again, that’s subjective. However, using the fewest left turns possible, avoiding toll routes, and selecting the route that minimizes mileage among all the stops are all objective criteria; those things are quantifiable and programmable. That’s an optimal route.
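
Under the hood, that usually means a shortest-path search over a road graph where each edge’s cost folds in mileage plus penalties for things like tolls or left turns. Here’s plain Dijkstra over a tiny invented graph:

```python
# A sketch of "optimal by a programmable cost": plain Dijkstra, where
# each edge cost can fold in mileage plus penalties for tolls or left
# turns. The tiny graph and penalty values are invented.
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbor, cost), ...]}
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    # Walk back from goal to start to recover the route.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

TOLL_PENALTY = 5.0  # invented: makes toll edges less attractive
graph = {
    "A": [("B", 2.0), ("C", 1.0 + TOLL_PENALTY)],  # A->C uses a toll road
    "B": [("D", 2.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(dijkstra(graph, "A", "D"))  # -> (['A', 'B', 'D'], 4.0)
```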

Motion control ensures smooth and safe vehicle maneuvers, but it requires precise tuning and it can be affected by mechanical limitations.
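
The workhorse here is often a PID controller, and the “precise tuning” is exactly the choice of its three gains. The values below are placeholders, not tuned numbers:

```python
# A minimal PID steering controller, a classic motion-control building
# block. The kp/ki/kd gains are placeholders; tuning them well (and
# respecting actuator limits) is the hard part the paragraph mentions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

ctrl = PID(kp=0.8, ki=0.05, kd=0.2)
print(ctrl.update(error=0.3, dt=0.05))  # lateral error in meters
```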

Connectivity is facilitated by Vehicle-to-Everything (V2X) communication and 5G networks. Both of these enable communication with other vehicles and infrastructure, but there are security concerns, infrastructure support requirements, and high deployment costs that need to be addressed.
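
To hint at the security side, here’s a toy message with an integrity tag. Real V2X stacks (IEEE 1609.2, for example) use certificate-based digital signatures rather than a shared key like this:

```python
# A toy illustration of a V2X-style status message with an integrity
# tag, gesturing at the security concerns above. A shared key is NOT
# how real deployments work; it just keeps the sketch self-contained.
import hashlib, hmac, json

SHARED_KEY = b"demo-key"  # placeholder only

def make_message(vehicle_id, lat, lon, speed_mps):
    body = json.dumps({"id": vehicle_id, "lat": lat,
                       "lon": lon, "v": speed_mps}).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body, tag

def verify(body, tag):
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

body, tag = make_message("veh-42", 40.0, -75.0, 13.4)
print(verify(body, tag))  # True unless the message was tampered with
```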

YOUR TURN

I’m not fully convinced that we’re close to a society that moves around in a sea of only autonomous vehicles. There are still too many accidents, and it’s still too tempting to let the car make all the decisions. Before I’m convinced, one of the things I need to see is how a car reacts to a bird eating a dead animal in the road and suddenly taking flight as the car approaches.

What do you think? How close to this future are we, and how do you feel about it? Drop a comment below.
