With all the talk in the news about the UK leading the driverless car race, have you ever wondered what technology is used to enable an autonomous vehicle to see? How will these cars of the future navigate the roads, obey the road rules and differentiate between a pheasant and a child running across their path?
Dozens of automotive and tech companies are working to bring driverless cars to our roads. From luxury car manufacturers like Mercedes-Benz to the Google car, the race is clearly on and has been running for at least the last five years. There’s a lot to get to grips with – driving control is just a small (and possibly the easiest) part of the puzzle. Enabling a driverless car to see and make sense of what it sees is one of the problems that have engineers and programmers scratching their heads. Below we take a look at the three main types of vision technology for driverless cars.
Cameras
Starting with the technology most of us are familiar with, cameras are generally placed on the roof, sides and bumpers of the vehicle. A dozen or more 3D cameras can be used, sometimes placed in stereo pairs, to see the surroundings – traffic lights, road signs and the like. They can see in enough detail to recognise a child running onto the road, but only in daylight or whatever is lit up by the headlights, so they're only about as good as you or me in poor weather conditions.
Radar
Used in cars for around two decades now in driver assistance packages, radar is reliable and unhindered by bad weather. It can detect obstacles from 160 metres away or more, along with the speed and direction in which they are travelling. It can't figure out what those objects actually are, though, so it's not so great for map building or object recognition.
LiDAR
Light Detection and Ranging (LiDAR) uses pulsed lasers to measure the distance to objects around the vehicle. Firing up to 1,000,000 pulses per second, the LiDAR system calculates how long each pulse takes to return, building a map of the static and moving objects around it. It works in day or night conditions, and some systems can also detect the speed and direction of moving objects. Unfortunately, it doesn't work nearly as well in rain or fog, as the light can be bounced back by water particles in the air. On top of that, most LiDAR sensor systems are prohibitively expensive, although various tech companies are working to build reliable, less expensive systems.
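The time-of-flight idea behind LiDAR is simple enough to sketch: each pulse travels to an object and back at the speed of light, so halving the round-trip time gives the distance. The snippet below is an illustrative sketch only – the function name and numbers are made up for this example, not taken from any real LiDAR system:

```python
# Hypothetical sketch of LiDAR time-of-flight ranging: the sensor measures
# how long a laser pulse takes to bounce back, and distance follows from
# the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres.

    The pulse travels out and back, so we halve the round trip.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~200 nanoseconds reflects off an object ~30 m away.
print(round(pulse_distance(200e-9), 1))  # → 30.0
```

At a million pulses per second, repeating this calculation for every pulse yields the dense "point cloud" from which the car builds its map.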
Making Sense of the Data
Most car manufacturers venturing into the driverless car arena are using a combination of all the sensors described above, or at least two of them. Elon Musk of Tesla has famously pooh-poohed LiDAR, but for now, he's pretty much on his own.
Regardless of which sensors are used and in what combination, what's done with the information collected is where things really get interesting. Making sense of the continuous flow of visual input requires a process that some refer to as sensor fusion and others call Simultaneous Localization and Mapping (SLAM). Whatever the name or acronym, the driverless car needs to process the information from its numerous sensors to understand what surrounds it, where it is in relation to those surroundings, the importance of the various objects detected (e.g. small child vs pheasant) and the best path to get from A to B. Machine learning, artificial intelligence and neural networks all come into their own when trying to solve that problem for driverless cars.
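To give a flavour of what sensor fusion means in the simplest case, here is a toy sketch that combines a noisy radar range and a precise LiDAR range for the same object by weighting each reading by how much we trust it (the inverse of its variance). The numbers and function name are invented for illustration; real systems use far richer models such as Kalman filters:

```python
# Toy sensor-fusion sketch: inverse-variance weighting of range estimates.
# Each reading is a (measurement_metres, variance) pair; a more trusted
# (lower-variance) sensor pulls the fused estimate towards its value.

def fuse(readings: list[tuple[float, float]]) -> float:
    """Fuse (measurement, variance) pairs into a single estimate."""
    weights = [1.0 / variance for _, variance in readings]
    weighted_sum = sum(w * m for (m, _), w in zip(readings, weights))
    return weighted_sum / sum(weights)

# Radar says 41.0 m (noisy, variance 4.0); LiDAR says 40.2 m (variance 0.25).
fused = fuse([(41.0, 4.0), (40.2, 0.25)])
print(round(fused, 2))  # → 40.25 – close to the trusted LiDAR reading
```

The fused estimate lands much nearer the LiDAR reading than the radar one, which is exactly the point: fusion lets the strengths of one sensor paper over the weaknesses of another.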