SLAM: Self-driving cars, AI and mapping on the move

13th August 2015

Self-driving cars are one of the biggest tech stories of the year: a futuristic concept that looks tantalisingly close to becoming a consumer reality in our lifetimes. But how much do you know about how they really work?

At SwiftKey’s recent AI meetup, Samantha Ahern, Information Security Officer at University College London and MSc Intelligent Systems student at De Montfort University, gave us an insight into her passion: the ways Artificial Intelligence can be applied to let autonomous vehicles know where they are and what they’re about to bump into.

This complex process is known as “SLAM” – Simultaneous Localization and Mapping. It’s how autonomous mobile robots and vehicles work out where they are, and what’s in front of them, in real time.

Samantha opened her talk by playing a video of an autonomous car, created by Delphi, that drove from San Francisco to New York all by itself, save for 40 miles where the company couldn’t get permission to let the machine take control.

Delphi compares their car’s self-driving system to the way humans use their senses to make sense of their surroundings – we are constantly processing and updating the information we receive. The challenge with self-driving cars is how to connect the ‘senses’ to the car’s equivalent of the ‘brain’, in this case, the central controller.

When the Delphi vehicle was on the highway, it didn’t have a map of exactly where the other obstacles and vehicles were; it had to develop one as it drove.

Samantha has her own miniature version of an autonomous vehicle, a robot called Johnny (pictured top), and uses this as her testbed for developing and improving its ability to know where it is and produce a map in real time. As with the Delphi car, it has to know where it is in order to navigate.

Challenges with SLAM for self-driving cars

There are different types of SLAM, from feature-based to pose-based, Samantha explained.

Mapping is all about working out ‘is there something there or isn’t there?’. Vehicles can use sonar and radar, whose signals bounce off objects to confirm that something is there and give a rough sense of its shape and size.

SLAM is probabilistic: rather than dealing in certainties, the technology estimates how likely something is to be the case, taking sensor data and working out the probability that an obstacle is in the way.
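
To make that idea concrete, here is a minimal sketch of a probabilistic map update in Python. It is not Delphi’s or Samantha’s code: it assumes a simple one-dimensional strip of grid cells in front of a single range sensor (such as sonar or radar), and the cell size and hit/miss probabilities are invented purely for illustration. Each cell accumulates evidence, in log-odds form, of whether something is there or not.

```python
import math

# A minimal, illustrative occupancy-grid update, not any production SLAM system.
# Assumptions (all invented for this sketch): a 1D strip of cells directly in
# front of a single range sensor, 0.5 m cells, and fixed hit/miss probabilities.

CELL_SIZE = 0.5   # metres per grid cell
P_HIT = 0.7       # P(cell occupied | the beam appeared to stop in this cell)
P_MISS = 0.3      # P(cell occupied | the beam passed straight through it)

def logit(p):
    """Convert a probability to log-odds, so updates become simple additions."""
    return math.log(p / (1.0 - p))

def prob(log_odds):
    """Convert log-odds back to a probability for display."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

def update_grid(log_odds, measured_range):
    """Bayesian update of every cell from one noisy range reading."""
    hit_cell = int(measured_range / CELL_SIZE)
    for i in range(len(log_odds)):
        if i < hit_cell:
            log_odds[i] += logit(P_MISS)   # beam passed through: probably free
        elif i == hit_cell:
            log_odds[i] += logit(P_HIT)    # beam stopped: probably an obstacle
        # cells beyond the hit were not observed, so they stay unchanged
    return log_odds

# Three noisy readings of an obstacle roughly 2 m away
grid = [0.0] * 8   # log-odds of 0 means probability 0.5, i.e. "unknown"
for reading in (2.1, 1.9, 2.0):
    grid = update_grid(grid, reading)

print([round(prob(l), 2) for l in grid])
# Cells near the sensor drift towards 0 (free); the cell around 2 m drifts towards 1.
```

Repeated readings push occupied cells towards probability 1 and free cells towards 0, which is all the map really is: a grid of “how sure are we something is here” values.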

With localization, there are several techniques. Histogram filters can be useful, but don’t work once the vehicle is moving. Kalman and particle filters, however, give you an estimate of where you are while taking into account the fact that the vehicle has moved. All of this requires “nasty Bayes estimators,” according to Samantha, and the full details of the maths can be found in her slides.
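
As a rough illustration of the “predict, then correct” idea behind those filters, here is a toy one-dimensional Kalman filter in Python. It is a sketch with invented noise values, not the maths from Samantha’s slides: the vehicle’s position is a single number, the prediction step accounts for motion, and the correction step blends in a noisy sensor reading via a Bayesian update.

```python
# A toy 1D Kalman filter for localization. Position is a single number, and all
# noise values below are made up for illustration.

def predict(mean, var, motion, motion_var):
    """Motion update: shift the estimate by how far we think we moved,
    and grow the uncertainty because odometry is imperfect."""
    return mean + motion, var + motion_var

def correct(mean, var, measurement, meas_var):
    """Measurement update: blend the prediction with a noisy sensor reading,
    weighting each by how certain it is (the Bayes estimator at work)."""
    gain = var / (var + meas_var)
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

# Start almost completely unsure of the position, then drive ~1 m per step,
# taking one noisy position measurement after each move.
mean, var = 0.0, 1000.0
for motion, measurement in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    mean, var = predict(mean, var, motion, motion_var=0.5)
    mean, var = correct(mean, var, measurement, meas_var=0.5)
    print(f"estimated position {mean:.2f} m (variance {var:.2f})")
```

Particle filters follow the same predict/correct rhythm, but represent the belief as a cloud of weighted samples rather than a single Gaussian estimate, which lets them cope with situations where the vehicle could plausibly be in several places at once.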

There are a number of difficulties in working with SLAM. For instance, the data can be ‘noisy’ and incomplete. The environment is dynamic (things are moving all the time), and measurements have to be taken at discrete intervals in real time.

However, Samantha explained that the hardest thing of all is the ethical dimension. We can use maths to prove that the vehicle is doing what we think it’s doing. But how do you determine what actually is the right thing to do?

Programming self-driving cars and other vehicles (even drones) requires prioritisation. For example, if you’re determining what to swerve to avoid, it seems straightforward to prioritise humans first and foremost, then animals, then objects. But how do you categorise a baby’s buggy? It’s an object – but it could contain a human; it all depends on the context in which the object is being used.

Should you always prioritise saving the greatest number of lives? For instance, your self-driving car may need to drive you off a cliff to certain death if it calculates that braking suddenly would cause a fatal pile-up of multiple vehicles behind you. The morality gets messy, and ultimately a human is required to make a judgment call. The current legal situation is vague: decisions are currently made on a case-by-case basis, but this may evolve over time.

Samantha’s argument echoed that of Weave.ai founder Stephane Bura (you can read his take on why the future of Artificial Intelligence is interactivity on the blog). Both agree that the technical ability for all these ideas already exists – but human attitudes are lagging behind. Many people are fundamentally uncomfortable with the prospect of self-driving cars making decisions, even when faced with the evidence that humans are terrible drivers: 93% of car accidents are down to human error.

When asked by a member of the audience when she expects SLAM mapping and localization technologies to reach ‘normal’ cars, Samantha replied that the capability is already there; it’s just not being fully utilized. It will be more evolution than revolution, she predicts. After all, our mobile phones have changed from being devices for making phone calls into mobile computers used to write, create, play and pay. Our cars are set for the same steady but inevitable transformation.
