There seems to be a great rush by automakers and personal transportation services to put autonomous vehicles on the road as soon as possible. Elon Musk, Tesla's billionaire CEO, believes he can assemble a team of automotive engineers to field a fleet of robotic taxis by the year 2022. Controversial ride-hailing firm Uber would like to be first in the self-driving race, while General Motors and Volkswagen are each investing hundreds of millions of dollars to beat both Tesla and Uber.
The biggest challenge in autonomous driving is ensuring the safety of drivers, passengers and pedestrians. All the driver safety practices you are familiar with have already been programmed into the autonomous systems currently being tested, but this is not enough. Tesla and Uber are among the current leaders in self-driving development, and both companies have already been involved in fatal accidents. In Tesla's case, a driver died when his electric car slammed into a truck while the Autopilot mode was engaged. As for Uber, one of its self-driving taxis struck and killed a pedestrian crossing the road.
It should be noted that human misjudgment played a role in both of these accidents. The Tesla driver should have known that the car's Autopilot mode is not a full self-driving solution; it is essentially an advanced form of cruise control that can detect other vehicles and apply the brakes when necessary, but in this instance it failed to distinguish the white side of a tractor trailer against a brightly lit sky. The Uber autonomous taxi actually had a human safety driver behind the steering wheel, but she did not react in time when the self-driving system failed to detect the pedestrian.
What engineers of autonomous driving technology must realize is that the potential for human error should always be taken into consideration. The current focus of self-driving development is on validating systems by logging millions of test miles, but technology analysts believe there should be a deeper focus on understanding how drivers perceive safety on the road. Thus far, self-driving cars have proven better than humans at following the basic rules of the road, but there is more to safety than rules, and this is where machine learning comes into play.
Waymo, the Google self-driving subsidiary widely regarded as the most advanced in the field, is working on methods that let autonomous vehicles learn from human drivers. As a leader in artificial intelligence, Google has already built software that plays the ancient game of Go better than the best human players, an achievement that relied in part on machine learning models trained on records of human games rather than on hand-coded rules alone. This is the type of safety training self-driving systems are likely to adopt in the near future; by learning best practices from human drivers, they will become more effective in terms of road safety.
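The idea of a system assimilating human behavior can be illustrated with behavioral cloning, one common learning-from-demonstration technique. The sketch below is a toy example, not Waymo's actual method: the sensor features, the "human" steering policy, and all numbers are hypothetical, and the learner simply fits a linear policy to recorded human steering commands.

```python
# Toy behavioral-cloning sketch: fit a policy to human demonstrations.
# All features, data, and the underlying policy are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstrations: each row is [lane_offset, road_curvature];
# the human's steering command is a noisy linear response to both.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
true_policy = np.array([-0.8, 1.5])  # unknown to the learner
y = X @ true_policy + rng.normal(0.0, 0.01, size=500)

# "Clone" the demonstrated behavior with ordinary least squares:
# find the weights that best reproduce the human steering commands.
learned_policy, *_ = np.linalg.lstsq(X, y, rcond=None)

# The cloned policy can now steer in states it has never seen.
state = np.array([0.3, -0.2])
steer = float(state @ learned_policy)
```

Real systems replace the linear model with deep networks and far richer sensor inputs, but the principle is the same: imitate what good human drivers actually do rather than encode every rule by hand.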