Toyota details self-learning map system for autonomous cars

Toyota has detailed its independent approach to map creation for future autonomous vehicles.

Most, if not all, major automakers are expected to rely on extensive cartography to help guide autonomous piloting technology, allowing live sensor input to be compared against known data as the vehicle determines its exact location and maneuvers safely.

Satellite images, known road locations and other data used in simple GPS guidance systems can provide enough information to help a driver keep the vehicle on the intended path. Fully autonomous vehicles, however, typically require a much deeper level of location information and point-of-view imagery.

Audi, BMW and Daimler recently partnered to purchase Nokia’s digital mapping division, which compiled data from a fleet of LiDAR-equipped vehicles that scanned roads and surrounding structures to an accuracy of 10 to 20 centimeters.

Rather than solely relying on third-party data, Toyota plans to use its fleet of capable production vehicles to essentially build its own crowd-sourced database. A vehicle’s integrated cameras and GPS sensors collect road images and location information on the fly. Data is uploaded to Toyota’s servers, which automatically combine, correct and update high-precision road maps across a wide area.
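The pipeline Toyota describes, with vehicles uploading camera and GPS observations and a server merging them into corrected maps, can be sketched roughly as follows. All names and structures here are hypothetical illustrations, not Toyota's actual implementation:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Observation:
    """One road observation uploaded by a single vehicle (hypothetical schema)."""
    lat: float
    lon: float
    road_segment_id: str  # key identifying a stretch of road

class MapServer:
    """Toy aggregator: merges positions reported by many vehicles per road segment."""
    def __init__(self):
        self.segments = defaultdict(list)

    def upload(self, obs: Observation) -> None:
        """A vehicle uploads its camera/GPS-derived position for a segment."""
        self.segments[obs.road_segment_id].append((obs.lat, obs.lon))

    def segment_estimate(self, segment_id: str):
        """Combine all vehicles' reports into one corrected position estimate."""
        points = self.segments[segment_id]
        n = len(points)
        lat = sum(p[0] for p in points) / n
        lon = sum(p[1] for p in points) / n
        return lat, lon

server = MapServer()
# Three vehicles report slightly different positions for the same road segment.
for lat, lon in [(35.6801, 139.7690), (35.6799, 139.7692), (35.6800, 139.7691)]:
    server.upload(Observation(lat, lon, "seg-42"))
est_lat, est_lon = server.segment_estimate("seg-42")
```

A real system would align full image data rather than average coordinates, but the pattern is the same: many individually noisy reports are fused server-side into one higher-precision map update.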

Toyota suggests an understanding of road layouts and traffic rules, such as speed limits and other signage, is ‘essential’ for the implementation of automated driving technologies. Taking an indirect dig at the Germans’ $2.7 billion purchase of Nokia map data, the Japanese automaker argues that third-party LiDAR information is of limited usefulness due to the “infrequent nature” of the data-collection method.

“While a system relying on cameras and GPS in this manner has a higher probability of error than a system using three-dimensional laser scanners, positional errors can be mitigated using image matching technologies that integrate and correct road image data collected from multiple vehicles, as well as high precision trajectory estimation technologies,” the company notes.

After several vehicles provide images from various angles, Toyota suggests the resulting processed data can reduce the margin of error to a maximum of 5 cm on straight roads.
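The error reduction rests on basic statistics: averaging independent readings shrinks random error roughly in proportion to the square root of the number of contributors. A small simulation (illustrative only, not Toyota's algorithm; the noise figures are made up) shows the effect:

```python
import random

def mean_position_error(n_vehicles: int, sensor_sigma_cm: float, trials: int = 2000) -> float:
    """Average absolute error (cm) of the mean of n noisy 1-D position readings.

    Averaging n independent readings cuts the standard deviation by sqrt(n),
    which is why fusing reports from many vehicles tightens the map.
    """
    random.seed(0)  # deterministic for demonstration
    total = 0.0
    for _ in range(trials):
        readings = [random.gauss(0.0, sensor_sigma_cm) for _ in range(n_vehicles)]
        total += abs(sum(readings) / n_vehicles)
    return total / trials

single = mean_position_error(1, 100.0)   # one camera/GPS reading, ~1 m noise
fused = mean_position_error(25, 100.0)   # 25 vehicles' readings fused
```

With 25 contributing vehicles the simulated error falls to roughly a fifth of the single-reading error, consistent with the sqrt(n) rule.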

The company suggests the system will become a core element of semi-autonomous vehicles that will enter production around 2020, initially focusing on highways before expanding to ordinary roads.

Tesla has already implemented a similar system for its ‘Autopilot’ technology, logging data from each Model S’s camera, radar, ultrasonic and GPS sensors on a fleet-wide basis to build a system that is “continually learning and improving upon itself.”

The processing systems used in autonomous vehicles are expected to rely increasingly on advances in ‘machine learning’ to better mimic the human brain’s ability to handle unique situations that may never have been encountered during millions of miles of prototype development.

Many of Toyota’s rivals expect to offer semi-autonomous capabilities by the end of the decade, enabling vehicles to handle highway driving from the on-ramp to the off-ramp. Fully autonomous vehicles could take another decade to bring to market.