
Concordia researchers dig deep into the software powering self-driving cars

Baidu’s Apollo autonomous driving system faces significant challenges, say Peter Chen and Jinqiu Yang
November 3, 2020

Peeking under the hood of a self-driving car will only tell you so much. To really understand what makes them run, you have to dive deep into the software that makes up the autonomous driving systems (ADS) controlling driverless vehicles.

In a new paper to be presented next week at ESEC/FSE 2020, a flagship software engineering conference, Concordia researchers examine how complex machine learning models work together within the Apollo ADS.

Developed by Chinese tech giant Baidu, the Apollo system is a leader in the ADS field. Its app-based Apollo Go Robotaxi service is already operating in a limited capacity on the streets of three Chinese cities, including Beijing.

“We are trying to open up this black box of an autonomous driving system and explain to academics and software designers how this is designed,” says Tse-Hsun (Peter) Chen, assistant professor of computer science and software engineering at the Gina Cody School of Engineering and Computer Science.

Chen co-authored the paper with departmental colleague and assistant professor Jinqiu Yang, former master’s student Zi Peng and Lei Ma, associate professor at Kyushu University in Fukuoka, Japan.

Jinqiu Yang | Photo by David Ward
Tse-Hsun (Peter) Chen | Photo by David Ward

Machine learning in layers

The decision by Apollo’s developers to make their machine-learning software open source provides outside researchers with unique opportunities to study its functionality. (The data remains proprietary.) What they found surprised and impressed them.

“The Apollo system is the most complicated system out there that we could find and have access to,” says Yang.

“It is composed of not just one model for machine learning, but of 28 different components. That does not include the large amounts of code the developers need to get the data from the various pieces of hardware in the system, as well as the code needed to combine all that data in order to make important business-based decisions.”

The researchers describe the system’s architecture, its information flow, the interactions between its code logic and its machine learning models, and its data gathering, among other features.
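To make that information flow concrete, here is a minimal, self-contained sketch of the pattern the researchers describe: sensor inputs feed separate machine learning models, and ordinary code logic combines their outputs into a driving decision. This is not Apollo’s actual code (Apollo itself is implemented largely in C++); all module names, fields and thresholds below are illustrative assumptions.

    # Illustrative sketch only -- not Apollo's actual code.
    # All names, fields and thresholds are hypothetical.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class SensorFrame:
        camera_image: bytes   # raw camera frame
        lidar_points: list    # LiDAR point cloud
        radar_tracks: list    # radar returns
        hd_map_tile: dict     # high-definition map data for the current area

    def detect_obstacles_camera(image) -> List[dict]:
        # stands in for a learned vision model
        return [{"type": "vehicle", "distance_m": 35.0}]

    def detect_obstacles_lidar(points) -> List[dict]:
        # stands in for a learned LiDAR model
        return [{"type": "pedestrian", "distance_m": 12.0}]

    def plan_and_control(obstacles: List[dict], hd_map: dict) -> Dict[str, object]:
        # plain code logic that turns model outputs into a driving decision
        nearest = min((o["distance_m"] for o in obstacles), default=float("inf"))
        return {"brake": nearest < 15.0,
                "target_speed_kmh": 30 if nearest < 50.0 else 60}

    def drive_cycle(frame: SensorFrame) -> Dict[str, object]:
        # one pass of the pipeline: sensors -> ML models -> code logic -> command
        obstacles = (detect_obstacles_camera(frame.camera_image)
                     + detect_obstacles_lidar(frame.lidar_points))
        return plan_and_control(obstacles, frame.hd_map_tile)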

Aside from the 28 separate machine learning models, they found:

  • Apollo leverages data from multiple sources to improve the overall performance of machine learning models. These include not only cameras, but also a high-definition map for detailed road information, as well as LiDAR (Light Detection and Ranging) and radar.
  • Machine learning models interact with each other in diverse ways in the ADS. Such interactions are implemented through a substantial amount of code logic, which further complicates the overall integration of an ADS.
  • The code logic that integrates machine learning models often serves as a backup to, or safeguard for, other models, creating a layered response mechanism designed to optimize passenger safety (a simplified sketch of this pattern follows the list).
  • However, this layering adds complexity and may affect the performance of other machine learning models.
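As noted in the list above, much of the integration code acts as a backup or safeguard around the models. A simplified sketch of that pattern, under the same caveats as before (illustrative names and thresholds, not Apollo’s real code), might look like this:

    def detect_with_safeguard(camera_result, lidar_result, min_confidence=0.6):
        # camera_result and lidar_result are assumed to be (detections, confidence)
        # pairs produced by two different learned models.
        detections, confidence = camera_result
        if confidence >= min_confidence:
            return detections
        # layered response: when the primary model is unsure, code logic
        # falls back to a second model rather than trusting a weak output
        fallback_detections, _ = lidar_result
        return fallback_detections

    # example: a low-confidence camera result triggers the LiDAR fallback
    print(detect_with_safeguard(([], 0.2), ([{"type": "cyclist"}], 0.9)))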

A technical marvel with human issues

As impressive as the overall design of the Apollo system is, the researchers have identified room for improvement. For instance, safety issues arising from a lack of inter-vehicle communication need to be resolved.

Internet of Things (IoT) technology can address some of those issues, such as sudden lane changes. But without widespread coverage, its response time may lag behind that of human drivers, who can anticipate movements by making eye contact with other drivers.

“There are so many technical challenges out there, but I think many of them can be overcome in the next decade,” says Yang.

“But when it comes to the social aspects of the ADS — such as privacy and all the legal issues around responsibility — I think we are still a long way off.”
 

Read the paper: A First Look at the Integration of Machine Learning Models in Complex Autonomous Driving Systems.

 


