Machine Learning: The engine inside Tesla’s automated driving technology

Tesla is using machine learning to enhance its Autopilot software and usher in the future of autonomous driving.

Introduction

 

Since its founding in 2003, Tesla has consistently played a nonconformist role in the automobile industry with its big bet on electric cars, its pursuit of self-driving technology, and its brilliant but eccentric CEO, Elon Musk. In recent years, its futuristic vision has increasingly become the norm for the auto industry as a whole, and with this shift has come a record-setting 2018 Q3 with $312M of net income.[1] Tesla’s growth trajectory currently sits at an inflection point, and to capitalize on this momentum it is imperative that Tesla properly leverage machine learning technology for autonomous driving.

 

Machine Learning and Autonomous Driving

 

It is not an exaggeration to state that every single vehicle capable of autonomous driving is an embodiment of machine learning technology. The U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) currently recognizes six levels of driving automation (Levels 0 through 5), each with varying degrees of capability and required human intervention.[2] Within this framework, the two most critical features are: (1) the vehicle itself controlling all monitoring of the environment (required for Level 3 and above), and (2) the complete lack of need for human attention, even during complex traffic conditions (required for Level 5). At the core of both features lies machine learning: the algorithms running on these vehicles’ CPUs/GPUs must be trained on data compiled from hundreds of thousands of miles on the road so that they can recognize certain environments as imminent danger (e.g. an upcoming pothole) and calculate a course of action (e.g. swerving around it) with speed and precision. In this sense, machine learning is critical to Tesla’s (and every auto company’s) current product development process.
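To make the recognize-then-act idea concrete, the toy sketch below trains a simple classifier on synthetic “sensor” data and maps its hazard estimate to a driving action. It is purely illustrative; the feature names, labeling rule, and threshold are assumptions, and real perception stacks use deep networks over camera and radar frames rather than a logistic regression.

```python
# Toy sketch: train a classifier on synthetic "road data", then use it to pick an action.
# Illustrative only -- not how any production autonomous-driving stack actually works.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: each row = [obstacle_distance_m, closing_speed_mps, lane_offset_m]
X = rng.uniform(low=[0, -10, -2], high=[100, 10, 2], size=(5000, 3))
# Label a sample as hazardous when an obstacle is close and closing fast (made-up rule).
y = ((X[:, 0] < 20) & (X[:, 1] > 2)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def choose_action(obstacle_distance_m, closing_speed_mps, lane_offset_m):
    """Map the model's hazard probability to a (very simplified) driving action."""
    features = np.array([[obstacle_distance_m, closing_speed_mps, lane_offset_m]])
    p_hazard = model.predict_proba(features)[0, 1]
    return "swerve_or_brake" if p_hazard > 0.5 else "maintain_course"

print(choose_action(15.0, 5.0, 0.1))   # likely "swerve_or_brake"
print(choose_action(80.0, -3.0, 0.0))  # likely "maintain_course"
```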

 

Autopilot: Tesla’s Vision for the Future

 

Tesla first released its autonomous driving package, Autopilot, on October 9, 2014.[3] Ever since, it has aggressively expanded the frontier of autonomous driving with the backing of a ~$1B annual R&D budget.[4]

 

One aspect of Tesla’s machine learning program that sets it apart from its competitors is that the entire Tesla fleet operates as one wirelessly connected network. When one vehicle learns something new from additional data inputs, every Tesla vehicle can share in the improvement almost instantly. In this sense, Tesla vehicles resemble smart, connected devices such as the Nest thermostat.[5]
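Tesla does not publicly document how this fleet-wide learning works; one common pattern for pooling learning across many connected devices is to average model updates computed locally and push the result back out. The sketch below is a hypothetical illustration of that idea, not a description of Tesla’s pipeline.

```python
# Hypothetical sketch of fleet learning: each car refines a shared linear model on its own
# local driving data, and a central server averages the results before redeploying them
# to every vehicle over the air.
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES = 3
global_weights = np.zeros(N_FEATURES)  # model currently deployed to the whole fleet

def local_update(weights, X, y, lr=0.01, steps=500):
    """One car's contribution: gradient steps on its own data (plain linear regression here)."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate three cars, each with different local driving data.
fleet_updates = []
for _ in range(3):
    X = rng.normal(size=(200, N_FEATURES))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
    fleet_updates.append(local_update(global_weights, X, y))

# Server-side aggregation: average the per-car models and redeploy the result.
global_weights = np.mean(fleet_updates, axis=0)
print("updated fleet model:", global_weights)
```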

 

Ultimately, Tesla aspires to achieve fully autonomous navigation (Level 5) in the near future. It has already rolled out HW2, the hardware package necessary to support the sensory/monitoring aspects of Level 5 automation.[6] HW2 went into production in October 2016 and includes 8 surround cameras as well as 12 ultrasonic sensors.[7]
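As a rough illustration of what that sensor suite implies for the software side, the sketch below encodes the HW2 counts cited above in a small configuration object and estimates the camera-frame throughput the perception stack must handle. The field names, the forward-radar entry, and the per-camera frame rate are illustrative assumptions, not Tesla’s actual schema or figures.

```python
# Minimal, hypothetical description of a vehicle sensor suite as configuration data.
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorSuite:
    surround_cameras: int
    ultrasonic_sensors: int
    forward_radar: int  # assumed; not mentioned in the text above

HW2 = SensorSuite(surround_cameras=8, ultrasonic_sensors=12, forward_radar=1)

def camera_frames_per_second(suite: SensorSuite, fps_per_camera: int = 36) -> int:
    """Rough total camera throughput the onboard computer must process (fps is an assumption)."""
    return suite.surround_cameras * fps_per_camera

print(HW2, camera_frames_per_second(HW2))
```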

 

A critical short-term focus area for Tesla is therefore the development of its own in-house chips to support machine learning computations. In August 2018, Musk announced that Tesla plans to replace Nvidia’s AI chips (the Drive PX2 platform currently used in Autopilot 2.5) with a chip of its own that can process “over 2,000 frames a second” with full redundancy and fail-over. This shift towards in-house chip development comes on the heels of similar moves by Apple and Google and highlights how heavily machine learning capability depends on computing power.[8]
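The redundancy and fail-over claim suggests duplicated compute paths so that a single fault never stalls the perception pipeline. The sketch below is a hypothetical illustration of that pattern with two redundant inference nodes; it is not a description of Tesla’s chip or software, and the failure rate is invented for the example.

```python
# Hypothetical illustration of redundant inference with fail-over: the primary compute node
# handles each frame, and the secondary takes over if the primary faults mid-frame.
import random

def run_inference(node_name, frame):
    """Stand-in for a neural-network forward pass; randomly fails to simulate a hardware fault."""
    if random.random() < 0.05:
        raise RuntimeError(f"{node_name} fault on frame {frame}")
    return {"node": node_name, "frame": frame, "action": "maintain_course"}

def process_frame(frame):
    """Try the primary node first; fall back to the secondary on any failure."""
    for node in ("primary", "secondary"):
        try:
            return run_inference(node, frame)
        except RuntimeError:
            continue
    raise RuntimeError("both compute nodes failed")

for frame_id in range(5):
    print(process_frame(frame_id))
```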

 

In parallel, Tesla has been inching towards fully autonomous driving by adding new capabilities to its algorithms. The most recent update to the Autopilot software supports complex tasks such as automatic lane changing with driver confirmation and on-ramp/off-ramp transitions on freeways.[9] Due to the specialized nature of these tasks, such enhancements require extensive, targeted machine learning work on the back end.
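A feature like lane changing with driver confirmation is essentially a small decision protocol layered on top of the perception models. The sketch below shows one hypothetical way such a confirmation gate could be structured as a state machine; the state names and transitions are assumptions for illustration, not Autopilot internals.

```python
# Hypothetical state machine for a lane change that requires driver confirmation.
from enum import Enum, auto

class LaneChangeState(Enum):
    IDLE = auto()
    PROPOSED = auto()    # system suggests a lane change (e.g., to follow the route)
    CONFIRMED = auto()   # driver has acknowledged (e.g., via the turn stalk)
    EXECUTING = auto()
    COMPLETE = auto()

def step(state, driver_confirmed=False, lane_clear=False):
    """Advance the lane-change protocol one tick based on driver input and perception."""
    if state is LaneChangeState.PROPOSED and driver_confirmed:
        return LaneChangeState.CONFIRMED
    if state is LaneChangeState.CONFIRMED and lane_clear:
        return LaneChangeState.EXECUTING
    if state is LaneChangeState.EXECUTING:
        return LaneChangeState.COMPLETE
    return state

state = LaneChangeState.PROPOSED
state = step(state, driver_confirmed=True)  # driver taps the stalk
state = step(state, lane_clear=True)        # perception reports the target lane is clear
state = step(state)
print(state)  # LaneChangeState.COMPLETE
```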

 

Looking Ahead

 

Looking ahead, it is imperative that Tesla take action along several other dimensions in order to drive meaningful progress in this cutting-edge, highly complex space. Since no automaker has yet achieved full Level 5 automation, the exact computational and algorithmic specifications required are still unknown. Tesla must therefore be prepared to address any additional product development needs that arise in the machine learning space swiftly and flexibly. In addition, it must collaborate with regulatory bodies to clearly define the legal and policy frameworks surrounding machine learning in the specific context of automobiles.

 

Several open questions remain: Is Tesla’s decision to pursue radical, game-changing automation over incremental automation sound? (Competitors such as Audi and Mercedes-Benz have focused their efforts on the latter.) Will we ever see a time when the average human driver entrusts machine learning algorithms with what are potentially life-or-death choices? And should machine learning take priority over other R&D initiatives such as battery improvements and product design?

 

(799 words)

 

 

 

[1] Tesla 2018 Q3 Investor Letter. 2018. Tesla Investor Relations. http://ir.tesla.com/static-files/725970e6-eda5-47ab-96e1-422d4045f799

[2] “Automated Vehicles for Safety.” 2018. U.S. Department of Transportation. https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety#issue-road-self-driving

[3] “Tesla orders 3rd-party survey to prove owners understand ‘Autopilot,’ 98% say they do.” 2016. Electrek.co. https://electrek.co/2016/11/10/tesla-survey-owners-autopilot/

[4] “Tesla Inc. Research and Development Expense.” 2018. Ycharts.com. https://ycharts.com/companies/TSLA/r_and_d_expense

[5] “How Tesla is ushering in the age of the learning car.” 2015. Fortune. http://fortune.com/2015/10/16/how-tesla-autopilot-learns/

[6] “Full Self-Driving Hardware on All Cars.” Retrieved 2016. Tesla.com. https://www.tesla.com/autopilot

[7] In a move that has been widely debated, Musk has excluded LiDAR sensors from Tesla’s hardware packages. LiDAR sensors provide higher-quality visual data, especially in low-light situations, but are more costly than cameras and conventional radars.

[8] “Why Tesla dropped Nvidia’s AI platform for self-driving cars and built its own.” 2018. Forbes.com. https://www.forbes.com/sites/jeanbaptiste/2018/08/15/why-tesla-dropped-nvidias-ai-platform-for-self-driving-cars-and-built-its-own/#57336a946722

[9] Tesla 2018 Q3 Investor Letter. 2018. Tesla Investor Relations. http://ir.tesla.com/static-files/725970e6-eda5-47ab-96e1-422d4045f799


Student comments on Machine Learning: The engine inside Tesla’s automated driving technology

  1. I think Tesla pursuing many parallel technologies is essential for its competitive advantage and maintaining an edge in the driving world. I like that Elon Musk thinks big and pushes for revolution over evolution. Though riskier, it makes for changes that truly add value for the consumer and drives innovation across the sector. I don’t think people will fully trust autonomy, though, until the tech proves itself not just safer than humans, but SIGNIFICANTLY safer.

  2. Interesting view of Tesla’s approach to this space and, in particular, the bets it is placing on chip development. Another notable bet that the company is making is around the use of lidar technology – Tesla’s hardware suite (including the new HW2) lacks the lidar sensors that most other autonomous driving programs claim are an essential ingredient on the road to Level 5 autonomy. While contentious, Tesla’s stance is that humans can navigate our roads with only the use of cameras (our eyes) and software (our brains), so the cars should be able to do the same, and it’s just a matter of getting the software piece right. As the team continues to push the envelope in their Autopilot releases, it will be interesting to see how the R&D into chips and the choice of sensors pays off, or whether the decisions need further revision to make Level 5 a reality.

  3. Interesting article, but the really interesting questions for me are around implications, not actual rollout timelines and priorities. When autonomous driving does become a thing and an accident does occur, who becomes liable? Is it the company behind the AI systems? If they are off the hook, who then becomes responsible? Is Tesla required to make modifications based on past accidents?

    The ethical implications of AI in life-and-death scenarios are the really interesting questions, but the article does a good job of laying out some of the basic ones.
