why do self-driving cars look so weird?


Craziness on the roof, unshapely warts all around, lots of stuff in the trunk. We explain self-driving car components, why you probably shouldn't pay Tesla for "Full Self-Driving," and why investment belongs in niche opportunities, high-performance vendor offerings, or products whose cost and form factor are already consumer-ready.

The seductive approach to self-driving cars is to mimic a human: add a couple of cameras looking ahead and computers to process the images. Using this approach, in 1995 a vision-equipped, computer-packed Mercedes S-class made a mostly autonomous 1,000-mile trip. A few years later, auto camera company Mobileye was founded, but its success came not from enabling autonomy but from providing affordable, limited object recognition in a single housing combining optics, custom algorithms, and some electronics. While currently used in self-driving prototypes, it was the prospect of wide deployment in consumer vehicles for driver assistance (e.g. lane departure warnings) that led to Intel's $15BN acquisition of Mobileye in 2017 (42x revenue of about $330MM). On consumer vehicles, you can see these cameras around the rearview mirror, with a triangular opening facing forward; on prototypes, along the forward roofline.

Today only Tesla thinks a camera-first approach will work for full autonomy. When you hand over the extra $3K for "Full Self-Driving," you're betting the eight cameras on a Tesla will eventually prove Elon Musk right against sometimes profane disagreement. Successful vision innovation, like successful "intelligence" in any field, comes when aspirations are highly specific to the application. Examples include hardware tuned for processing images (Movidius, acquired by Intel in 2016 for $355MM), or AImotive's ($38MM Series B financing on January 4) focus on handling noise, the bane of AI, such as that from inclement weather.

Technology is far short of mimicking the brain's image-interpretation capability, but it's much more advanced in sensing. So prototypes try to make up for "intelligence" with better-than-eyesight sensing. You're of course familiar with radar, which detects objects, including through weather, by transmitting radio waves and measuring their reflections. The stubbornly costly and bulky LIDAR, the big contraption on top of the weird-mobiles, does the same thing but uses lasers rather than radio waves. Critically, both provide distance and therefore relative-speed information, relieving computers of the struggle of calculating these parameters from camera images.
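
To make that concrete, here's a minimal Python sketch of the time-of-flight math radar and LIDAR share: an echo's round-trip time yields range, and two ranges a moment apart yield relative speed. The timing values below are illustrative, and production automotive radars typically derive speed from Doppler shift rather than from successive pulses, but the geometry is the same.

    C = 299_792_458.0  # speed of light in m/s; applies to radio waves and laser light alike

    def range_from_round_trip(t_seconds: float) -> float:
        """Distance to a reflector from the echo's round-trip time."""
        return C * t_seconds / 2.0  # halve it: the pulse travels out and back

    def closing_speed(range_1_m: float, range_2_m: float, dt_seconds: float) -> float:
        """Relative speed from two ranges dt apart; negative means approaching."""
        return (range_2_m - range_1_m) / dt_seconds

    # An echo returning after 400 ns puts the object about 60 m away...
    r1 = range_from_round_trip(400e-9)
    # ...and if 0.1 s later the echo takes only 390 ns, the gap is closing.
    r2 = range_from_round_trip(390e-9)
    print(f"range {r1:.1f} m, closing at {closing_speed(r1, r2, 0.1):.1f} m/s")  # ~ -15 m/s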

[Image: Uber WeirdMobile.png]

About 20 years ago (notice how long these technologies have been deployed?), car manufacturers started using radar systems (and briefly LIDAR, in a few cases) to offer safety and convenience features such as speed-variable cruise control and collision avoidance. The radar is installed in the car's grille, for example behind the Mercedes "star". Few people paid for this typically optional equipment, and even today few buyers pay for optional safety features. However, collision avoidance has become table stakes in many markets, and this is driving radar unit volume. Due to commoditization, radar dollar volumes are still a fraction of the $1BN+ for forward-facing cameras.

Radar is best at detecting metal, is highly reliable in doing so, and is always present in prototype and production vehicles with advanced aspirations. Not hitting metallic objects is a big part of not failing a driving test, so this is great. But wouldn't it be better to detect a broader range of objects?

Enter LIDAR, originally developed for mapping and adapted about 10 years ago for self-driving prototypes by current leader Velodyne (private, $200MM revenues). Google, now Waymo, bet the farm on this technology beginning about 8 years ago with no commercial success, but LIDAR has for years been a must-have for self-driving aspirants (Tesla excluded). In fact, last year GM acquired 3-year-old Strobe to enable tighter integration. Most OEMs are partnered with specific LIDAR manufacturers and have demanded certain technical specs be met within about 18 months, so the OEMs can subsequently deliver on their autonomous-vehicle promises. Indications are that none of the LIDAR providers can make it.

The advantages of LIDAR include centimeter-level accuracy and, due to the frequency at which its laser operates, an ability to "see" mostly the same things a human does. Beyond costs running from the thousands to the tens of thousands of dollars, a LIDAR disadvantage is that the laser and optical housing are bulky, and the rotating mechanism allowing a 360-degree view (just like the radars in movies) even more so. These systems are easy to spot, as they typically have a custom perch atop the vehicle and, best case, look like the siren on a cartoon police car. While the large rooftop LIDARs on early prototype vehicles have superior fields of view, many newer prototypes deploy compact LIDARs on top and others along the sides of the vehicle (sometimes on an extension). Each unit's limited field of view is combined with the others', resulting in a better view of low-lying and nearby objects; system cost is greater, though, since optical components are duplicated. Innovation is directed at the perennially elusive goal of consumer-vehicle form factors and costs. Examples include Velodyne; leading private company Quanergy (Daimler; Delphi), which has a long and well-funded history but, to our knowledge, no working deployments; and more recent arrival TriLumina ($36MM in May 2017). Short-range and integrated alternatives include FusionSens and Oryx.
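
As a toy illustration of why multi-unit rigs duplicate optics but widen coverage, here's a Python sketch (using numpy) of merging two units' point clouds into one vehicle-centered cloud. The mounting positions and angles are invented for the example; real systems calibrate these transforms precisely.

    import numpy as np

    def to_vehicle_frame(points: np.ndarray, yaw_deg: float, mount_xyz) -> np.ndarray:
        """Rotate a unit's (N, 3) point cloud by its mounting yaw about the
        vertical axis, then translate by where the unit sits on the vehicle."""
        yaw = np.radians(yaw_deg)
        rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                        [np.sin(yaw),  np.cos(yaw), 0.0],
                        [0.0,          0.0,         1.0]])
        return points @ rot.T + np.asarray(mount_xyz)

    # Hypothetical rig: a compact unit on each front corner, angled 45 degrees outward.
    left_cloud  = np.array([[5.0, 0.0, 0.2]])  # a point 5 m ahead of the left unit
    right_cloud = np.array([[5.0, 0.0, 0.2]])  # same local reading on the right unit

    merged = np.vstack([
        to_vehicle_frame(left_cloud,  yaw_deg=+45, mount_xyz=(2.0, +0.9, 0.5)),
        to_vehicle_frame(right_cloud, yaw_deg=-45, mount_xyz=(2.0, -0.9, 0.5)),
    ])
    print(merged)  # one combined cloud spanning both units' fields of view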

All input from LIDARs, cameras, etc. comes at the cost of processing enormous amounts of data. Remember the "stuff in the trunk"? There you'll find a carry-on-sized computing enclosure. Thanks to a stroke of spectacular mathematical good fortune for NVIDIA, it's typically their processor. More on this later.
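
A back-of-envelope Python estimate puts "enormous" in numbers; every sensor spec below is an assumed round figure, not any particular vendor's.

    cameras = 8                          # assumed camera count on a prototype
    cam_bytes_s = 1920 * 1080 * 3 * 30   # per camera: 1080p, 24-bit color, 30 fps, uncompressed
    lidar_points_s = 1_300_000           # assumed points/s for a high-end spinning unit
    lidar_bytes_s = lidar_points_s * 16  # ~16 bytes per point (x, y, z, intensity, timestamp)

    total = cameras * cam_bytes_s + lidar_bytes_s
    print(f"{total / 1e9:.2f} GB/s")     # ~1.5 GB/s, before radar, GPS, and the gyro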

If all this weren't enough, here are a few more. Almost all self-driving prototypes have a gyroscope in the trunk to keep track of the car's orientation. For example, when making a 90-degree turn, it's the gyroscope that helps the car understand the rate of turn: the maneuver outstrips the tracking capability of other sensors. Smartphones have highly compact gyros, but the accuracy and robustness required for automotive applications mean the successful vendors, KVH and Gener8, have experience in aviation and military applications. In addition to all this, weird-mobiles need the occasional precision GPS fix, precision maps, and many hours of training the vehicle over specific routes.
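
A minimal Python sketch of what the gyro contributes: integrate the measured turn rate over time to track heading through the maneuver. The rate and timing numbers are illustrative only.

    def integrate_heading(rates_deg_s, dt_s: float) -> float:
        """Accumulate heading change from a stream of gyro rate samples."""
        heading = 0.0
        for rate in rates_deg_s:
            heading += rate * dt_s  # simple rectangular integration
        return heading

    # A 90-degree turn taken over 3 s at a constant 30 deg/s, sampled at 100 Hz.
    samples = [30.0] * 300
    print(integrate_heading(samples, dt_s=0.01))  # -> 90.0 degrees

In practice the hard part is drift: tiny rate biases integrate into large heading errors over time, which is why automotive-grade units draw on aviation and military heritage.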

Hopefully it's clear that, despite years of experience and deployment, advanced autonomy will remain relegated to applications that can handle the cost and form factor of the required components. What's encouraging is that when trained extensively over specific routes, the vehicles are good enough for near-term (2-3 years) deployment in applications such as buses and taxis. Therefore, high-performance offerings for these applications have a market. Otherwise, given automotive-supplier adoption cycles, vendors need consumer-compliant price and form factor today to see financial returns in a reasonable time.