
GTC 2017 Part 2: Autonomous Vehicles Are Approaching

Following my Part 1 post on GTC 2017, I would like to share how NVIDIA and its partners are working hard on autonomous vehicles.

 

NVIDIA has announced its Drive PX 2 platform, which can be neatly mounted in the trunk of a car, pre-wired for cameras and other sensors. It is targeted at autonomous Level 3 and Level 4 development today. The hardware interfaces connected to the PX 2 include camera, LiDAR, radar, DBW (drive-by-wire), CAN bus, IMU, GPS and V2X modules.

The platform comes with DriveWorks, an autonomous driving SDK built on top of CUDA, cuDNN, TensorRT and NVMedia for all the graphics, imaging, deep learning and multimedia support needed to work with cameras, sensors and, of course, NVIDIA GPUs. It includes the Sensor Abstraction Layer (SAL) API, a common, simple and unified interface to all sensors, including cameras, which lets developers serialize raw sensor data and work easily with other components such as CUDA, the GL engine and the H.264/H.265 codecs. DriveWorks also includes the latest CUDA 8 with TensorRT, a high-performance deep learning inference optimizer and runtime. Developers can use TensorRT to deliver fast inference with INT8- or FP16-optimized precision, which significantly reduces latency for real-time applications such as object detection in autonomous vehicles. DriveWorks also contains the NVMedia library, with APIs to capture images from cameras and to process and encode video signals.
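To give a feel for that TensorRT flow, here is a minimal C++ sketch. The API names follow the TensorRT 2.x era and changed in later releases, and the Caffe files and blob name are placeholders of mine, not an actual DriveWorks model:

```cpp
// Minimal TensorRT build-and-run sketch (TensorRT 2.x-era C++ API).
// The model files and blob name are placeholders, not a real DriveWorks model.
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Build phase: parse the trained network and optimize it for the GPU.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobs = parser->parse(
        "detector.prototxt", "detector.caffemodel",    // placeholder paths
        *network, DataType::kHALF);                    // request FP16 weights
    network->markOutput(*blobs->find("detection_out")); // placeholder blob name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    if (builder->platformHasFastFp16())
        builder->setHalf2Mode(true);   // enable FP16 kernels for low latency

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    network->destroy(); parser->destroy(); builder->destroy();

    // Inference phase: run the optimized engine. In real code, the two
    // bindings must be cudaMalloc'd device buffers sized from the engine.
    IExecutionContext* context = engine->createExecutionContext();
    void* buffers[2] = { nullptr, nullptr };  // placeholder device buffers
    context->execute(1, buffers);             // synchronous, batch size 1

    context->destroy(); engine->destroy();
    return 0;
}
```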

The most amazing part of DriveWorks is its built-in perception deep neural networks: DriveNet, for multi-class detection of cars, trucks, pedestrians, traffic signs and other objects on the road; LaneNet, for lane detection; and OpenRoadNet, for detecting free space around the vehicle. That means you don't need to train your system from scratch to detect every object on the road; these networks are all built into the Drive PX platform.
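To show how an application might combine the outputs of those three networks, here is a small, self-contained sketch. The structs and the laneClear check are my illustration of the idea, not the DriveWorks API:

```cpp
// Illustration only: hypothetical output types for the three perception
// networks (these are NOT the DriveWorks API), fused into one simple check:
// "is my lane free of obstacles for the next N meters?"
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

struct Detection { std::string cls; float distance_m; float lateral_m; }; // DriveNet-style
struct LaneModel { float half_width_m; };                                 // LaneNet-style
struct FreeSpace { float ahead_m; };                                      // OpenRoadNet-style

bool laneClear(const std::vector<Detection>& objs,
               const LaneModel& lane, const FreeSpace& space,
               float horizon_m) {
    if (space.ahead_m < horizon_m) return false;  // free space ends too soon
    for (const auto& o : objs)                    // any object inside our lane?
        if (std::abs(o.lateral_m) < lane.half_width_m && o.distance_m < horizon_m)
            return false;
    return true;
}

int main() {
    std::vector<Detection> objs = {{"car", 42.0f, 0.3f}, {"pedestrian", 80.0f, 5.2f}};
    LaneModel lane{1.8f};
    FreeSpace space{60.0f};
    std::cout << (laneClear(objs, lane, space, 50.0f) ? "clear" : "blocked") << "\n";
    return 0;  // prints "blocked": the car ahead sits inside our lane
}
```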

Several companies demonstrated how they use NVIDIA's Drive platform to develop autonomous vehicles: Ford, Mercedes, Elektrobit, AutonomouStuff, Luxoft and fka.

The Ford (Argo.ai) team demonstrated the use of deep learning on Ford's autonomous vehicles. They basically use a deep neural network as the brain to estimate the distance of an object from the visual information of two cameras via stereo matching. They proposed a deep network with a cross-correlation layer to estimate closer objects more accurately, and then apply an RNN to remember the locations of moving objects and predict where they will be.
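To make the cross-correlation idea concrete, here is a tiny sketch of stereo matching on one feature row: correlate the left features against shifted right features, and the disparity with the best score indicates depth (the larger the disparity, the closer the object). This is my illustration, not Ford's network:

```cpp
// Minimal sketch of the cross-correlation idea behind stereo matching:
// correlate a left-image feature window against shifted right-image windows;
// the shift (disparity) with the highest correlation wins. Illustration only.
#include <iostream>
#include <vector>

// Correlation score between left features at x and right features at x - d.
float correlate(const std::vector<float>& left, const std::vector<float>& right,
                int x, int d, int win) {
    float score = 0.0f;
    for (int i = -win; i <= win; ++i) {
        int r = x - d + i;
        if (r < 0 || r >= static_cast<int>(right.size())) continue;  // stay in bounds
        score += left[x + i] * right[r];
    }
    return score;
}

int main() {
    // 1-D feature rows standing in for one scanline of CNN feature maps.
    std::vector<float> left  = {0, 0, 1, 3, 1, 0, 0, 0, 0, 0};
    std::vector<float> right = {1, 3, 1, 0, 0, 0, 0, 0, 0, 0};  // same blob, shifted by 2

    int x = 3, win = 1, maxDisp = 4;
    int best = 0; float bestScore = -1e9f;
    for (int d = 0; d <= maxDisp; ++d) {          // build the cost volume at x
        float s = correlate(left, right, x, d, win);
        if (s > bestScore) { bestScore = s; best = d; }
    }
    std::cout << "best disparity at x=3: " << best << "\n";  // prints 2
    return 0;
}
```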

Mercedes presented how to use AI at the edge for an intelligent user experience. The idea is to use AI to decide when and how to automate controls such as temperature, entertainment, windows, etc. The AI decision has to be performed at the edge instead of in the cloud, to avoid latency and network connectivity issues. They optimized the deep neural network by dropping out nodes so that AI automation can run with the limited processing power and memory at the edge. They finished the talk with a quote from Dr. Dieter Zetsche, Chairman of the Board of Management of Daimler AG and Head of Mercedes-Benz Cars: "We're working on a new generation of vehicles that truly serve as digital companions. They…learn your habits,…adapt to your choices,…predict your moves,… and interact with your social network." It looks like we have more to expect from Mercedes.
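Mercedes did not show code, but the node-dropping idea can be sketched: score each hidden unit's importance and keep only the strongest within the edge device's budget. The importance measure here (sum of absolute outgoing weights) is my assumption, not their actual criterion:

```cpp
// Sketch of shrinking a layer by dropping weak hidden units, in the spirit
// of the dropout-based optimization described in the talk. The scoring rule
// is my assumption for illustration, not Mercedes' criterion.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // rows = hidden units, columns = outgoing weights of each unit
    std::vector<std::vector<float>> w = {
        {0.9f, -1.2f, 0.4f},     // strong unit
        {0.01f, 0.02f, -0.01f},  // weak unit: candidate to drop
        {-0.7f, 0.6f, 0.8f},     // strong unit
        {0.03f, -0.02f, 0.02f}   // weak unit
    };
    std::size_t keep = 2;  // compute/memory budget of the edge device

    // Score each unit by total outgoing weight magnitude, then sort by score.
    std::vector<std::size_t> idx(w.size());
    std::iota(idx.begin(), idx.end(), 0);
    auto score = [&](std::size_t i) {
        float s = 0.0f;
        for (float v : w[i]) s += std::fabs(v);
        return s;
    };
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return score(a) > score(b); });

    // Keep only the strongest units; the pruned layer is smaller and faster.
    std::vector<std::vector<float>> pruned;
    for (std::size_t i = 0; i < keep; ++i) pruned.push_back(w[idx[i]]);
    std::cout << "kept " << pruned.size() << " of " << w.size() << " units\n";
    return 0;
}
```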

AI-assisted user experience from Mercedes

 

Luxoft presented how they use computer vision to bring AR (augmented reality) into vehicles. There are lots of challenges, such as usability, hardware limitations, navigation data dependency, latency, etc. To create an AR view projected on the windshield that displays information from vehicle sensors, map data, telematics and navigation guidance, they built their own solution with data fusion techniques. Their road scene recognition and object tracking covers road boundaries, lane detection, vehicle detection, distance and time-to-collision estimation, and road sign and parking slot recognition. They also provide precise positioning and integration with all kinds of sensor data, including V2I and V2V telematics. Everything then needs to blend nicely into a natural augmented reality display. Of course, they use deep neural networks running on the NVIDIA TK1 for object detection, lane detection and scene semantic segmentation from camera images. The AR rendering then has to be integrated into their hardware platform as a complete solution, which looks like quite a lot of work.
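One of the quantities in that list, time to collision, is easy to make concrete once tracking provides a range to the vehicle ahead. A minimal sketch of the estimate (mine, not Luxoft's implementation):

```cpp
// Minimal time-to-collision (TTC) sketch: given two range measurements to a
// tracked vehicle, the closing speed gives TTC = range / closing_speed.
// Illustration only, not Luxoft's implementation.
#include <iostream>
#include <optional>

std::optional<double> timeToCollision(double range0_m, double range1_m, double dt_s) {
    double closing = (range0_m - range1_m) / dt_s;  // m/s toward us
    if (closing <= 0.0) return std::nullopt;        // gap is opening: no collision
    return range1_m / closing;
}

int main() {
    // Tracked car was 30 m ahead, now 28 m ahead, 0.5 s later.
    if (auto ttc = timeToCollision(30.0, 28.0, 0.5))
        std::cout << "TTC: " << *ttc << " s\n";  // 28 / 4 = 7 s
    return 0;
}
```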

Luxoft CV/AR solution

 

AutonomouStuff (AS) demonstrated how you can build an autonomous car (any car) in 5 steps with their R&D platform. As they claim on their website, they are the "world's leader in supplying components, engineering services and software that enable autonomy." Let's see what we can do in 5 steps.

Building an L4 autonomous vehicle in 5 steps

 

Step 0 is easy: just decide on the application, such as a shuttle, passenger vehicle, racing car, truck or any special-purpose vehicle. Step 1 is to choose your vehicle platform, such as automotive, NEV, off-road or anything else. The key to vehicle selection is drive-by-wire capability; that's why they chose the Lincoln MKZ as their demo platform, since this capability is built in. Of course, they are also using NVIDIA's Drive PX 2 for L3 and L4 deployment. Step 2 is to integrate the perception/positioning devices, including radar, LiDAR, camera and ultrasonic sensors for sensing, and GPS/IMU/RTK for positioning. They presented their platform with the NVIDIA Drive PX 2 and DriveWorks, but you do have options in deciding what kinds of sensors to include in the perception kit. All the sensors can be professionally installed, with the PX 2 mounted in the trunk: you can install up to 11 cameras, 6 long-range radars from Continental, 1 LiDAR in the front bumper and 2 on the roof (see the manifest sketch after this paragraph). Step 3 is data fusion, which relies completely on DriveWorks for data acquisition and surround detection with NVIDIA's perception DNNs: DriveNet, LaneNet and OpenRoadNet. Step 4 is the automation algorithms, where you apply your own algorithms to fit the particular environment your vehicle will operate in. It sounds easy to have your own autonomous vehicle in 5 steps, but I think it's just the beginning. The most difficult part is getting your vehicle on the road, and surviving!
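Written out as a small manifest (my framing for illustration, not AS software), the Step 2 sensor kit looks like this:

```cpp
// The AutonomouStuff Step 2 sensor kit, written out as a small manifest.
// The struct and names are my framing for illustration, not AS software.
#include <iostream>
#include <string>
#include <vector>

struct SensorSpec { std::string type; int count; std::string placement; };

int main() {
    std::vector<SensorSpec> kit = {
        {"camera",            11, "around the vehicle"},
        {"long-range radar",   6, "Continental units"},
        {"LiDAR",              1, "front bumper"},
        {"LiDAR",              2, "roof"},
        {"GPS/IMU/RTK",        1, "positioning"},
    };
    for (const auto& s : kit)
        std::cout << s.count << " x " << s.type << " (" << s.placement << ")\n";
    return 0;
}
```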

It's not a joke: AS does have a Lincoln in the conference center lobby!

Integrated in the dashboard

 

PX 2 in the trunk

 

Another interesting talk at GTC came from Forschungsgesellschaft Kraftfahrwesen mbH Aachen; let's just call it fka. They presented Automated Truck Driving and Platooning with the Drive PX 2. For those not familiar with truck platooning, here is a definition, courtesy of www.eutruckplatooning.com: "Truck Platooning comprises a number of trucks equipped with state-of-the-art driving support systems – one closely following the other. This forms a platoon with the trucks driven by smart technology, and mutually communicating. Truck platooning is innovative and full of promise and potential for the transport sector." The motivation for truck platooning is based on the fact that "too small gaps are the main reason for truck accidents in Germany!" The goals of platooning, shown below, include safety improvement, relief and support for drivers, better use of road space, traffic flow optimization and reduced fuel consumption.

The goals of a platooning system

 
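fka did not show their controller, but the gap-keeping core of platooning is often described as a constant time-gap law: the follower regulates its gap toward a standstill distance plus a time gap times its own speed. A minimal sketch under those assumptions (gains and parameters are mine, not fka's):

```cpp
// Minimal constant time-gap following sketch: the follower tries to hold
// gap = standstill + timeGap * ownSpeed behind the leader. All gains and
// parameters are illustrative assumptions, not fka's controller.
#include <iostream>

int main() {
    double leaderPos = 50.0, leaderVel = 22.0;  // m, m/s (~80 km/h)
    double pos = 0.0, vel = 20.0;               // follower state
    const double standstill = 5.0, timeGap = 0.5;  // small platooning gap
    const double kGap = 0.2, kVel = 0.6, dt = 0.1;

    for (int step = 0; step < 600; ++step) {    // simulate 60 s
        double desiredGap = standstill + timeGap * vel;
        double gap = leaderPos - pos;
        // PD-style law on gap error and relative speed.
        double accel = kGap * (gap - desiredGap) + kVel * (leaderVel - vel);
        vel += accel * dt;
        pos += vel * dt;
        leaderPos += leaderVel * dt;
    }
    std::cout << "final gap: " << leaderPos - pos << " m\n";  // ~5 + 0.5*22 = 16 m
    return 0;
}
```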

The following slide demonstrated how automated truck driving differs from that of small cars. Automated driving basically involves perception, localization, motion planning and actuation. As we can expect, trajectory planning is much more difficult for a truck because of its size; the kinematic sketch after the slide shows why.

Trajectory Planning for Truck Automated Driving

 
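A tiny kinematic tractor-trailer model illustrates one reason: the trailer heading is an extra state that responds to steering only with a lag, so the planner must account for the swept path of the whole rig. The model and parameters here are my illustration, not fka's planner:

```cpp
// Why truck trajectory planning is harder: a kinematic tractor-trailer model
// carries an extra articulation state (trailer heading), so a steering input
// affects the trailer with a lag. Parameters are illustrative assumptions.
#include <cmath>
#include <iostream>

int main() {
    double x = 0, y = 0;             // tractor rear-axle position (m)
    double theta = 0, psi = 0;       // tractor and trailer headings (rad)
    const double L = 3.8, D = 10.0;  // wheelbase and hitch-to-trailer-axle (m)
    const double v = 5.0, delta = 0.2, dt = 0.05;  // speed, steering, time step

    for (int i = 0; i < 100; ++i) {  // 5 s of a steady turn
        x     += v * std::cos(theta) * dt;
        y     += v * std::sin(theta) * dt;
        theta += v * std::tan(delta) / L * dt;        // tractor turns
        psi   += v * std::sin(theta - psi) / D * dt;  // trailer follows with lag
    }
    std::cout << "tractor heading: " << theta << " rad, trailer heading: "
              << psi << " rad\n";  // trailer clearly lags the tractor
    return 0;
}
```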

Letting a machine drive a truck automatically may sound dangerous at first. But if we can make it safer than human driving, we should be able to improve energy efficiency and traffic efficiency, and save cost on staff and fuel. It's not a bad idea at all. And it's no joke: there is actually a truck on the show floor with a PX 2!

Truck with PX 2 on the show floor

 

For those who are interested in autonomous vehicle technologies, you can learn from NVIDIA's Deep Learning Institute or from Udacity, which offers a nanodegree you can put on your resume.

If you have read this far, thank you, and I hope you enjoyed the article!


