
GTC 2017: AI, Autonomous Car and Beyond – Part 1: Vision

A few weeks before the GPU Technology Conference (GTC) 2017, I hesitated about attending because of the high price tag on the ticket. In the end, I decided to attend GTC 2017 at the San Jose McEnery Convention Center from May 8 to May 11. After the conference, I concluded it was worth every penny: it demonstrated the latest and greatest technologies in artificial intelligence, autonomous vehicles, and deep learning algorithms, and showed how giant companies like Amazon, Facebook, and Google are leveraging NVIDIA's strength by building amazing software on the GPU in both the data center and embedded devices.

What I would like to cover in this blog series includes the following, split into 4 parts:

  • NVIDIA’s Vision on GPU and its future, from Jensen Huang’s keynote speech
  • Artificial intelligence is everywhere
  • Autonomous vehicles are approaching
  • Artificial intelligence on the edge

I will try to cover these topics in 4 different blog posts, so please be patient. I apologize for some poor-quality pictures, since the conditions did not always allow for picture-perfect shots.

NVIDIA’s Vision on GPU and its future

Here is a snapshot of the crowd waiting for Jensen Huang, NVIDIA’s visionary founder and CEO, before his keynote speech.

GTC2017 Keynote

Jensen Huang’s keynote at GTC2017

 

Jensen opened with the history of the microprocessor and how Moore’s Law has evolved. Of course, NVIDIA is still a semiconductor company at heart. He then explained how the GPU can extend that trajectory, in what some have called Moore’s Law squared.

How the GPU extends Moore’s Law

 

Here is a quick demo of how the GPU can help with extremely computationally demanding tasks such as ray tracing, or style transfer, which takes two existing photos and generates a new image carrying the content of one and the style of the other.

Deep Learning for Ray Tracing

 

Deep Learning for Style Transfer
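
The keynote only showed the visual result, but the general recipe behind this kind of style transfer is well documented (Gatys et al.): optimize a generated image so that its CNN feature maps match the content photo while the Gram matrices of those features match the style photo. Below is a minimal NumPy sketch of that loss, assuming the feature maps have already been extracted from a pretrained network (the extraction step is omitted, and all names here are illustrative, not NVIDIA’s demo code).

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (channels, height*width) feature map.
    It captures which feature channels co-activate, i.e. the image's 'style'."""
    c, hw = feats.shape
    return feats @ feats.T / (c * hw)

def style_transfer_loss(gen_feats, content_feats, style_feats,
                        content_weight=1.0, style_weight=1e3):
    """Weighted sum of content and style losses over one CNN layer.
    Each input is a (channels, height*width) array from a pretrained CNN."""
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    style_loss = np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return content_weight * content_loss + style_weight * style_loss
```

In practice the generated image is updated by gradient descent on this loss, which is exactly the kind of dense linear algebra the GPU accelerates.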

New Product Announcements

Of course, NVIDIA had to come up with some new product announcements at this conference. They include the NVIDIA Tesla V100, based on the Volta architecture, built on TSMC’s 12nm FinFET process, with 5,120 CUDA cores and 120 TeraFLOPS of tensor performance. The R&D expense on the V100 was about $3 billion, Jensen said. Volta introduces new tensor cores, each operating on a 4×4 matrix array and fully optimized for deep learning. Jensen claimed 12x the tensor operation throughput and 6x the inferencing capability compared with the previous-generation Pascal.
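
No code was shown in the keynote, but conceptually each tensor core executes a fused 4×4 matrix multiply-accumulate, D = A × B + C, with FP16 inputs and FP32 accumulation. Here is a tiny NumPy sketch that just emulates the arithmetic; it is not actual tensor core code.

```python
import numpy as np

# One tensor core operation: D = A x B + C on 4x4 tiles,
# with half-precision (FP16) inputs and single-precision (FP32) accumulation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

# Emulate the mixed-precision behavior: the FP16 operands are multiplied
# and the products are accumulated at FP32 precision.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)  # (4, 4)
```

A large matrix multiply is tiled into many such 4×4 blocks, which is why deep learning workloads map so well onto the new cores.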

Jensen also introduced the new DGX-1 with eight Tesla V100 GPUs, billed as the “Essential Instrument of AI Research”. It offers 960 tensor TeraFLOPS but costs a whopping $149,000. NVIDIA also offers a smaller DGX Station, a personal DGX with four Tesla V100s delivering 480 tensor TFLOPS, aimed at startups and priced at $69,000.

Jensen also announced the NVIDIA GPU Cloud, a GPU-accelerated cloud platform optimized for deep learning. It lets engineers easily start a deep learning workflow from containerized images run through NVDocker.

Lastly, NVIDIA announced that it will open-source the source code of its Xavier DLA (Deep Learning Accelerator).

The Future of Transportation

Transportation is definitely one of the hottest areas where GPU technologies can help solve problems in driving safety, energy efficiency, parking, pollution reduction, and more. NVIDIA released the Drive PX platform, targeted at delivering Level 4 or 5 autonomy as defined by NHTSA. There was a lot of discussion in the conference sessions about self-driving trucks, 3D maps, and augmented reality, which I would like to cover in a later blog post. Of course, one of the biggest announcements in the keynote was that Toyota has selected NVIDIA’s Drive PX platform for its autonomous vehicle development.

Isaac Robot Simulator

Jensen contrasted autonomous vehicles and robots in terms of collision. Autonomous vehicles are designed to prevent collisions, while robots are designed to collide in the right way, whether assembling parts in a factory or learning to play hockey. It is inherently difficult to train a robot to do things like human sports, where it has to coordinate multiple senses and execute a collision perfectly. That’s why NVIDIA created a robot simulator that lets you train your robot virtually without building a physical robot first. You can also create tons of virtual robots, train them all at the same time, pick the best performer, and clone the best-trained model to all the others to accelerate the training process. It sounds like we are getting closer to the world depicted in Star Wars!
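
Isaac’s internals were not shown, but the “train many virtual robots in parallel, keep the best, clone it to the rest” loop Jensen described is essentially population-based training. Here is a rough, self-contained Python sketch of that idea; the Policy class and its methods are hypothetical placeholders, not an Isaac API.

```python
import copy
import random

class Policy:
    """Hypothetical stand-in for one virtual robot's control policy."""
    def __init__(self):
        self.weights = [random.uniform(-1, 1) for _ in range(10)]

    def train_one_round(self):
        # Placeholder for a round of simulated training: perturb the weights.
        self.weights = [w + random.gauss(0, 0.1) for w in self.weights]

    def evaluate(self):
        # Placeholder reward from the simulator; higher is better.
        return -sum(w * w for w in self.weights)

def population_training(population_size=8, rounds=5):
    robots = [Policy() for _ in range(population_size)]
    for _ in range(rounds):
        for robot in robots:          # conceptually these run in parallel in the simulator
            robot.train_one_round()
        best = max(robots, key=lambda r: r.evaluate())
        robots = [copy.deepcopy(best) for _ in robots]  # clone the best model to every robot
    return robots[0]

best_policy = population_training()
```

The cloning step is what accelerates training: every robot restarts each round from the best model found so far instead of learning alone.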

 

Isaac Robot Simulator

 

This is how Isaac learns to play golf.

Isaac playing golf

 

In The End

In the end, Jensen summarized NVIDIA’s amazing GPU architecture and how the top software companies are using its GPU technologies in the cloud. As he said, all of the world’s top 15 tech companies and all of the top 10 automakers were gathered at GTC. That’s a pretty amazing keynote in today’s technology world. I believe this is not the end, but just the beginning of artificial intelligence and all the applications we could never have imagined before!

Keynote Ending


