Using GPU Coder to Prototype and Deploy on NVIDIA Drive, Jetson
Learn how you can use the GPU Coder hardware support package for NVIDIA® GPUs to prototype, verify, and deploy your deep learning models and algorithms in MATLAB® for embedded vision and autonomous driving applications on NVIDIA GPUs such as the NVIDIA Drive and Jetson platforms. You can prototype and verify your algorithms in MATLAB using live data from the sensors connected to NVIDIA Drive or Jetson platforms. You can also run hardware-in-the-loop tests with your validation data in MATLAB. Finally, you can cross-compile and deploy your application to the NVIDIA GPUs.
Published: 16 Oct 2019
GPU Coder generates portable and optimized CUDA code for your complete deep learning algorithm in MATLAB, including the preprocessing and postprocessing application logic along with the trained neural network.
Using the GPU Coder hardware support package for NVIDIA GPUs, you can build and deploy your algorithms directly from MATLAB to NVIDIA GPUs, such as the NVIDIA Drive and Jetson platforms.
Here we have the semantic segmentation algorithm deployed on a Drive PX2. And similarly, on the Jetson Xavier, we have the semantic segmentation application running.
Once you have built your deep learning algorithm in MATLAB, the hardware support package lets you prototype your algorithm using live data from the hardware, and you can test the robustness of your algorithm on your workstation before deploying to the target.
As an example here in MATLAB, we have a deep learning algorithm built around a trained VGG network for semantic segmentation, and it works well on my test image input.
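The inference part of such an algorithm is typically wrapped in an entry-point function so that it can be used both for prototyping in MATLAB and for code generation later. A minimal sketch is shown below, assuming the trained network is stored in a MAT-file; the file name trainedVGGSegNet.mat and the function name are placeholders, not the exact names used in the video.

% Sketch of an entry-point function for the segmentation inference step.
% 'trainedVGGSegNet.mat' is a placeholder file name for illustration.
function out = segnet_predict(in) %#codegen

persistent net;
if isempty(net)
    % Load the trained network once; coder.loadDeepLearningNetwork lets
    % GPU Coder generate CUDA code for the inference call below.
    net = coder.loadDeepLearningNetwork('trainedVGGSegNet.mat');
end

% Run the forward pass on the input image
out = predict(net, in);
end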
Now, using the APIs provided by the support package, I can connect to the NVIDIA Drive board, read the input from the camera sensor connected to the board, and run the inference in MATLAB. We have a Drive PX2 in one of our labs here, and we have the camera pointed out of the window overlooking some foliage here in New England.
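A minimal sketch of this prototyping step, using the support package functions drive, camera, and snapshot, might look like the following; the IP address, credentials, camera name, and input size are placeholders, not the settings used in the video.

% Connect to the board and read live camera frames, then run inference on the host.
hwobj = drive('192.168.1.15', 'nvidia', 'nvidia');   % placeholder IP address and credentials
cam = camera(hwobj, 'camera-0', [1280 720]);         % placeholder camera name and resolution

for k = 1:100
    img = snapshot(cam);                 % read a live frame from the board
    in  = imresize(img, [360 480]);      % resize to the assumed network input size
    out = segnet_predict(in);            % run inference in MATLAB on the workstation
    % ... overlay and display the segmented output ...
end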
And you can see that the algorithm works on the live data. There are some artifacts, such as the clouds and the construction, which are not part of the training data, so I can iterate on the algorithm and update it to improve its robustness.
The next step would be to generate code from the algorithm using the code generation APIs as shown here. You can build and deploy your application to the target GPU from either a Windows or Linux machine using these APIs, and the generated code includes the interfaces to the camera and the display on the Drive platform.
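For reference, a typical configuration for this step looks like the sketch below; the entry-point name, input size, and build directory are assumptions rather than the exact settings used in the video.

% Configure GPU Coder to build a standalone executable for the target board.
cfg = coder.gpuConfig('exe');                          % generate an executable
cfg.Hardware = coder.hardware('NVIDIA Drive');         % or coder.hardware('NVIDIA Jetson')
cfg.Hardware.BuildDir = '~/remoteBuildDir';            % placeholder build directory on the target
cfg.GenerateExampleMain = 'GenerateCodeAndCompile';    % generate and compile an example main

% Generate CUDA code, cross-compile, and deploy to the board.
codegen('-config', cfg, 'segnet_predict', ...
    '-args', {ones(360, 480, 3, 'uint8')}, '-report');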
Here is the semantic segmentation application compiled from the generated code, which we can launch as a standalone application on the Drive PX2. Following a similar workflow and changing just a couple of options, we have also deployed the same algorithm on the Jetson Xavier board, as was shown earlier.
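Once deployed, the executable can be started and stopped remotely from MATLAB. A short sketch is shown below, assuming the application name matches the entry-point function above.

runApplication(hwobj, 'segnet_predict');    % launch the standalone app on the board
% ... observe the output on the display attached to the board ...
killApplication(hwobj, 'segnet_predict');   % stop the application when done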
To learn more, refer to the GPU Coder resources link below, and you can try this example by downloading the support package from the add-on gallery.
Featured Product
GPU Coder