GPU Coder
GPU Coder™ generates optimized CUDA® code from MATLAB® code and Simulink® models. The generated code includes CUDA kernels for parallelizable parts of your deep learning, embedded vision, and radar and signal processing algorithms. For high performance, the generated code can call NVIDIA® TensorRT™. You can integrate the generated CUDA code into your project as source code or as static or dynamic libraries, and compile it for modern NVIDIA GPUs, including those embedded on NVIDIA Jetson™ and NVIDIA DRIVE® platforms. You can access peripherals on the Jetson and DRIVE platforms and incorporate handwritten CUDA code into the generated code.
With Embedded Coder®, GPU Coder lets you profile the generated CUDA code to identify bottlenecks and opportunities for performance optimization. Bidirectional links let you trace between MATLAB code and the generated CUDA code. You can verify the numerical behavior of the generated code through software-in-the-loop (SIL) and processor-in-the-loop (PIL) testing.
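For example, a minimal sketch of the basic workflow: write an entry-point MATLAB function, create a GPU code configuration object, and call codegen. The function name myScale and the input size are illustrative placeholders, not a shipping example.

% Entry-point function (saved as myScale.m); %#codegen marks it for code generation.
function out = myScale(in)  %#codegen
out = 2*in + 1;
end

% Generate a CUDA MEX function from the entry point (run at the MATLAB command line).
cfg = coder.gpuConfig('mex');                          % GPU code configuration object
codegen -config cfg myScale -args {ones(1024,1,'single')}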
Get Started
Learn the basics of GPU Coder
MATLAB Algorithm Design for GPU
MATLAB language syntax and functions for code generation
Kernel Creation
Algorithm structures and patterns that create CUDA GPU kernels
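As a sketch of one such pattern, the coder.gpu.kernelfun pragma requests kernel creation for the parallelizable code in an entry-point function. The function below is an illustrative placeholder with independent loop iterations.

% Illustrative entry-point function: the independent loop iterations can be
% mapped to a CUDA kernel during code generation.
function y = addVectors(a, b)  %#codegen
coder.gpu.kernelfun;            % map eligible code in this function to GPU kernels
y = zeros(size(a), 'like', a);
for i = 1:numel(a)
    y(i) = a(i) + b(i);
end
end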
Performance
Troubleshoot code generation issues, improve code execution time, and reduce memory usage of generated code
Deep Learning with GPU Coder
Generate CUDA code for deep learning neural networks
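A minimal sketch of generating library code for network inference with TensorRT. The entry-point name myPredict and the input size are assumptions; myPredict is assumed to load a trained network (for example, with coder.loadDeepLearningNetwork) and call predict.

% Configure code generation to use the TensorRT library for deep learning inference.
cfg = coder.gpuConfig('lib');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg myPredict -args {ones(224,224,3,'single')}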
Deployment
Deploy generated code to NVIDIA Tegra® hardware targets
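A minimal sketch of targeting a Jetson board, assuming the support package for NVIDIA Jetson and NVIDIA DRIVE platforms is installed. The device address, credentials, and entry-point function are placeholders.

% Connect to the board and generate a CUDA executable for it.
hwobj = jetson('jetson-board-address','ubuntu','ubuntu');   % placeholder address and credentials
cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');             % target the Jetson hardware
cfg.Hardware.BuildDir = '~/remoteBuildDir';
cfg.GenerateExampleMain = 'GenerateCodeAndCompile';         % generate an example main for the executable
codegen -config cfg myScale -args {ones(1024,1,'single')}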
GPU Coder Supported Hardware
Support for third-party hardware, such as NVIDIA DRIVE and Jetson platforms