

Inference Optimization using TensorRT

The final stage of the deep-learning development process is deploying your model to a specific target platform. In real-world applications, the deployed model is required to execute inference in real time or faster, and the target platform may be severely resource-constrained, for example an embedded system such as an automotive or robotics platform.… Read More »