Deploying and Scaling AI Applications with the NVIDIA TensorRT Inference Server on Kubernetes

Categories: Kubernetes, NGINX, and Open Source.

Watch this talk on YouTube.

September 2019

The open source NVIDIA TensorRT Inference Server is production-ready software that simplifies the deployment of AI models for speech recognition, natural language...
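The deployment pattern the talk describes, running the inference server container on Kubernetes and scaling it out, can be sketched as a Deployment manifest. The image tag, model-store path, and volume below are illustrative assumptions based on typical NGC releases of the period, not the speaker's exact configuration:

```yaml
# Sketch only: image tag, model store, and GPU count are assumptions;
# adjust to your cluster and the NGC release you are using.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorrt-inference-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trtserver
  template:
    metadata:
      labels:
        app: trtserver
    spec:
      containers:
      - name: trtserver
        image: nvcr.io/nvidia/tensorrtserver:19.08-py3  # example NGC tag
        command: ["trtserver", "--model-store=/models"]
        ports:
        - containerPort: 8000  # HTTP inference endpoint
        - containerPort: 8001  # gRPC inference endpoint
        - containerPort: 8002  # Prometheus metrics
        resources:
          limits:
            nvidia.com/gpu: 1  # schedule onto a GPU node
        volumeMounts:
        - name: model-store
          mountPath: /models
      volumes:
      - name: model-store
        emptyDir: {}  # placeholder; typically NFS or cloud object storage
```

Scaling then amounts to raising `replicas` (or attaching a HorizontalPodAutoscaler), with a load balancer such as NGINX distributing inference requests across the pods, which is the Kubernetes/NGINX combination the talk's tags point to.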

