GPU transcoding with Emby / Plex using docker-nvidia

What’s the big deal about accessing a GPU in a docker container?

Normally, passing a GPU to a container would be a hard ask of Docker - you’d need to:

  1. Figure out a way to pass through a GPU device to the container (sketched below),
  2. Have the (quite large) GPU drivers installed within the container image and kept up-to-date, and you’d
  3. Lose access to the GPU from the host platform as soon as you launched the docker container.
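
By way of illustration, the manual approach looks something like the sketch below. The device nodes and the image name are placeholders only, and vary between hosts and driver versions:

# A rough sketch of the manual approach, for comparison only.
# Device nodes vary by system, and the image would still need the
# matching NVIDIA driver libraries installed inside it.
docker run --rm \
  --device /dev/nvidia0 \
  --device /dev/nvidiactl \
  --device /dev/nvidia-uvm \
  some-image-with-nvidia-drivers nvidia-smi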

Fortunately, if you have an NVIDIA GPU, this is all taken care of by the nvidia-docker package, maintained and supported by NVIDIA themselves.

There’s a detailed introduction on the NVIDIA Developer Blog, but to summarize: nvidia-docker is a wrapper around docker which, when launched with the appropriate ENV variable, passes the necessary devices and driver files from the docker host to the container. This means that, without any further adjustment, container images like emby/emby-server have full access to your host’s GPU(s) for transcoding!
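
Once nvidia-docker2 is installed (covered in the next section), launching a GPU-enabled Emby can be as simple as the example below. The host paths, mount points and published port are placeholders - adjust them to suit your own setup:

# Example only - the host paths and port mapping are placeholders
docker run -d --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /path/to/emby/config:/config \
  -v /path/to/media:/media \
  -p 8096:8096 \
  emby/emby-server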

How do I enable GPU transcoding with Emby / Plex under docker?

If you want to learn how it all works, read the NVIDIA Developer Blog entry.

If you just want the answer, follow this process:

  1. Install the latest NVIDIA drivers for your system
  2. Have a supported version of Docker
  3. Install nvidia-docker2 (below)

Ubuntu

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

RedHat / CentOS

# If you have nvidia-docker 1.0 installed: we need to remove it and all existing GPU containers
docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo yum remove nvidia-docker

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
  sudo tee /etc/yum.repos.d/nvidia-docker.repo

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

Make nvidia-docker the default runtime

You could stop here, and manage your containers using the nvidia-docker runtime. However, I like to edit /etc/docker/daemon.json, and force the nvidia runtime to be used by default, by adding:

"default-runtime": "nvidia",

And then reloading docker’s configuration with sudo pkill -SIGHUP dockerd.
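
For reference, on a stock nvidia-docker2 install the complete /etc/docker/daemon.json should then look roughly like this (the runtimes block is written by the nvidia-docker2 package; only the default-runtime line is added by hand):

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}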

Launching a container with nvidia-docker GPU support

Even with the default nvidia runtime, the magic GPU support doesn’t happen unless you launch a container with the NVIDIA_VISIBLE_DEVICES=all environment variable set. (Thanks to @flx42 for clarifying this for me.)
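
A quick way to convince yourself of this, assuming the default runtime is already set to nvidia, is to compare two runs of a plain image (debian here is just an arbitrary example of an image that doesn’t set the variable itself):

# Without the variable, no GPU devices are injected and nvidia-smi isn't even present
docker run --rm debian nvidia-smi

# With the variable set, the runtime injects the devices and driver utilities
docker run --rm -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=utility debian nvidia-smi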

The advantage of adding the default-runtime argument above is that you can now deploy your Emby/Plex/whatever app under swarm exactly as usual, but gain all the benefits of having your GPU available to your app!
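
As a sketch, a swarm stack for Emby then needs nothing GPU-specific beyond the two environment variables (swarm services can’t be given a --runtime flag, which is exactly why setting the default runtime matters). The paths and port below are placeholders for your own setup:

version: '3.2'
services:
  emby:
    image: emby/emby-server
    environment:
      NVIDIA_VISIBLE_DEVICES: 'all'
      NVIDIA_DRIVER_CAPABILITIES: 'all'
    volumes:
      - /path/to/emby/config:/config
      - /path/to/media:/media
    ports:
      - 8096:8096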

      NVIDIA_VISIBLE_DEVICES: 'all'
      NVIDIA_DRIVER_CAPABILITIES: 'all'

must be added under Local → Containers → Plex/Emby → Duplicate/edit → Env → 2 new entries → add the two lines above → scroll down → Deploy the container.

Well done!
