
Releases: NVIDIA/nvidia-container-toolkit

v1.12.0-rc.4

02 Feb 10:34
  • Generate a minimum CDI spec version for improved compatibility.
  • Add a --device-name-strategy [index | uuid | type-index] option to the nvidia-ctk cdi generate command to control how device names are constructed.
  • Set the default for CDI device name generation to index, producing device names such as nvidia.com/gpu=0 or nvidia.com/gpu=1:0. NOTE: This is a breaking change and will cause a v0.5.0 CDI specification to be generated. To keep the previous behavior and generate a v0.4.0 CDI specification with device names such as nvidia.com/gpu=gpu0 or nvidia.com/gpu=mig1:0, use the type-index option.
  • Ensure that the nvidia-container-toolkit package can be upgraded from versions older than v1.11.0 on RPM-based systems.
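The new flag might be used as follows. This is a dry-run sketch that only prints the command lines (the flag names come from the notes above; actually running them requires nvidia-ctk from this release on a host with NVIDIA drivers):

```shell
# Sketch: generate a CDI spec with the new default index-based device names,
# and a second spec that keeps the older type-index names for compatibility.
GEN_DEFAULT="nvidia-ctk cdi generate --device-name-strategy=index"
GEN_LEGACY="nvidia-ctk cdi generate --device-name-strategy=type-index"
echo "$GEN_DEFAULT"   # names like nvidia.com/gpu=0 (v0.5.0 CDI spec)
echo "$GEN_LEGACY"    # names like nvidia.com/gpu=gpu0 (v0.4.0 CDI spec)
```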

v1.12.0-rc.3

02 Feb 10:31
  • Don't fail if by-path symlinks for DRM devices do not exist
  • Replace the --json flag with a --format [json|yaml] flag for the nvidia-ctk cdi generate command
  • Ensure that the CDI output folder is created if required
  • When generating a CDI specification, use a blank host path for devices to ensure compatibility with the v0.4.0 CDI specification
  • Add injection of Wayland JSON files
  • Add GSP firmware paths to generated CDI specification
  • Add --root flag to nvidia-ctk cdi generate command to allow for a non-standard driver root to be specified
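The --format and --root flags added in this release might be combined as below. Another dry-run sketch that only prints the command (the driver root path /run/nvidia/driver is an assumption, chosen to illustrate a non-standard driver installation such as a driver container):

```shell
# Sketch: write a YAML CDI spec for a driver installed under a non-standard root.
CMD="nvidia-ctk cdi generate --format=yaml --root=/run/nvidia/driver"
echo "$CMD"
```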

v1.12.0-rc.2

22 Nov 13:07
  • Update golang version to 1.18
  • Inject Direct Rendering Manager (DRM) devices into a container using the NVIDIA Container Runtime
  • Improve logging of errors from the NVIDIA Container Runtime
  • Improve CDI specification generation to support rootless podman
  • Use nvidia-ctk cdi generate to generate CDI specifications instead of nvidia-ctk info generate-cdi

Changes from libnvidia-container v1.12.0-rc.2

  • Skip creation of existing files when mounting them from the host

v1.12.0-rc.1

10 Oct 15:10
  • Improve injection of Vulkan configurations and libraries
  • Add nvidia-ctk info generate-cdi command to generate a CDI specification for available devices

Changes for the container-toolkit container

  • Update CUDA base images to 11.8.0

Changes from libnvidia-container v1.12.0-rc.1

  • Add NVVM Compiler Library (libnvidia-nvvm.so) to list of compute libraries

v1.11.0

14 Sep 14:43

This is a promotion of the v1.11.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit v1.11.0 is primarily targeted at adding support for injection of GPUDirect Storage and MOFED devices into containerized environments.

NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:

NOTE: This release does not include an update to nvidia-docker2 and is compatible with nvidia-docker2 2.11.0.

The packages for this release are published to the libnvidia-container package repositories.

1.11.0-rc.3

  • Build fedora35 packages
  • Introduce an nvidia-container-toolkit-base package for better dependency management
  • Fix removal of nvidia-container-runtime-hook on RPM-based systems
  • Inject platform files into container on Tegra-based systems

NOTE: When upgrading from (or downgrading to) another 1.11.0-rc.* version it may be required to remove the nvidia-container-toolkit or nvidia-container-toolkit-base package(s) manually. This is due to the introduction of the nvidia-container-toolkit-base package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.

Changes for the container-toolkit container

  • Update CUDA base images to 11.7.1
  • Fix bug in setting of toolkit accept-nvidia-visible-devices-* config options introduced in v1.11.0-rc.2.

Changes from libnvidia-container v1.11.0-rc.3

  • Preload libgcc_s.so.1 on arm64 systems

1.11.0-rc.2

Changes for the container-toolkit container

  • Allow accept-nvidia-visible-devices-* config options to be set by toolkit container

Changes from libnvidia-container v1.11.0-rc.2

  • Fix bug where LDCache was not updated when the --no-pivot-root option was specified

1.11.0-rc.1

  • Add cdi mode to NVIDIA Container Runtime
  • Add discovery of GPUDirect Storage (nvidia-fs*) devices if the NVIDIA_GDS environment variable of the container is set to enabled
  • Add discovery of MOFED Infiniband devices if the NVIDIA_MOFED environment variable of the container is set to enabled
  • Fix bug in CSV mode where libraries listed as sym entries in mount specification are not added to the LDCache.
  • Rename nvidia-container-toolkit executable to nvidia-container-runtime-hook and create nvidia-container-toolkit as a symlink to nvidia-container-runtime-hook instead.
  • Add nvidia-ctk runtime configure command to configure the Docker config file (e.g. /etc/docker/daemon.json) for use with the NVIDIA Container Runtime.
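The runtime configure command and the GDS/MOFED environment variables described above might be exercised as follows. This dry-run sketch only prints the commands (the CUDA image name is an assumption used for illustration):

```shell
# Sketch: configure Docker for the NVIDIA Container Runtime, then request
# GPUDirect Storage and MOFED device injection via the env vars noted above.
CONFIGURE="nvidia-ctk runtime configure --runtime=docker"
RUN="docker run --rm --runtime=nvidia -e NVIDIA_GDS=enabled -e NVIDIA_MOFED=enabled nvcr.io/nvidia/cuda:11.8.0-base-ubuntu20.04 nvidia-smi"
echo "$CONFIGURE"
echo "$RUN"
```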

v1.11.0-rc.3

25 Aug 12:11
  • Build fedora35 packages
  • Introduce an nvidia-container-toolkit-base package for better dependency management
  • Fix removal of nvidia-container-runtime-hook on RPM-based systems
  • Inject platform files into container on Tegra-based systems

NOTE: When upgrading from (or downgrading to) another 1.11.0-rc.* version it may be required to remove the nvidia-container-toolkit or nvidia-container-toolkit-base package(s) manually. This is due to the introduction of the nvidia-container-toolkit-base package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.

Changes for the container-toolkit container

  • Update CUDA base images to 11.7.1
  • Fix bug in setting of toolkit accept-nvidia-visible-devices-* config options introduced in v1.11.0-rc.2.

Changes from libnvidia-container v1.11.0-rc.3

  • Preload libgcc_s.so.1 on arm64 systems

v1.11.0-rc.2

27 Jul 14:52

Changes for the container-toolkit container

  • Allow accept-nvidia-visible-devices-* config options to be set by toolkit container

Changes from libnvidia-container v1.11.0-rc.2

  • Fix bug where LDCache was not updated when the --no-pivot-root option was specified

v1.11.0-rc.1

20 Jul 15:39
  • Add cdi mode to NVIDIA Container Runtime
  • Add discovery of GPUDirect Storage (nvidia-fs*) devices if the NVIDIA_GDS environment variable of the container is set to enabled
  • Add discovery of MOFED Infiniband devices if the NVIDIA_MOFED environment variable of the container is set to enabled
  • Fix bug in CSV mode where libraries listed as sym entries in mount specification are not added to the LDCache.
  • Rename nvidia-container-toolkit executable to nvidia-container-runtime-hook and create nvidia-container-toolkit as a symlink to nvidia-container-runtime-hook instead.
  • Add nvidia-ctk runtime configure command to configure the Docker config file (e.g. /etc/docker/daemon.json) for use with the NVIDIA Container Runtime.

v1.10.0

13 Jun 14:50

This is a promotion of the v1.10.0-rc.3 release to GA.

This release of the NVIDIA Container Toolkit v1.10.0 is primarily targeted at improving support for Tegra-based systems.
It sees the introduction of a new mode of operation for the NVIDIA Container Runtime that makes modifications to the incoming OCI runtime
specification directly instead of relying on the NVIDIA Container CLI.

NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:

The packages for this release are published to the libnvidia-container package repositories.

  • Update config files to include default settings for nvidia-container-runtime.mode and nvidia-container-runtime.runtimes
  • Update container-toolkit base image to CUDA 11.7.0
  • Switch to ubuntu20.04 for default container-toolkit image
  • Stop publishing all centos8 and arm64 ubuntu18.04 container-toolkit images

1.10.0-rc.3

  • Use default config instead of raising an error if config file cannot be found
  • Ignore NVIDIA_REQUIRE_JETPACK* environment variables for requirement checks
  • Fix bug in detection of Tegra systems where /sys/devices/soc0/family is ignored
  • Fix bug where links to devices were detected as devices

Changes for the container-toolkit container

  • Fix bug where runtime binary path was misconfigured for containerd when using v1 of the config file

Changes from libnvidia-container v1.10.0-rc.3

  • Fix bug introduced when adding libcudadebugger.so to list of libraries in v1.10.0-rc.2

1.10.0-rc.2

  • Add support for NVIDIA_REQUIRE_* checks for cuda version and arch to csv mode
  • Switch to debug logging to reduce log verbosity
  • Support logging to log files requested on the command line
  • Fix bug when launching containers with relative root path (e.g. using containerd)
  • Allow low-level runtime path to be set explicitly as nvidia-container-runtime.runtimes option
  • Fix failure to locate low-level runtime if PATH envvar is unset
  • Replace experimental option for NVIDIA Container Runtime with nvidia-container-runtime.mode = "csv" option
  • Use csv as default mode on Tegra systems without NVML
  • Add --version flag to all CLIs
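The mode and runtimes options named above might look as follows in the runtime configuration. A sketch written to a scratch file (real installs use /etc/nvidia-container-runtime/config.toml; the runc entry is an assumption for illustration):

```shell
# Sketch: the nvidia-container-runtime config keys named above, written to a
# scratch copy of the runtime config rather than the real file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
[nvidia-container-runtime]
mode = "csv"            # "auto" selects csv on Tegra systems without NVML
runtimes = ["runc"]     # explicit low-level runtime lookup
EOF
grep -q 'mode = "csv"' "$CONFIG" && echo "wrote $CONFIG"
```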

Changes from libnvidia-container v1.10.0-rc.2

  • Bump libtirpc to 1.3.2 (libnvidia-container#168)
  • Fix bug when running host ldconfig using glibc compiled with a non-standard prefix
  • Add libcudadebugger.so to list of compute libraries

1.10.0-rc.1

  • Add nvidia-container-runtime.log-level config option to control the level of logging in the NVIDIA Container Runtime
  • Add nvidia-container-runtime.experimental config option that allows for experimental features to be enabled.
  • Add nvidia-container-runtime.discover-mode to control how modifications are applied to the incoming OCI runtime specification in experimental mode
  • Add support for the direct modification of the incoming OCI specification to the NVIDIA Container Runtime; this is targeted at Tegra-based systems with CSV-file based mount specifications.

Changes from libnvidia-container v1.10.0-rc.1

  • [WSL2] Fix segmentation fault on WSL2 systems with no adapters present (e.g. /dev/dxg missing)
  • Ignore pending MIG mode when checking if a device is MIG enabled
  • [WSL2] Fix bug where /dev/dxg is not mounted when NVIDIA_DRIVER_CAPABILITIES does not include "compute"

v1.10.0-rc.3

30 May 10:14
  • Use default config instead of raising an error if config file cannot be found
  • Ignore NVIDIA_REQUIRE_JETPACK* environment variables for requirement checks
  • Fix bug in detection of Tegra systems where /sys/devices/soc0/family is ignored
  • Fix bug where links to devices were detected as devices

Changes for the container-toolkit container

  • Fix bug where runtime binary path was misconfigured for containerd when using v1 of the config file

Changes from libnvidia-container v1.10.0-rc.3

  • Fix bug introduced when adding libcudadebugger.so to list of libraries in v1.10.0-rc.2