PAR Class 7, Thu 2020-02-06

1   Docker on parallel

  1. I've installed docker, a popular lightweight virtualization system, on parallel, because Nvidia uses it to distribute SW.

  2. Docker runs images that define virtual machines.

  3. Docker images share resources with the host, in a controlled manner.

  4. You can install private copies of images. To see what images I've installed, run: docker images

  5. Run the hello-world image thus: docker run hello-world

  6. Here's a more complicated example:

    docker run -it --mount type=bind,source=/parallel-class,destination=/parallel-class --mount type=bind,source=$HOME,destination=/home --gpus=all nvidia/cuda:10.1-devel

    This interactively runs a virtual machine with

    1. Nvidia's CUDA development tools
    2. access to parallel's GPUs
    3. access to parallel's /parallel-class, mounted locally at /parallel-class.
    4. access to your home dir, mounted at /home.
  7. E.g., go into /parallel-class/openmp/rpi and run some programs.

  8. Copy some .cc files to your home dir, compile, and run them.
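    As a concrete exercise of that kind, here is a minimal OpenMP program (a sketch of my own, not one of the course's rpi programs) that estimates pi by midpoint-rule integration. Inside the image, compile and run it with something like g++ -fopenmp pi.cc -o pi && ./pi (the devel image ships a gcc toolchain):

    ```cpp
    // pi.cc: estimate pi = integral of 4/(1+x^2) on [0,1], midpoint rule.
    // The loop iterations are independent, so OpenMP splits them across
    // threads; reduction(+:sum) safely combines the per-thread partial sums.
    #include <cstdio>

    int main() {
        const int n = 1000000;
        const double h = 1.0 / n;     // width of each subinterval
        double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * h; // midpoint of subinterval i
            sum += 4.0 / (1.0 + x * x);
        }
        double pi = sum * h;
        printf("pi ~= %.6f\n", pi);
        return 0;
    }
    ```

    Without -fopenmp the pragma is simply ignored and the program still runs serially, which is a handy sanity check.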

  9. There are ways to make the image's contents persistent. E.g., you can customize and save an image.
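    The usual mechanism is docker commit; a sketch of the session (the container ID and image name here are made up for illustration):

    ```shell
    # Start a container, then install packages / edit files inside it.
    docker run -it nvidia/cuda:10.1-devel
    # After exiting, find the stopped container's ID.
    docker ps -a
    # Save that container's filesystem as a new image.
    docker commit 1a2b3c4d5e6f mycuda:custom
    # Later runs start from the saved state.
    docker run -it mycuda:custom
    ```

    Bind-mounted directories (like your home dir above) are another route: files written there persist on the host regardless of what happens to the container.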

  10. For simple images (nvidia/cuda is not one), starting a container is cheap enough that you can start one just to run a single command, and encapsulate the whole process in a shell function. More later. However, e.g.,

    docker run --gpus=all nvidia/cuda:10.1-devel nvidia-smi
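    One way to encapsulate that command in a shell function, sketched for bash (the name cuda-run is made up; this assumes docker and the image above are available):

    ```shell
    # Hypothetical wrapper: run a single command in the CUDA image.
    # --rm removes the container as soon as the command exits, so nothing
    # accumulates from these one-shot runs.
    cuda-run () {
        docker run --rm --gpus=all nvidia/cuda:10.1-devel "$@"
    }

    # Then the example above becomes:
    #   cuda-run nvidia-smi
    ```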

  11. parallel has CUDA sample programs in /local/cuda/samples. To make them available in a docker image, include -v /local/cuda/samples:/samples

2   Nvidia GPU and accelerated computing, ctd.

Continuing /parallel-class/GPU-Teaching-Kit-Accelerated-Computing at Module 3.