PAR Class 7, Thu 2020-02-06
Table of contents
1 Docker on parallel
-
I've installed Docker, a popular lightweight virtualization system, on parallel, because Nvidia uses it to distribute software.
-
Docker runs images that define virtual machines.
-
Docker images share resources with the host, in a controlled manner.
-
You can install private copies of images. To see which images I've installed, run: docker images
-
Run the hello-world image thus: docker run hello-world
-
Here's a more complicated example:
docker run -it --mount type=bind,source=/parallel-class,destination=/parallel-class --mount type=bind,source=$HOME,destination=/home --gpus=all nvidia/cuda:10.1-devel
This interactively runs a virtual machine with
- Nvidia's CUDA development tools
- access to parallel's GPUs
- access to parallel's /parallel-class, mounted locally at /parallel-class.
- access to your home dir, mounted at /home.
-
E.g., go into /parallel-class/openmp/rpi and run some programs.
-
Copy some .cc files to your home dir, compile, and run them.
-
There are ways to make the image's contents persistent. E.g., you can customize and save an image.
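One way to save a customized container is docker commit, sketched below. The function name save_my_image and the image name my-cuda are my own placeholders, not from the notes:

```shell
# Sketch: persist a customized container's filesystem as a new image.
# Find the container's ID first with: docker ps -a
save_my_image() {
    cid=$1                        # container ID from "docker ps -a"
    docker commit "$cid" my-cuda  # snapshot the container as image "my-cuda"
}
# Later, "docker run -it my-cuda" starts from the saved state.
```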
-
For simple images (nvidia/cuda is not one of them), starting a container is so cheap that you can start one just to run a single command, then encapsulate the whole process in a shell function. More on this later. For example:
docker run --gpus=all nvidia/cuda:10.1-devel nvidia-smi
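That one-shot pattern can be wrapped in a shell function, as in this sketch (the name cudarun is my own choice):

```shell
# Sketch: run a single command in a fresh CUDA container.
# --rm removes the container as soon as the command exits.
cudarun() {
    docker run --rm --gpus=all nvidia/cuda:10.1-devel "$@"
}
# Usage: cudarun nvidia-smi
```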
-
parallel has CUDA sample programs in /local/cuda/samples. To make them available in a docker container, include -v /local/cuda/samples:/samples
2 Nvidia GPU and accelerated computing, ctd.
Continuing /parallel-class/GPU-Teaching-Kit-Accelerated-Computing at Module 3.