

Week 1

Thurs 2014-01-23 (L1)

General

  1. The syllabus is on the course home page.
  2. My local cache of material from the web is here. (Aside: It's not true that material on the web is permanent. A number of sites that I've used in classes have vanished. For some of them, the only online version is my cache, which now pops up in google searches.)
  3. Although I generally recommend clang++ over g++, for this course, you need to use g++. OpenMP is not yet in the clang trunk, and nvcc assumes g++.
  4. Homework 1 is online, due Monday (it's short).

OpenMP

The first 3 lectures will be on OpenMP (and an intro to parallel computation). There are several good online references.

  1. http://en.wikipedia.org/wiki/OpenMP - a good summary.
  2. A series of video tutorials with slides by Mattson @ Intel:
    1. http://openmp.org/wp/2013/12/tutorial-introduction-to-openmp/
    2. https://www.youtube.com/playlist?list=PLLX-Q6B8xqZ8n8bwjGdzBJ25X2utwnoEG
    The videos are slow moving but the slides are good. Some of the general slides are old, but the principles are still valid. My local cache of the slides and exercises is here.
  3. Official site, with specs etc: http://openmp.org/
  4. https://computing.llnl.gov/tutorials/openMP/ - detailed tutorial with many examples. LLNL also has a good parallel tutorial: https://computing.llnl.gov/tutorials/parallel_comp/
  5. https://sharepoint.campus.rwth-aachen.de/units/rz/HPC/public/Shared%20Documents/Forms/ParProg%202013.aspx also looks quite good, and has exercises.

Week 2

Mon 2014-01-27 (L2)

Top 500 machines

Mentioned last Thursday: http://top500.org/

SSH to my lab computer

Before using my lab computer, you need to send me:

  1. your name (unless it's in your email).
  2. your RCS username (unless you emailed using it).
  3. your SSH public key (keep the matching private key on your own computer). I'll let you research what this means and how to create it.

Then you access it thus: ssh geoxeon.ecse.rpi.edu .

LMS

Why do I use it? It's widely used at RPI. It's good for handing in homeworks and handing back grades. However, it's pretty bad at almost everything else.

OpenMP

  1. The class is encouraged to use their laptops to run these programs with me.
  2. Look at the summary card: OpenMP3.1-CCard.pdf. IMO, you won't use most of it.
  3. cat /proc/cpuinfo shows CPU cores.
  4. Work with the program Gfiles:openmp/mattson/OMP_exercises/pi.c (a sketch of the final reduction version appears after this list).
    1. Run it as is.
    2. Add a parallel pragma.
    3. Add a single to announce the loop.
    4. Run with 1 thread, then 2, 4.
      1. Look at the time and result.
    5. Put a critical around sum and rerun.
    6. Try a reduction pragma.
    7. Why is the result wrong?
    8. Fix it.
    9. Try on geoxeon with up to 32 threads.
    10. Why is it not 32x faster?
    11. Use a critical, then an atomic capture pragma, to get a unique number for each iteration, and compare times.
  5. Gfiles:openmp/mattson/OMP_exercises/matmul.c
    1. How to parallelize?
    2. Test.
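
For reference, here is a minimal sketch of roughly where the pi.c exercise ends up after the steps above, i.e., the reduction version. This is my own reconstruction, not Mattson's file; the step count is arbitrary. Compile with g++ -fopenmp and vary OMP_NUM_THREADS to compare times.

  // Sketch: compute pi as the integral of 4/(1+x^2) over [0,1] by the midpoint rule.
  // Compile: g++ -fopenmp -O3 pi.cc
  #include <iostream>
  #include <omp.h>

  int main() {
    const long nsteps = 100000000;
    const double dx = 1.0/nsteps;
    double sum = 0.0;
    double t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < nsteps; i++) {
      double x = (i+0.5)*dx;          // midpoint of interval i
      sum += 4.0/(1.0+x*x);           // without the reduction clause, this races
    }
    std::cout << "pi ~ " << sum*dx << " in " << omp_get_wtime()-t0 << " sec\n";
  }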

Thurs 2014-01-30 (L3)

  1. Exercises in https://computing.llnl.gov/tutorials/openMP/exercise.html . The site's description of OpenMP is also excellent.
  2. Some programs in http://people.sc.fsu.edu/~jburkardt/c_src/openmp/openmp.html
    1. http://people.sc.fsu.edu/~jburkardt/c_src/md_openmp/md_openmp.html - a molecular dynamics example
    2. http://people.sc.fsu.edu/~jburkardt/c_src/fft_openmp/fft_openmp.c - an FFT example
    3. http://people.sc.fsu.edu/~jburkardt/c_src/multitask_openmp/multitask_openmp.html - shows sections
    4. http://people.sc.fsu.edu/~jburkardt/cpp_src/dijkstra_openmp/dijkstra_openmp.cpp - shows single, critical, barrier
    5. http://people.sc.fsu.edu/~jburkardt/c_src/heated_plate_openmp/heated_plate_openmp.c
  3. http://sc.tamu.edu/shortcourses/SC-openmp/OpenMPSlides_tamu_sc.pdf - another nice tutorial
  4. http://stackoverflow.com/questions/13065943/task-based-programming-pragma-omp-task-versus-pragma-omp-parallel-for/13068512#13068512 - tasks
  5. Homework 2, due next Thurs.

Week 3

Mon 2014-02-03 (L4)

  1. OpenMP sections - my example is sections.cc with some variants.
    1. Note my cool way to print an expression's name followed by its value.
    2. Since I/O is not thread-safe, it has to be protected by a critical and also followed by a barrier.
    3. Note the 3 required levels of pragmas: parallel, sections, section.
    4. The assignment of sections to threads is nondeterministic.
    5. IMNSHO, OpenMP is considerably easier than pthreads, fork, etc.
  2. OpenMP tasks
    1. While inside a pragma parallel, you queue up lots of tasks - they're executed as threads are available.
    2. Here's one reasonable ref: https://iwomp.zih.tu-dresden.de/downloads/omp30-tasks.pdf
    3. My example is tasks.cc with some variants.
    4. taskfib.cc is modified from an example in the OpenMP spec; a similar sketch appears after this list.
      1. It shows how tasks can recursively spawn more tasks.
      2. This is only an example; you would never implement Fibonacci that way. (The fastest way is the following closed form. In spite of the square roots, the expression always gives an integer.)
        {$$ F_n = \frac{\left( \frac{1+\sqrt{5}}{2} \right) ^n - \left( \frac{1-\sqrt{5}}{2} \right) ^n }{\sqrt{5}} $$}
      3. Spawning a task is expensive; we can calculate the cost.
  3. The cost of false sharing, using variants of falsesharing.cc, which is modified from the Wikipedia article.
    1. The cost is really complicated; I don't understand it well.
  4. OpenMP 4.0
    1. http://www.openmp.org/mp-documents/OpenMP4.0.0.pdf
    2. OpenMP 4 - What's New?
    3. Since it was just published last summer, it hasn't been fully implemented yet.
  5. What you should have learned from the OpenMP lectures:
    1. How to use a well-designed and widely used API that is useful on almost all current CPUs.
    2. Shared memory: the most common parallel architecture.
    3. How to structure a program to be parallelizable.
    4. Issues in parallel programs, like nondeterminism, race conditions, critical regions, etc.
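
As mentioned in the tasks item above, here is a minimal task-recursion sketch in the spirit of taskfib.cc. It is my own reconstruction of the pattern in the OpenMP spec, not the class file; the cutoff value is arbitrary, and, as noted above, you would never really compute Fibonacci this way.

  #include <iostream>
  #include <omp.h>

  long fib(int n) {
    if (n < 2) return n;
    if (n < 20) return fib(n-1) + fib(n-2);  // cutoff: below this, task overhead exceeds the work
    long a, b;
    #pragma omp task shared(a)
    a = fib(n-1);                            // spawned as a task; may run on another thread
    #pragma omp task shared(b)
    b = fib(n-2);
    #pragma omp taskwait                     // wait for the two child tasks
    return a + b;
  }

  int main() {
    long r;
    #pragma omp parallel
    #pragma omp single                       // one thread builds the task tree; the team executes it
    r = fib(40);
    std::cout << "fib(40) = " << r << '\n';
  }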

NVidia device architecture, start of CUDA

  1. The above shared memory model hits a wall; CUDA handles the other side of the wall.
  2. I'll lecture from customized versions of the Stanford notes.
    1. introduction_to_massively_parallel_computing-rpi.pdf
    2. lecture_2/gpu_history_and_cuda_programming_basics.pdf

Thurs 2014-02-06 (L5)

Comments About Memory

  1. In a C++ program, on the host, there are (at least) 3 classes of memory for a variable.
    1. Global. The variable is constructed at compile time, so accesses might perhaps be faster. Global vars with non-default initial values increase the executable file size. There may be an undocumented limit to the total size of all global variables, with an obscure error message.
    2. Local, on the stack. The variable is constructed when the function is called; its address relative to the base of this call frame may be constant. There may be an undocumented limit to the max stack size, with an obscure error message.
    3. On the heap. Variables are constructed whenever you want. However, constructing and destroying many variables, especially of varying sizes, fragments the heap. That makes the time superlinear -- each successive construction takes more time.
  2. My rule is to push design decisions as early in the process as possible. Constructing variables at compile time is best, at function call time is 2nd, and on the heap is worst.
    1. If I have to construct variables on the heap, I construct few and large variables, never many small ones.
    2. Often I compile the max dataset size into the program, which permits constructing the arrays at compile time. Recompiling for a larger dataset is quick (unless you're using CUDA).
      Accessing this type of variable uses one less level of pointer than accessing a variable on the heap. I don't know whether this is faster with a good optimizing compiler, but it's probably not slower.
    3. If the data will require a dataset with unpredictably sized components, such as a ragged array, then I may do the following (see the sketch after this list).
      1. Read the data once to accumulate the necessary statistics.
      2. Construct the required ragged array.
      3. Reread the data and populate the array.
  3. With CUDA, the dominant problem in program optimization is optimizing the data flow. Getting the data quickly to the cores is harder than processing it. It helps a lot to have regular arrays, where each core reads or writes a successive entry.
    This is analogous to the hardware fact that wires are bigger (hence, more expensive) than gates.
  4. That is the opposite of the OpenMP optimization, where different threads writing to adjacent addresses cause the false sharing problem.
  5. lecture_3/cuda_threads_and_atomics.pdf. The containing directory is this.
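
Here is a minimal sketch of the two-pass ragged-array idea from item 2.3 above. The file name groups.dat and its format (lines of "group-id value") are made up for illustration; the point is: count first, allocate exactly, then reread and fill.

  #include <fstream>
  #include <vector>
  #include <iostream>

  int main() {
    // Pass 1: accumulate statistics - how many values each group has.
    std::ifstream in("groups.dat");
    int g; double v;
    std::vector<int> count;
    while (in >> g >> v) {
      if (g >= (int)count.size()) count.resize(g+1, 0);
      count[g]++;
    }

    // Construct the ragged array: one exactly-sized row per group.
    std::vector<std::vector<double> > ragged(count.size());
    for (size_t i = 0; i < count.size(); i++) ragged[i].reserve(count[i]);

    // Pass 2: reread the data and populate the array.
    in.clear(); in.seekg(0);
    while (in >> g >> v) ragged[g].push_back(v);

    std::cout << ragged.size() << " groups read\n";
  }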

Week 4

Mon 2014-02-10 (L6)

Lecture material

  1. lecture_4/cuda_memories.pdf
  2. Tutorial programs from tutorials/.
    1. sum_reduction.cu

Thurs 2014-02-13

Class canceled because of snow.

Week 5

Tues 2014-02-18 (L7)

GPU summary

Here's a summary of the GPU architecture as I understand it. It's more compact than I've found elsewhere. I'll add to it from time to time. Quoted sizes are for Kepler GK110 with Compute Capability 3.5. Many of the features are new with 3.5.

  1. The host is the CPU.
  2. The device is the GPU.
  3. The device contains 14 streaming multiprocessors (SMXs).
    (This is for Kepler. For Fermi, they were called streaming multiprocessors (SMs)).
  4. A thread is a sequential program with private and shared memory, program counter, etc.
  5. Threads are grouped, 32 at a time, into warps.
  6. Warps of threads are grouped into blocks.
    Often the warps are only implicit, and we consider that the threads are grouped directly into blocks.
  7. Blocks of threads are grouped into a grid, which is all the threads in the kernel.
  8. A kernel is a parallel program executing on the device.
    1. The kernel runs potentially thousands of threads.
    2. A kernel can create other kernels and wait for their completion.
    3. There may be a limit, e.g., 5 seconds, on a kernel's run time.
  9. Thread-level resources:
    1. Each thread can use up to 255 fast registers. Registers are private to the thread.
      All the threads' registers are allocated from a fixed pool. The more registers that each thread uses, the fewer threads that can run simultaneously.
    2. Each thread has 512KB slow local memory, allocated from the global memory.
    3. Local memory is used when not enough registers are available, and to store thread-local arrays.
  10. Warp-level resources:
    1. Threads are grouped, 32 at a time, into warps.
    2. Each warp executes as a SIMD, with one instruction register. At each cycle, every thread in a warp is either executing the same instruction, or is disabled. If the 32 threads want to execute 32 different instructions, then they will execute one after the other, sequentially.
      If you read in some NVidia doc that threads in a warp run independently, then continue reading the next page to get the info mentioned in the previous paragraph.
    3. If successive instructions in a warp do not depend on each other, then, if there are enough warp schedulers available, they may be executed in parallel. This is called Instruction Level Parallelism (ILP).
    4. For an array in local memory, which means that each thread will have its private copy, the elements for all the threads in a warp are interleaved to potentially increase the I/O rate.
      Therefore your program should try to have successive threads read successive words of arrays.
    5. A thread can read variables from other threads in the same warp, with the shuffle instruction. Typical operations are to read from the K-th next thread, to do a butterfly permutation, or to do an indexed read. This happens in parallel for the whole warp, and does not use shared memory. (A minimal sketch appears after this summary.)
    6. A warp vote combines a bit computed by each thread to report results like all or any.
  11. Block-level resources:
    1. A block may contain up to 1024 threads.
    2. Each block has access to 65536 fast 32-bit registers, for the use of its threads.
    3. Each block can use up to 49152 bytes of the SMX's fast shared memory. The block's shared memory is shared by all the threads in the block, but is hidden from other blocks.
      Shared memory is basically a user-controllable cache of some global data. The saving comes from reusing that shared data several times after you loaded it from global memory once.
    4. Warps in a block run asynchronously and run different instructions. They are scheduled and executed as resources are available.
    5. The threads in a block can be synchronized with __syncthreads().
      Because of how warps are scheduled, that can be slow.
    6. The threads in a block can be arranged into a 3D array, up to 1024x1024x64.
      That is for convenience, and does not increase performance.
    7. I'll talk about textures later.
  12. Streaming Multiprocessor (SMX) - level resources:
    1. Each SMX has 192 single-precision CUDA cores, 64 double-precision units, 32 special function units, and 32 load/store units.
    2. A CUDA core is akin to an ALU. The cores, and all the units, are pipelined.
    3. A CUDA core is much less powerful than one core of an Intel Xeon. My guess is 1/20th.
    4. Beware that, in the CUDA C Programming Guide, NVidia sometimes calls an SMX a core.
    5. The limited number of, e.g., double precision units means that a DP instruction will need to be scheduled several times for all the threads to execute it. That's why DP is slower.
    6. Each SMX has 65536 fast 4-byte registers, to be divided among its resident threads.
      (This is the same register file whose 65536-registers-per-block limit is quoted above; a block cannot use more registers than its SMX has.)
    7. Each SMX has 4 warp schedulers and 8 instruction dispatch units.
    8. 64 warps can simultaneously reside in an SMX.
    9. Therefore up to 32x64=2048 threads can be executed in parallel by an SMX.
    10. Up to 16 blocks can simultaneously be resident in an SMX.
      However, if each block uses too many resources, like shared memory, then this number is reduced.
      Each block sits on only one SMX; no block is split. However a block's warps are executed asynchronously (until synced).
    11. Each SMX has 64KiB fast memory to be divided between shared memory and an L1 cache. Typically, 48KiB is used for the shared memory, to be divided among its resident blocks, but that can be changed.
    12. The L1 cache can cache local or global memory.
    13. Each SMX has a 48KB read-only data cache, which caches read-only global data.
    14. Each SMX has 8 profile counters that can be incremented by any thread.
  13. Grid-level resources:
    1. The blocks in a grid can be arranged into a 3D array, up to {$ (2^{31}-1, 2^{16}-1, 2^{16}-1) $}.
    2. Blocks in a grid are queued and executed as resources are available, in an unpredictable parallel or serial order. Therefore they should be independent of each other. Trying to sync blocks can cause a deadlock.
    3. The number of instructions in a kernel is limited (2M).
    4. Any thread can stop the kernel by calling assert.
  14. Device-level resources:
    1. There is a large and slow 6GB global memory, which persists from kernel to kernel.
      Transactions to global memory are 128 bytes.
      Host memory can also be memory-mapped into global memory, although the I/O rate will be lower.
      Reading from global memory can take hundreds of cycles. A warp that does this will be paused and another warp started. Such context switching is very efficient. Therefore device throughput stays high, although there is a latency. This is called Thread Level Parallelism (TLP) and is a major reason for GPU performance.
      That assumes that an SMX has enough active warps that there is always another warp available for execution. That is a reason for having warps that do not use all the resources (registers etc) that they're allowed to.
    2. There is a 1536KB L2 cache, for sharing data between SMXs.
    3. There is a 64KiB small, fast global constant memory, which also persists from kernel to kernel. It is implemented as a piece of the global memory, made fast with caches.
      (Again, I'm still resolving this apparent contradiction.)
    4. Grid Management Unit (GMU) schedules (pauses, executes, etc) grids on the device. This is more important because grids can now start other grids (Dynamic Parallelism).
    5. Hyper-Q: 32 simultaneous CPU tasks can launch kernels into the queue; they don't block each other. If one kernel is waiting, another runs.
    6. CUDA Work Distributor (CWD) dispatches 32 active grids at a time to the SMXs. There may be 1000s of grids queued and waiting.
    7. GPU Direct: Other devices can DMA the GPU memory.
  15. Refs:
    1. http://www.geeks3d.com/20100606/gpu-computing-nvidia-cuda-compute-capability-comparative-table/
    2. The CUDA program deviceDrv.
    3. http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
    4. Better Performance at Lower Occupancy, Vasily Volkov, UC Berkeley, 2010.
    5. https://www.pgroup.com/lit/articles/insider/v2n1a5.htm - well written but old.
    6. http://www.theregister.co.uk/2012/05/15/nvidia_kepler_tesla_gpu_revealed/

(I'll keep adding to this. Suggestions are welcome.)
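
To make the warp shuffle item in the summary above concrete, here is a minimal sketch of a warp-level sum using __shfl_down (Compute Capability 3.0+, using the pre-CUDA-9 intrinsic without the _sync suffix, which matches the toolkits on geoxeon). The kernel name and the launch are made up for illustration.

  __global__ void warpsum(const int *in, int *out) {
    int v = in[threadIdx.x];              // launched with a single warp: threadIdx.x is the lane id
    // Tree reduction within the warp; no shared memory is used.
    for (int offset = 16; offset > 0; offset /= 2)
      v += __shfl_down(v, offset);        // add the value held by the lane 'offset' above
    if (threadIdx.x == 0) *out = v;       // lane 0 now holds the sum of all 32 inputs
  }
  // Launch as, e.g.: warpsum<<<1, 32>>>(d_in, d_out);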

  1. CUDA function qualifiers:
    1. __global__ device function called from host, starting a kernel.
    2. __device__ device function called from device function.
    3. __host__ (default) host function called from host function.
  2. CUDA variable qualifiers:
    1. __shared__
    2. __device__ global
    3. __constant__
    4. (nothing) register if scalar, or local if array or if no more registers available.
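
A tiny sketch putting these qualifiers together; all the names are made up, and it is only an illustration of where each qualifier goes, not a useful kernel.

  __constant__ float scale = 2.0f;             // constant memory, cached (could also be set with cudaMemcpyToSymbol)

  __device__ float f(float x) { return scale*x; }   // callable only from device code

  __global__ void kernel(float *a, int n) {    // callable from the host; starts a kernel
    __shared__ float buf[256];                 // one copy per block, visible to the block's threads
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    float t = (i < n) ? f(a[i]) : 0.0f;        // t lives in a register
    buf[threadIdx.x] = t;                      // stage in shared memory (pointless here, but shows the qualifier)
    __syncthreads();
    if (i < n) a[i] = buf[threadIdx.x];
  }

  __host__ void run(float *d_a, int n) {       // ordinary host function (the default)
    kernel<<<(n+255)/256, 256>>>(d_a, n);
  }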

Installing NVidia drivers; System in low graphics mode

Here are some relevant pages.

  1. http://askubuntu.com/questions/141606/how-to-fix-the-system-is-running-in-low-graphics-mode-error
  2. http://askubuntu.com/questions/61396/installing-nvidia-drivers/61433#61433
  3. http://techiesf1.blogspot.co.uk/2013/03/ubuntu-is-running-in-low-graphics-mode.html
  4. http://askubuntu.com/questions/323108/error-system-is-running-in-low-graphics-mode-solved-only-partially

CUDA FAQs

Here are some that I've found useful.

  1. https://developer.nvidia.com/cuda-faq
  2. http://stackoverflow.com/questions/tagged/cuda?sort=faq
  3. http://www.geforce.com/hardware/technology/cuda/faq

Useful commands etc

  1. nvidia-smi
  2. nvcc -o foo -lineinfo -G -O3 -arch=sm_35 --ptxas-options=-v foo.cu -lglut
    Don't always use all of the above options at once.

ACM webcast: "Achieve Massively Parallel Acceleration with GPUs"

Thurs 2/27, 1pm, http://www.acm.org/news/featured/webinars

Lecture material

  1. Review tutorials/sum_reduction.cu
  2. lecture_4/cuda_memories.pdf from slide 39
  3. lecture_5/performance_considerations.pdf/
  4. Tutorial programs from tutorials/.
    1. matrix_multiplication.cu
    2. shared_variables.cu

Thurs 2014-02-20 (L8)

  1. lecture_6/parallel_patterns_1.pdf, up through slide 26.
  2. Adjourn at 4:50 for the Bloomberg Tech Talk in CII3045.
  3. Homework 3 is online, due in a week.

Week 6

Mon 2014-02-24 (L9)

CUDA Changes

Since the Stanford slides, CUDA has been enhanced in a few ways.

  1. Typed pointers. Pointers contain info labeling them as on the host or device, so that copying is easier.
  2. Unified Virtual Addressing (UVA). Memory can be allocated on the host, pinned into real memory, and directly referenced by a device function via a special pointer. Properties:
    1. No need to copy data from the host into device global memory. That saves your time and machine time.
    2. However each access to host memory is even slower than to device global memory.
    3. At some point, copying the host data to device memory becomes advisable.
  3. Unified memory. This is new in CUDA 6, just released last week, and installed on geoxeon today. This supplements UVA by automatically caching host data in device global memory.
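
Here is a minimal unified-memory sketch illustrating item 3. It is not the class's add5.cu, just a toy vector add using cudaMallocManaged; it assumes CUDA 6 and a CC 3.0+ device.

  #include <cstdio>

  __global__ void add(const int *a, const int *b, int *c, int n) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
  }

  int main() {
    const int n = 1<<20;
    int *a, *b, *c;
    // One pointer works on both host and device; the runtime migrates the data.
    cudaMallocManaged(&a, n*sizeof(int));
    cudaMallocManaged(&b, n*sizeof(int));
    cudaMallocManaged(&c, n*sizeof(int));
    for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2*i; }
    add<<<(n+255)/256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();                 // wait before the host touches c
    printf("c[10] = %d\n", c[10]);
    cudaFree(a); cudaFree(b); cudaFree(c);
  }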

NVidia docs

My local web cache of all relevant NVidia docs is here (CUDA 6 will be added soon). They are also on geoxeon in /local/cuda-{5.5,6.0}/docs/

Lecture material

  1. Programs from Gfiles:rpi/:
    1. Makefile shows how to set the compilation to list the number of registers and amount of shared memory used by the functions.
    2. add2.cu adds 2 vectors the traditional way.
    3. add3.cu adds 2 vectors with UVA.
  2. lecture_6/parallel_patterns_1.pdf, up through slide 26.
  3. lecture_7/parallel_patterns_2.pdf.

Thurs 2014-02-27 (L10)

Comparison of array storage modes

The programs are add2, add3, add5. They add two 400,000,000 element integer arrays, each 1.6GB, using geoxeon.

  1. This ignores array allocations, mallocs, etc, which take 1500 - 2000 msecs.
  2. It also ignores initializing the arrays by calling rand() 400M times, which takes 7000 msec, and is, by far, the slowest part of the whole program.
  3. Using the CPU: Adding the arrays takes about 500 msec (using -O3).
  4. add2.cu: Creating host arrays, using cudaMemcpy to the device, running a kernel, using cudaMemcpy back:
    1. Copying a 1.6GB array either way: 500 msec.
    2. Running the kernel (blocksize: 256) : 30 msec.
    3. Total about 1600 msec.
  5. add3.cu: Using UVA, creating 3 pinned host arrays with cudaHostAlloc, and running a kernel that directly accesses them.
    1. Running the kernel (blocksize: 256) : 530 msec.
    2. Total about 600 msec.
  6. add5.cu: Using CUDA 6.0 Unified Memory:
    1. Each cudaMallocManaged: 500 msec.
    2. Kernel: 545 msec.
    3. Total: about 2000 msec.

Notes:

  1. Timing using host calls like clock() agrees with using CUDA events, if things are synchronized.
  2. From fastest to slowest:
    1. CPU.
    2. UVA.
    3. Allocating host array, cudaMemcpy to device, kernel, and cudaMemcpy back.
    4. Unified memory.
    That would totally depend on how much work the kernel does.
  3. However, at this stage don't worry about the times; use the easiest programming technique.

Lecture material

  1. Textures:
    1. You read entries from a 1-D, 2-D, or 3-D array. If your subscripts do not hit exactly on an entry, or are outside bounds, the system interpolates a value.
    2. You choose the interpolation rule: nearest neighbor, linear, etc.
    3. You choose the out-of-bounds rule: clamp, wrap, etc.
    4. This is useful whenever interpolation is desired, which is more than just for images.
    5. http://www.math.ntu.edu.tw/~wwang/mtxcomp2010/download/cuda_04_ykhung.pdf, slides 21 and up.
    6. 0_Simple/simpleTexture/simpleTexture.cu from the CUDA SDK.
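
For flavor, here is a minimal sketch of the texture-object API (available since CUDA 5.0), configured for the interpolation described above. It is a host-side fragment, not a complete program: width, height, and h_data are assumed to be defined, <cstring> is assumed included for memset, and error checking is omitted.

  // Build a 2D texture over a cudaArray, with linear interpolation and clamping.
  cudaChannelFormatDesc ch = cudaCreateChannelDesc<float>();
  cudaArray *arr;
  cudaMallocArray(&arr, &ch, width, height);
  cudaMemcpyToArray(arr, 0, 0, h_data, width*height*sizeof(float), cudaMemcpyHostToDevice);

  cudaResourceDesc res;  memset(&res, 0, sizeof(res));
  res.resType = cudaResourceTypeArray;
  res.res.array.array = arr;

  cudaTextureDesc td;    memset(&td, 0, sizeof(td));
  td.addressMode[0] = cudaAddressModeClamp;   // out-of-bounds rule
  td.addressMode[1] = cudaAddressModeClamp;
  td.filterMode = cudaFilterModeLinear;       // interpolation rule
  td.readMode = cudaReadModeElementType;
  td.normalizedCoords = 1;                    // coordinates in [0,1)

  cudaTextureObject_t tex = 0;
  cudaCreateTextureObject(&tex, &res, &td, NULL);
  // In a kernel taking tex as an argument: float v = tex2D<float>(tex, u, w);
  cudaDestroyTextureObject(tex);
  cudaFreeArray(arr);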

Week 7

Mon 2014-03-03 (L9)

  1. Homework 4 is up, due 2014-03-20.
  2. More texture refs; please read them for Thurs:
    1. http://www.eecg.toronto.edu/~moshovos/CUDA08/slides/008%20-%20Textures.ppt
    2. http://www.cudahandbook.com/uploads/Chapter_10._Texturing.pdf
    3. http://docs.nvidia.com/cuda/cuda-c-programming-guide/#texture-and-surface-memory
    4. http://cuda-programming.blogspot.com/2013/02/cuda-array-in-cuda-how-to-use-cuda.html
  3. More texture details:
    1. A texture can have many layers, all of the same size. This is not a mipmap.
    2. A cube-map texture has a texture for each face of a cube.
    3. 2D and 3D cuda arrays, used for textures, are stored in a space-filling curve order, to increase the probability that two elements with similar indices are close in memory.
    4. Surface memory is like a texture that can be written to. However changes may not be visible until a new kernel is executed.
  4. CUDA and OpenGL
    1. You can create a buffer object in OpenGL and then access it from CUDA.
  5. Thrust
    1. C++ STL-like layer on top of CUDA.
    2. Functional-programming philosophy.
    3. Easier programming, once you get used to it.
    4. Code is efficient.
    5. Uses some unusual C++ techniques, like overloading operator().
    6. lecture_8/introduction_to_thrust.pdf, up through slide 23.
    7. Various little demo programs I'll upload to the web, and which will be on geoxeon in /opt/parallel/thrust .

Thurs 2014-03-06 (L10)

Thrust experiments

On geoxeon, in /opt/parallel/thrust/sortreduce.cu, I did some experiments on sorting and reducing a 500M element vector on the host and on the device. The times, in seconds:

Operation                                  Time on host  Time on device
Create a vector                            .74           .014
Transform a vector with a simple functor   .8            .012
Sort                                       11.           .66
Reduce                                     .5            .013
Copy from device to host                   .6

The device is 20x to 50x faster than the host on these simple operations.

Sorting with Thrust on device

Even if most of your work is on the host, it can pay to offload some work to the device. E.g., sorting 1,000,000,000 ints takes 6.2 seconds (floats take 8.4 secs). That excludes time to create the array, which you'd have to do anyway, but includes time for the device to read and write the data (via unified memory; this is too much data to fit in the device). A complete program is on geoxeon in /opt/parallel/thrust/devicesort3.cu and online at thrust/devicesort3.cu . The critical code is this:

  #include <thrust/sort.h>
  #include <thrust/execution_policy.h>

  const long N = 1000000000;
  int *p;
  cudaHostAlloc((void**)&p, N*sizeof(int), cudaHostAllocMapped);
  // here write your data into p
  thrust::sort(thrust::device, p, p+N);

This requires Thrust 1.7 (released last summer), and uses Unified Memory. Getting the program this small took a lot of time.

Sort has a new argument saying to use the device. There is no need now for distinguished host and device pointers. This is only one way that CUDA and Thrust are getting easier to use.

Even a simple op like comparing two arrays is a few times faster on the device (including the access time back to the host). If the data is already on the device, it's many times faster.

One problem with using these new features is that they are sometimes buggy. Today I found an error in thrust::tabulate. A few hours after I reported it, the error was acknowledged and fixed in the development version. That's service!

A few months back, I found an error in boost that was acknowledged and fixed. SW seems to break when I use it.

Thrust intro documentation is here: http://docs.nvidia.com/cuda/thrust/. The git page, with complete documentation, intro documentation, and many examples is here: http://thrust.github.io/.

(Yes I know that citing links like that is considered by some to be bad form. I don't care. It can be useful to have the link URL displayed, e.g., when the page is printed.)

Note that rewriting the examples to use unified memory and version 1.7 would make them shorter and clearer.

Nevertheless, the examples teach you how to easily do things that you mightn't have thought easy at all.

Faster graphical access to geoxeon

X over ssh is very slow.

Here are some things I've discovered that help.

  1. Use a faster, albeit less secure, cipher:
    ssh -c arcfour,blowfish-cbc -C geoxeon.ecse.rpi.edu
  2. Use xpra; here's an example:
    1. On geoxeon:
      xpra start :77; DISPLAY=:77 xeyes&
      Don't everyone use 77; pick your own number in the range 20-99.
    2. On server, i.e., your machine:
      xpra attach ssh:geoxeon.ecse.rpi.edu:77
    3. I suspect that security is weak. When you start an xpra session, I suspect that anyone on geoxeon can display to it. I suspect that anyone with ssh access to geoxeon can try to attach to it, and that the 1st person wins.
  3. Use nx, which needs a server, e.g., FreeNX.

See CUDA graphics

At the end of today's class, we'll try to see the demos in my lab.

Term project

It's time to start thinking of your term projects. They can be anything demonstrating a knowledge of topics covered in this course -- either an implementation or a research paper. Combining your own research with a term project is fine.

Week 8

Mon 2014-03-17 (L11)

Thrust notes

  1. In functions like reduce and transform, you often see an argument like thrust::multiplies<float>(). The syntax is as follows:
    1. thrust::multiplies<float> is a class.
    2. It overloads operator().
    3. However, in the call to reduce, thrust::multiplies<float>() is calling the default constructor to construct a variable of class thrust::multiplies<float>, and passing it to reduce.
    4. reduce will treat its argument as a function name and call it with an argument, triggering operator().
    5. You may also create your own variable of that class, e.g., thrust::multiplies<float> foo. Then you just say foo in the argument list, not foo().
    6. The optimizing compiler will replace the operator() function call with the defining expression and then continue optimizing. So, there is no overhead, unlike if you passed in a pointer to a function.
  2. Sometimes, e.g., in saxpy.cu, you see saxpy_functor(A).
    1. The class saxpy_functor has a constructor taking one argument.
    2. saxpy_functor(A) constructs and returns a variable of class saxpy_functor and stores A in the variable.
    3. The class also overloads operator().
    4. (Let's call the new variable foo). foo() calls operator() for foo; its execution uses the stored A.
    5. Effectively, we did a closure of saxpy_functor; that is, we bound a property and returned a new, more restricted, variable or class. (A minimal sketch of this pattern appears after this list.)
  3. The Thrust examples teach several non-intuitive paradigms. As I figure them out, I'll describe a few. My descriptions are modified and expanded versions of the comments in the programs. This is not a list of all the useful programs, but only of some where I am adding to their comments.
    1. arbitrary_transformation.cu and dot_products_with_zip.cu show the very useful zip_iterator. Using it is a 2-step process.
      1. Combine the separate iterators into a tuple.
      2. Construct a zip iterator from the tuple.
      Note that operator() is now a template.
    2. boundingbox.cu finds the bounding box around a set of 2D points.
      1. The main idea is to do a reduce. However, the combining operation, instead of addition, is to combine two bounding boxes to find the box around them.
    3. bucket_sort2d.cu overlays a grid on a set of 2D points and finds the points in each grid cell (bucket).
      1. The tuple is an efficient class for a short vector of fixed length.
      2. Note how random numbers are generated. You combine an engine that produces random output with a distribution.
        However you might need more complicated coding to make the numbers good when executing in parallel. See monte_carlo_disjoint_sequences.cu.
      3. The problem is that the number of points in each cell is unpredictable.
      4. The cell containing each point is computed and that and the points are sorted to bring together the points in each cell.
      5. Then lower_bound and upper_bound are used to find each bucket in that sorted vector of points.
      6. See the lower_bound description in http://thrust.github.io/doc/group__vectorized__binary__search.html .
    4. expand.cu takes a vector like V= [0, 10, 20, 30, 40] and a vector of repetition counts, like C= [2, 1, 0, 3, 1]. Expand repeats each element of V the appropriate number of times, giving [0, 0, 10, 30, 30, 30, 40]. The process is as follows.
      1. Since the output vector will be longer than the input, the main program computes the output size, by summing C with a reduce, and constructs a vector to hold the output.
      2. Exclusive_scan C to obtain output offsets for each input element: C2 = [0, 2, 3, 3, 6].
      3. Scatter_if the nonzero counts into their corresponding output positions. A counting iterator, [0, 1, 2, 3, 4] is mapped with C2, using C as the stencil, giving C3 = [0, 0, 1, 3, 0, 0, 4].
      4. An inclusive_scan with max fills in the holes in C3, to give C4 = [0, 0, 1, 3, 3, 3, 4].
      5. Gather uses C4 to gather elements of V: [0, 0, 10, 30, 30, 30, 40].
  4. mode.cu shows:
    1. Counting the number of unique keys in a vector.
      1. Sort the vector.
      2. Do an inner_product. However, instead of the operators being times and plus, they are not equal to the next element and plus.
    2. Counting their multiplicity.
      1. Construct vectors, sized at the number of unique keys, to hold the unique keys and counts.
      2. Do a reduce_by_keys on a constant_iterator using the sorted vector as the keys. For each range of identical keys, it sums the constant_iterator. That is, it counts the number of identical keys.
      3. Write a vector of unique keys and a vector of the counts.
    3. Finding the most used key (the mode).
      1. Do max_element on the counts vector.
  5. repeated_range.cu repeats each element of an N-vector K times: repeated_range([0, 1, 2, 3], 2) -> [0, 0, 1, 1, 2, 2, 3, 3]. It's a lite version of expand.cu, but uses a different technique.
    1. Here, N=4 and K=2.
    2. The idea is to construct a new iterator, repeated_range, that, when read and incremented, will return the proper output elements.
    3. The construction stores the relevant info in structure components of the variable.
    4. Treating its value like a subscript in the range [0,N*K), it divides that value by K and returns that element of its input.
    See also strided_range.cu and tiled_range.cu.
  6. set_operations.cu. This shows methods of handling an operation whose output is of unpredictable size. The question is, is space or time more important?
    1. If the maximum possible output size is reasonable, then construct an output vector of that size, use it, and then erase it down to its actual size.
    2. Or, run the operation twice. The 1st time, write to a discard_iterator, and remember only the size of the written data. Then, construct an output vector of exactly the right size, and run the operation again.
      I use this technique a lot with ragged arrays in sequential programs.
  7. sparse_vector.cu represents and sums sparse vectors.
    1. A sparse vector has mostly 0s.
    2. The representation is a vector of element indices and another vector of values.
    3. Adding two sparse vectors goes as follows.
      1. Allocate temporary index and element vectors of the max possible size (the sum of the sizes of the two inputs).
      2. Catenate the input vectors.
      3. Sort by index.
      4. Find the number of unique indices by applying inner_product with addition and not-equal-to-next-element to the indices, then adding one.
        E.g., applied to these indices: [0, 3, 3, 4, 5, 5, 5, 8], it gives 5.
      5. Allocate exactly enough space for the output.
      6. Apply reduce_by_key to the indices and elements to add elements with the same keys.
        The size of the output is the number of unique keys.
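
As referenced in item 2 above, here is a minimal functor sketch in the saxpy style. It is my own illustration, not the saxpy.cu from the examples directory; compile with nvcc.

  #include <thrust/device_vector.h>
  #include <thrust/transform.h>
  #include <iostream>

  // y <- a*x + y. The constructor captures a; operator() does the per-element work.
  struct saxpy {
    const float a;
    saxpy(float a_) : a(a_) {}
    __host__ __device__ float operator()(float x, float y) const { return a*x + y; }
  };

  int main() {
    thrust::device_vector<float> x(1000, 1.0f), y(1000, 2.0f);
    // Pass a temporary functor constructed with a=3; transform calls its operator().
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy(3.0f));
    std::cout << y[0] << '\n';   // prints 5
  }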

Thurs 2014-03-20 (L12)

  1. Homework 5 is online.
  2. More Thrust tutorials; the different wording might be more attractive. FYI only, I won't cover them in class.
    1. Boston U
    2. Thrust - A Productivity-Oriented Library for CUDA. This was listed on http://code.google.com/p/thrust/downloads/list .
    3. Thrust by example, part 2. Some old stuff, well reviewed, and some new stuff.
  3. Today I'll cover more thrust examples.
    Online site: http://code.google.com/p/thrust/source/browse/#hg%2Fexamples%253Fstate%253Dclosed
    Geoxeon copy: /opt/parallel/thrust_examples .

Week 9

Mon 2014-03-24 (L13)

GPU Techology Conferences

GTC2014 is this week.

Here are some interesting talks from 2013. I'll preview them; if you're interested, watch the complete videos and read the slides.

  1. Smashing galaxies
    Smashing Galaxies: A Hierarchical Road from Top to Bottom
    Jeroen Bedorf (Leiden Observatory, Leiden University), Evghenii Gaburov (SARA, Amsterdam, the Netherlands)
    Find out how one can leverage massive GPU parallelism to assemble fast sparse octree construction and traverse methods by combining parallel primitives such as scan and sort algorithms. These techniques have culminated in Bonsai, a hierarchical gravitational N-body code which is used to study the formation and mergers of galaxies in the present day Universe. With the advent of Kepler's dynamic parallelism, we explore the new venues that this technology opens for scalable implementations of such hierarchical algorithms. We conclude the session with cutting edge simulations complemented with spectacular visualizations that are produced in collaboration with NVIDIA's visualization experts.
  2. Investigating New Numerical Techniques for Reservoir Simulations on GPUs
    Hatem Ltaief (Supercomputing Laboratory, KAUST), Ahmad Abdelfattah (Center of Extreme Computing, KAUST), Rio Yokota (Center of Extreme Computing, KAUST)
    Reservoir simulation involve sparse iterative solvers for linear systems that arise from implicit discretizations of coupled PDEs from high-fidelity reservoir simulators. One of the major bottlenecks in these solvers is the sparse matrix-vector product. Sparse matrices are usually compressed in some format (e.g., CSR, ELL) before being processed. In this talk, we focus on the low-level design of a sparse matrix-vector (SpMV) kernel on GPUs. Most of the relevant contributions focus on introducing new formats that suit the GPU architecture such as the diagonal format for diagonal matrices and the blocked-ELL format for sparse matrices with small dense blocks. However, we target both generic and domain-specific implementations. Generic implementations basically target the CSR and ELL formats, in order to be part of the KAUST-BLAS library. More chances for further optimizations appear when the matrix has specific structure. In the talk, we will present the major design challenges and outlines, and preliminary results. The primary focus will be on the CSR format, where some preliminary results will be shown. The other bottleneck of reservoir simulations is the preconditioning in the sparse matrix solver. We investigate the possibility of a Fast Multipole Method based technique on GPUs as a compute-bound preconditioner.
  3. Deploying General Purpose GPUs in a Manufacturing Environment: Software and CFD Development in Bicycle Manufacturing Joe Dutka (Acer Inc.)
    The session will share the work of the bicycle company Velocite and researchers at NCKU in Taiwan as they used GPU computing to design their next generation of bicycles. The platform was 2 Acer AT350 F2 servers with both Quadro and Tesla cards and Intel Xeon CPUs for hybrid computation. Rather than a theoretical presentation, the session will focus on real-world implementation and the difficulties overcome to develop software and bike design. Taking place at the same time, the bike will debut in the Taipei Bike Show on March 20.
  4. CUDA-based Monte Carlo Simulation for Radiation Therapy Dosimetry
    Nicholas Henderson (Stanford University)
    Learn about a CUDA adaptation of Geant4, a large-scale Monte Carlo particle physics toolkit. Geant4 was originally designed to support the needs of high energy physics experiments at SLAC, CERN and other places around the world. Geant4 is an extensive toolkit which facilitates every aspect of the simulation process and has been successfully used in many other domains. Current interest is radiation therapy dosimetry. For this application the geometry is simple and the model physics is limited to low energy electromagnetics. These features allow efficient tracking of many particles in parallel on the GPU.
  5. What Does It Take to Accelerate SPICE on the GPU? Maxim Naumov (NVIDIA)
    In this talk we will introduce the basic concepts behind The Simulation Program with Integrated Circuit Emphasis (SPICE) and discuss in detail the two most time consuming parts of the circuit simulation: the device model evaluation and the solution of large sparse linear systems. In particular, we focus on the evaluation of the basic models, such as resistor, capacitor and inductor as well as more complex transistor (BSIM4v7) model on the GPU. Also, we discuss the solution of sets of linear systems that are performed throughout the simulation. We take advantage of the fact that the coefficient matrices in these linear systems have the same sparsity pattern (and often end up with the same pivoting strategy) and show how to obtain their solution using a direct method on the GPU. Finally, we present numerical experiments and discuss future work. Co-authors Francesco Lannutti, Sharanyan Chetlur, Lung Sheng Chien, Philippe Vandermersch.

Week 9b

Thurs 2014-03-27 (L14)

2014 GPU Technology Conference

Nvidia made various announcements. However, various pre-announced products from 2013 have silently vanished from the charts.

2013 GPU Technology Conference

Talks on debugging

  1. http://on-demand.gputechconf.com/gtc/2013/presentations/S3037-S3038-Debugging-CUDA-Apps-Linux-Mac.pdf Comments on particular slides:
    1. slide 10: on geoxeon, see /opt/parallel/rpi/printf.cu
    2. slide 13: see /opt/parallel/rpi/mycuda.h
    3. slide 16: see /opt/parallel/rpi/assert.cu
    4. slide 24: see /opt/parallel/rpi/race.cu and race2.cu
  2. http://on-demand.gputechconf.com/gtc/2013/presentations/S3045-Getting-Most-From-GPU-Accelerated-Clusters.pdf
  3. http://on-demand.gputechconf.com/gtc/2013/presentations/S3051-GPU-Computing-Languages-Libraries-Tools.pdf by Jack Dongarra

Tools to try:

  1. cuda-memcheck --tool racecheck --racecheck-report all foo
  2. cuda-gdb foo
  3. nvprof foo
  4. nvvp foo

Geoxeon's local copy of a lot of CUDA documentation is in /local/cuda/ .

Week 10

Mon 2014-03-31 (L15)

  1. Nice Nvidia parallel summary from Oak Ridge - FYI
  2. GPU Computing with CUDA, Lecture 8 - CUDA Libraries - CUFFT, PyCUDA from Christopher Cooper, BU

cuFFT Notes

  1. cuFFT is inspired by FFTW (the fastest Fourier transform in the west), which its authors say is as fast as commercial FFT packages.
  2. I.e., sometimes commercial packages may be worth the money.
  3. Although the FFT is taught for N a power of two, users often want to process other dataset sizes.
  4. The problem is that the optimal recursion method, and the relevant coefficients, depend on the prime factors of N.
  5. FFTW and cuFFT determine the good solution procedure for the particular N.
  6. Since this computation takes time, they store the method in a plan.
  7. You can then apply the plan to many datasets.
  8. If you're going to be processing very many datasets, you can tell FFTW or cuFFT to perform sample timing experiments on your system, to help in devising the best plan.
  9. That's a nice strategy that some other numerical SW uses.
  10. One example is Automatically Tuned Linear Algebra Software (ATLAS) .
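
Here is a minimal sketch of the plan-then-execute pattern described above. The size and the in-place transforms are arbitrary choices, error checking of the returned status values is omitted, and you link with -lcufft.

  #include <cufft.h>
  #include <cuda_runtime.h>

  int main() {
    const int N = 1000000;                     // need not be a power of two
    cufftComplex *d_data;
    cudaMalloc(&d_data, N*sizeof(cufftComplex));
    // ... fill d_data ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);       // plan once; this is the expensive part
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // reuse the plan for many datasets
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE);

    cufftDestroy(plan);
    cudaFree(d_data);
  }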

cuBLAS etc Notes

  1. BLAS is an API for a set of simple matrix and vector functions, such as multiplying a vector by a matrix.
  2. These functions' efficiency is important since they are the basis for widely used numerical applications.
  3. Indeed you usually don't call BLAS functions directly, but use higher-level packages like LAPACK that use BLAS.
  4. There are many implementations, free and commercial, of BLAS.
  5. cuBLAS is one.
  6. One reason that Fortran is still used is that, in the past, it was easier to write efficient Fortran programs than C or C++ programs for these applications.
  7. There are other, very efficient, C++ numerical packages. (I can list some, if there's interest).
  8. Their efficiency often comes from aggressively using C++ templates.
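
For flavor, here is a minimal cuBLAS (v2 API) sketch of a matrix-matrix product. Note that cuBLAS, like Fortran BLAS, is column-major. The sizes and fill values are made up, error checking is omitted, and you link with -lcublas; using managed memory here is just for brevity.

  #include <cublas_v2.h>
  #include <cuda_runtime.h>
  #include <cstdio>

  int main() {
    const int n = 512;                          // square n x n matrices, column-major
    float *A, *B, *C;
    cudaMallocManaged(&A, n*n*sizeof(float));
    cudaMallocManaged(&B, n*n*sizeof(float));
    cudaMallocManaged(&C, n*n*sizeof(float));
    for (int i = 0; i < n*n; i++) { A[i] = 1.0f; B[i] = 2.0f; }

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha*A*B + beta*C
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();
    printf("C[0] = %f\n", C[0]);                // expect 2*n = 1024
    cublasDestroy(h);
    cudaFree(A); cudaFree(B); cudaFree(C);
  }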

Thurs 2014-04-03 (L16)

Future scheduling

  1. There will be no class on Mon May 5.
  2. Therefore class presentations will be the previous week, April 28 and May 1.

Today's student summaries of some of the best GTC 2013 talks

  1. Wenli Li: GPU-friendly Data Compression, Jens Schneider (King Abdullah University of Science and Technology)
  2. David Hedin: RocketSim: A GPU Based Simulation Accelerator for Chip-verification, Uri Tal (Rocketick Inc.)
  3. Dan Benedetti: GPU-Acceleration-For-Hair-Simulation-Rendering.pdf . Francesco Giordana, Sarah Macdonald, and Gianluca Vatinno at Double Negative VFX
  4. Salles Maghalaes: Performance-Optimization-Strategies-for-GPU-Accelerated-Apps.pdf
  5. Yi Deng: https://www.udacity.com/course/cs344

Week 11

Mon 2014-04-07 (L17)

More student presentations of GTC talks:

  1. Yan Ou: Real-time Traffic Sign Recognition on Mobile Processors
  2. Ruoyang Yao: CUDA-based Geant4 Monte Carlo Simulation for Radiation Therapy by N. Henderson & K. Murakami

Supercomputing videos

I'll show the start of various videos in class. If you're interested, you can watch the rest on your own.

  1. Jack Dongarra: Algorithmic and Software Challenges For Numerical Libraries at Exascale
    This was presented at the Exascale Computing in Astrophysics Conference 2013, whose Youtube channel is here .
  2. High Performance Computing Innovation Center at LLNL from the 2014 HPCAC Stanford HPC & Exascale Conference.
  3. Petros Koumoutsakos: Fast Machines and Cool Algorithms for (Exascale) Flow Simulations from the 2014 HPCAC Stanford HPC & Exascale Conference.
  4. Architecture-aware Algorithms for Peta and Exascale Computing - Dongarra at SC13.
  5. Lorena Barba Presents: Flying Snakes on GPUs at SC13.
  6. Joel Primack: Supercomputing the Universe, from the conference Exascale Computing in Astrophysics.
  7. There are many interesting talks at Nvidia's GPU technology theater at SC13.

Thurs 2014-04-10 (L18)

Matlab

  1. Good for applications that look like matrices.
    Considerable contortions required for, e.g., a general graph. You'd represent that with a large sparse adjacency matrix.
  2. Using explicit for loops is slow.
  3. Efficient execution when using builtin matrix functions,
    but can be difficult to write your algorithm that way, and
    difficult to read the code.
  4. Very expensive and getting more so.
    Many separately priced apps.
  5. Uses state-of-the-art numerical algorithms.
    E.g., to solve large sparse overdetermined linear systems.
    Better than Mathematica.
  6. Most or all such algorithms also freely available as C++ libraries.
    However, which library to use?
    Complicated calling sequences.
    Obscure C++ template error messages.
  7. Graphical output is mediocre.
    Mathematica is better.
  8. Various ways Matlab can execute in parallel
    1. Operations on arrays can execute in parallel.
      E.g. B=SIN(A) where A is a matrix.
    2. Automatic multithreading by some functions
      Various functions, like INV(a), automatically use perhaps 8 cores.
      The '8' is a license limitation.
      Which MATLAB functions benefit from multithreaded computation?
    3. PARFOR
      Like FOR, but multithreaded.
      However, FOR is slow.
      Many restrictions, e.g., cannot be nested.
      http://www.mathworks.com/help/distcomp/introduction-to-parallel-solutions.html
      Start pools first with: MATLABPOOL OPEN 12
      Limited to 12 threads.
      Can do reductions.
    4. Parallel Computing Server
      This runs on a parallel machine, including Amazon EC2.
      Your client sends batch or interactive jobs to it.
      Many Matlab toolboxes are not licensed to use it.
      This makes it much less useful.
    5. GPU computing
      Create an array on device with gpuArray
      Run builtin functions on it.
      http://www.mathworks.com/help/distcomp/run-built-in-functions-on-a-gpu.html

Week 12

Mon 2014-04-14 (L18)

Student presentations of exciting talks, ctd

  1. Steve Han

Final project deliverables

  1. Report to be uploaded to LMS for May 5. If the file is too big, then upload a link to it.
  2. Talk to the class, on either April 28 or May 1. You may pick the length, but I suggest 20 minutes. A large team might want more time.
    Please email me your team, your desired length, and your preferred date. I will fill up the days in the order that the emails arrive.

More talks from the GPU Technology Theater at SC13

Denver CO, Nov 17-22, 2013.

http://www.nvidia.com/object/sc13-technology-theater.html#

That page has links to videos and usually also the slides.

  1. Applications of Programming the GPU Directly from Python Using NumbaPro. Travis Oliphant, Co-Founder and CEO, Continuum Analytics.
  2. Tomorrow's Exascale Systems: Not Just Bigger Versions of Today's Peta-Computers Thomas Sterling, Executive Associate Director & Chief Scientist, Indiana University.

Other talk

Joel Primack: Supercomputing the Universe Conference Exascale Computing in Astrophysics, 8-13 Sept 2013.

Thurs 2014-04-17 (L19)

Term project talks

Since no one emailed me with a preference, here is the project presentation schedule.

  1. Monday, April 28
    1. Dan Benedetti
    2. Steve Han
    3. David Hedin
  2. Thurs May 1
    1. Wenli Li
    2. Salles Maghalaes
    3. Yi Deng, Yan Ou, Ruoyang Yao

Mathematica in parallel.

You terminate an input command with shift-enter.

Some Mathematica commands:

  Sin[1.]
  Plot[Sin[x],{x,-2,2}]
  a=Import["/opt/parallel/mathematica/mtn1.dat"]
  Information[a]
  Length[a]
  b=ArrayReshape[a,{400,400}]
  MatrixPlot[b]
  ReliefPlot[b]
  ReliefPlot[b,Method->"AspectBasedShading"]
  ReliefPlot[MedianFilter[b,1]]
  Dimensions[b]
  Eigenvalues[b]   When you get bored waiting, type  alt-.
  Eigenvalues[b+0.0]
  Table[ {x^i y^j,x^j y^i},{i,2},{j,2}]
  Flatten[Table[ {x^i y^j,x^j y^i},{i,2},{j,2}],1]
  StreamPlot[{x*y,x+y},{x,-3,3},{y,-3,3}]
  $ProcessorCount
  $ProcessorType
  Select Parallel Kernel Configuration and Status in the Evaluation menu
  ParallelEvaluate[$ProcessID]
  PrimeQ[101]
  Parallelize[Table[PrimeQ[n!+1],{n,400,500}]]
  merQ[n_]:=PrimeQ[2^n-1]
  Select[Range[5000],merQ]
  ParallelSum[Sin[x+0.],{x,0,100000000}]
  Parallelize[  Select[Range[5000],merQ]]
  Needs["CUDALink`"]  note the back quote
  CUDAInformation[]  
  Manipulate[n, {n, 1.1, 20.}]
  Plot[Sin[x], {x, 1., 20.}]
  Manipulate[Plot[Sin[x], {x, 1., n}], {n, 1.1, 20.}]
  Integrate[Sin[x]^3, x]
  Manipulate[Integrate[Sin[x]^n, x], {n, 0, 20}]
  Manipulate[{n, FactorInteger[n]}, {n, 1, 100, 1}]
  Manipulate[Plot[Sin[a x] + Sin[b x], {x, 0, 10}], {a, 1, 4}, {b, 1, 4}]

Unfortunately there's a problem that I'm still debugging with the Mathematica - CUDA interface.

Week 13

Mon 2014-04-21 (L20)

http://on-demand.gputechconf.com/gtc/2013/presentations/S3575-Supermicro-GPU-Optimized-GRID-HPC-Workstations.pdf

http://on-demand.gputechconf.com/gtc/2013/presentations/S3151-Emergent-Numerical-Algorithms.pdf

http://on-demand.gputechconf.com/gtc/2013/presentations/S3045-Getting-Most-From-GPU-Accelerated-Clusters.pdf

Thurs 2014-04-24 (L21)

Cloud computing

The material is from Wikipedia, which appeared better than any other source I could find.

  1. Hierarchy:
    1. IaaS (Infrastructure as a Service)
      1. Sample functionality: VM, storage
      2. Examples:
        1. Google_Compute_Engine
        2. Amazon_Web_Services
        3. OpenStack : compute, storage, networking, dashboard
    2. PaaS (Platform ...)
      1. Sample functionality: OS, Web server, database server
      2. Examples:
        1. OpenShift
        2. Cloud_Foundry
        3. Hadoop :
          1. distributed FS, Map Reduce
          2. derived from Google FS, map reduce
          3. used by Facebook etc.
    3. SaaS (Software ...)
      1. Sample functionality: email, gaming, CRM, ERP
  2. Cloud_computing_comparison
  3. Virtual machine
    1. Virtualization
    2. Hypervisor
    3. Xen
    4. Kernel-based_Virtual_Machine
    5. QEMU
    6. VMware
    7. Comparison_of_platform_virtual_machines
  4. Distributed storage
    1. Virtual_file_system
    2. Lustre
    3. Comparison_of_distributed_file_systems
    4. Hadoop_distributed_file_system
  5. See also
    1. VNC
    2. Grid_computing
      1. decentralized, heterogeneous
      2. used for major projects like protein folding

Week 14

Mon 2014-04-28 (L22)

Course recap

  1. My teaching style is to work from particulars to the general.
  2. You've seen OpenMP, a tool for shared memory parallelism.
  3. You've seen the architecture of NVidia's GPU, a widely used parallel system, and CUDA, a tool for programming it.
  4. You've seen Thrust, a tool on top of CUDA, built in the C++ STL style.
  5. You've seen how widely used numerical tools like BLAS and FFT have versions built on CUDA.
  6. You've seen the Matlab and Mathematica development environments, which have tools for parallel programming.
  7. You've had a chance to program in all of them on geoxeon, my research lab machine with dual 8-core Xeons and K20Xm and K5000 NVidia boards.
  8. You've seen a summary of cloud computing, which enables parallelism at the level of multiple commodity virtual machines.
  9. You've seen talks by leaders in high performance computing, such as Jack Dongarra.
  10. Now, you can inductively reason towards general design rules for shared and non-shared parallel computers, and for the SW tools to exploit them.

Term project presentations, 1

  1. Dan Benedetti, Parallel Project
  2. Steve Han
  3. David Hedin

Thurs 2014-05-01 (L22)

Term project presentations, 2

  1. Wenli Li
  2. Salles Maghalaes
  3. Yi Deng, Yan Ou, Ruoyang Yao

Week 15

Mon 2014-05-05

  1. No class.
  2. Term projects due. Submit to LMS or email to me. If the file is big, then upload it to your favorite server, or put it on geoxeon, and send me a link.

After the semester

  1. Before or after you leave RPI, when I'm not too busy, you're welcome to talk to me about most any legal topic.
  2. You're welcome to keep using your geoxeon accounts, including for your research, indefinitely. Understand that this is an unsupported research machine. If it breaks, it might not get fixed immediately. Some capabilities might never get replaced. If the disk crashes, you lose your files. If one of your accounts is used to attack geoxeon, the rules will change. Etc, etc.
    If geoxeon is of massive help to your research, I and my NSF grant would appreciate acknowledgements in your papers.