PAR Syllabus

1   Catalog info

Title: ECSE-4740-01 Applied Parallel Computing for Engineers, CRN 74971
Semesters: Spring term annually
Credits: 3 credit hours
Time and place: Mon and Thurs noon-1:20pm, JEC 6314 (note the change).

2   Description

  1. This is a computer engineering course intended to provide students with knowledge and hands-on experience in developing application software for affordable parallel processors. This course will cover hardware that any lab can afford to purchase. It will cover the software that, in the prof's opinion, is the most useful. There will also be some theory.
  2. The target audiences are ECSE seniors and grads and others with comparable background who wish to develop parallel software.
  3. This course will have minimal overlap with parallel courses in Computer Science. We will not teach the IBM BlueGene, because it is so expensive, nor cloud computing and MPI, because most big data problems are in fact small enough to fit on our hardware.
  4. You may usefully take all the parallel courses at RPI.
  5. The unique features of this course are as follows:
    1. Use of only affordable hardware that any lab might purchase, such as Nvidia GPUs. This is currently the most widely used and least expensive parallel platform.
    2. Emphasis on learning several programming packages, at the expense of theory. However you will learn a lot about parallel architecture.
  6. Hardware taught, with reasons:
    Multicore Intel Xeon:
      universally available and inexpensive, comparatively easy to program, powerful
    Nvidia GPU accelerator:
      widely available (Nvidia external graphics processors are on 1/3 of all PCs), very inexpensive, powerful, but harder to program. Good cards cost only a few hundred dollars.
    IBM quantum computer:
      (perhaps)
  7. Software that might be taught, with reasons:
    OpenMP C++ extension:
      widely used, easy to use if your algorithm is parallelizable, backend is multicore Xeon.
    Thrust C++ functional programming library:
      FP is nice, hides low level details, backend can be any major parallel platform.
    MATLAB:
      easy-to-use parallelism for operations that Mathworks has implemented in parallel, etc.
    CUDA C++ extension and library for Nvidia:
      low level access to Nvidia GPUs.
  8. The techniques learned here will also be applicable to larger parallel machines -- numbers 1 and 2 on the Top500 list use Nvidia GPUs. (Number 10 is a BlueGene.)
  9. Effectively programming these processors will require in-depth knowledge about parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors.

3   Prerequisite

ECSE-2660 CANOS or equivalent, knowledge of C++.

4   Instructors

4.1   Professor

W. Randolph Franklin. BSc (Toronto), AM, PhD (Harvard)

Office:

Jonsson Engineering Center (JEC) 6026

Phone:

+1 (518) 276-6077 (forwards)

Email:

frankwr@YOUKNOWTHEDOMAIN

Email is my preferred communication medium.

Non-RPI accounts are fine, but please show your name, at least in the comment field. A subject prefix of #Prob is helpful. GPG encryption is fine.

Web:

https://wrf.ecse.rpi.edu/

A quick way to get there is to google RPIWRF.

Office hours:

After each lecture, usually as long as anyone wants to talk. Also by appointment.

Informal meetings:

If you would like to lunch with me, either individually or in a group, just mention it. We can then talk about most anything legal and ethical.

5   Course websites

The homepage has lecture summaries, syllabus, homeworks, etc.

6   Reading material

6.1   Text

There is no required text, but the following inexpensive books may be used. I might mention others later.

  1. Sanders and Kandrot, CUDA by example. It gets excellent reviews, although it is several years old. Amazon has many options, including Kindle and renting hardcopies.
  2. Kirk and Hwu, 2nd edition, Programming massively parallel processors. It concentrates on CUDA.

One problem is that even recent books may be obsolete. For instance they may ignore the recent CUDA unified memory model, which simplifies CUDA programming at a performance cost. Even if the current edition of a book was published after unified memory was released, the author might not have updated the examples.

6.2   Web

There is a lot of free material on the web, which I'll reference class by class. Because web pages vanish so often (really!), I may cache some locally. If interested, you might start here:

https://hpc.llnl.gov/training/tutorials

7   Computer systems used

This course will primarily use (remotely via ssh) parallel.ecse.rpi.edu.

Parallel has:

  1. a dual 14-core Intel Xeon E5-2660 2.0GHz
  2. 256GB of DDR4-2133 ECC Reg memory
  3. Nvidia GPU, perhaps GeForce GTX 1080 processor with 8GB
  4. Intel Xeon Phi 7120A
  5. Samsung Pro 850 1TB SSD
  6. WD Red 6TB 6Gb/s hard drive
  7. CUDA
  8. OpenMP 4.0
  9. Thrust
  10. Ubuntu 16.04

Material for the class is stored in /parallel-class/.

We may also use geoxeon.ecse.rpi.edu. Geoxeon has:

  1. Dual 8-core Intel Xeon E5-2687W 3.1GHz 8.0GT/s 20MB cache 150W.
  2. 128GB DRAM.
  3. Nvidia GPUs:
    1. GM200 GeForce GTX Titan X
    2. GK110GL Tesla K20Xm
  4. Ubuntu 18.04.1
  5. CUDA, Thrust, OpenMP, etc.

We may also use a parallel virtual machine on Amazon EC2. If so, you will be expected to establish an account. I expect the usage to be in the free tier.

8   Assessment measures, i.e., grades

  1. There will be no exams.

  2. The grade will be based on a term project and class presentations.

  3. Optional ungraded homeworks will be assigned, with the solutions discussed in a later class.

  4. Deliverables for the term project:

    1. A 2-minute project proposal given to the class around the middle of the semester.
    2. A 10-minute project presentation given to the class in the last week.
    3. Some progress reports.
    4. A write-up uploaded on the last class day. This will contain an academic paper, code, and perhaps a video or user manual.

8.1   Term project

  1. For the latter part of the course, most of your homework time will be spent on a term project.
  2. You are encouraged to do it in teams of up to 3 people. A team of 3 people would be expected to do twice as much work as 1 person.
  3. You may combine this with work for another course, provided that both courses know about this and agree. I always agree.
  4. If you are a grad student, you may combine this with your research, if your prof agrees, and you tell me.
  5. You may build on existing work, either your own or others'. You have to say what's new, and have the right to use the other work. E.g., using any GPLed code or any code on my website is automatically allowable (because of my Creative Commons licence).
  6. You will implement, demonstrate, and document something vaguely related to parallel computing.
  7. You will give a 15 minute talk and demo in class.

8.1.1   Size of term project

It's impossible to specify how many lines of code make a good term project. E.g., I take pride in writing code that can be simultaneously shorter, more robust, and faster than some others'. See my 8-line program for testing whether a point is in a polygon: Pnpoly.

According to Big Blues, when Bill Gates was collaborating with IBM around 1980, he once rewrote a code fragment to be shorter. However, according to the IBM metric, number of lines of code produced, he had just caused that unit to officially do negative work.

8.1.2   Deliverables

  1. An implementation showing parallel computing.
  2. An extended abstract or paper on your project, written up like a paper. You should follow the style guide for some major conference (I don't care which, but can point you to one).
  3. A more detailed manual, showing how to use it.
  4. A talk in class.

8.2   Early warning system (EWS)

As required by the Provost, we may post notes about you to EWS, for example, if you're having trouble doing homeworks on time, or miss an exam. E.g., if you tell me that you had to miss a class because of family problems, then I may forward that information to the Dean of Students office.

9   Academic integrity

See the Student Handbook for the general policy. The summary is that students and faculty have to trust each other. After you graduate, your most important possession will be your reputation.

Specifics for this course are as follows.

  1. You may collaborate on homeworks, but each team of 1 or 2 people must write up the solution separately (one writeup per team) using their own words. We willingly give hints to anyone who asks.
  2. The penalty for two teams handing in identical work is a zero for both.
  3. You may collaborate in teams of up to 3 people for the term project.
  4. You may get help from anyone for the term project. You may build on a previous project, either your own or someone else's. However you must describe and acknowledge any other work you use, and have the other person's permission, which may be implicit. E.g., my web site gives a blanket permission to use it for nonprofit research or teaching. You must add something creative to the previous work. You must write up the project on your own.
  5. However, writing assistance from the Writing Center and similar sources is allowed, if you acknowledge it.
  6. The penalty for plagiarism is a zero grade.
  7. Cheating will be reported to the Dean of Students Office.

10   Student feedback

Since it's my desire to give you the best possible course in a topic I enjoy teaching, I welcome feedback during (and after) the semester. You may tell me or write me, or contact a third party, such as Prof James Lu, the ECSE undergrad head, or Prof John Wen, the ECSE Dept head.