
PAR Class 26, Mon 2022-04-18

1 Final project

1.1 Talks today

  1. Phu TT, Mark B

  2. Ivan H

  3. Luoyan Z

  4. Sissi J

1.2 Prior work

If your final project is building on, or sharing with, another course or project (say on GitHub), then you must give the details, and say what's new for this course.

1.3 Paper format

Try to use the IEEE conference format. It allows either LaTeX or MS Word. Submit the PDF paper to Gradescope.

1.4 Deliverables

from the syllabus (modified).

  1. An implementation showing parallel computing.

  2. An extended abstract or paper on your project, written up like a paper. You should follow the style guide for some major conference (I don't care which, but can point you to one).

  3. A more detailed manual, showing how to use it.

  4. A 10-minute project presentation.

  5. A write-up uploaded on the last class day. This will contain an academic paper, code and perhaps video or user manual.

2 No lecture on Thurs

I'll assign some good things to watch.

3 Software tips

3.1 Git

Git is good for simultaneously keeping several versions of your work. Here's a quick git intro:

Create a dir for the project:

mkdir PROJECT; cd PROJECT

Initialize:

git init

Create a branch (you can do this several times):

git branch MYBRANCHNAME

Go to a branch:

git checkout MYBRANCHNAME

Do things:

vi, make, ....

Save it:

git add .; git commit -mCOMMENT

Repeat

At times I've used this to modify a program for class while keeping a copy of the original.

3.2 Freeze decisions early: SW design paradigm

One of my rules is to push design decisions to take effect as early in the process execution as possible. Constructing variables at compile time is best, at function call time is 2nd, and on the heap is worst.

  1. If I have to construct variables on the heap, I construct few and large variables, never many small ones.

  2. Often I compile the max dataset size into the program, which permits constructing the arrays at compile time. Recompiling for a larger dataset is quick (unless you're using CUDA).

    Accessing this type of variable uses one less level of pointer than accessing a variable on the heap. I don't know whether this is faster with a good optimizing compiler, but it's probably not slower.

  3. If the data will require a dataset with unpredictably sized components, such as a ragged array, then I may do the following.

    1. Read the data once to accumulate the necessary statistics.

    2. Construct the required ragged array.

    3. Reread the data and populate the array.

Update: However, with CUDA, managed variables may have to be on the heap.
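
Here is a minimal sketch of the two-pass ragged-array approach in item 3; the file name and the "row value" input format are made up for illustration.

  #include <cstdio>
  #include <vector>

  int main() {
    // Pass 1: read the data once, only to count how many values each row will hold.
    FILE *f = fopen("data.txt", "r");               // hypothetical file of "row value" pairs
    if (!f) return 1;
    std::vector<int> counts;
    int row, value;
    while (fscanf(f, "%d %d", &row, &value) == 2) {
      if (row >= (int)counts.size()) counts.resize(row + 1, 0);
      ++counts[row];
    }
    // Construct the ragged array: one large block plus per-row offsets (a scan of the counts).
    std::vector<int> offset(counts.size() + 1, 0);
    for (size_t r = 0; r < counts.size(); ++r) offset[r + 1] = offset[r] + counts[r];
    std::vector<int> data(offset.back());
    std::vector<int> next(offset.begin(), offset.end() - 1);   // next free slot in each row
    // Pass 2: reread the data and populate the array.
    rewind(f);
    while (fscanf(f, "%d %d", &row, &value) == 2) data[next[row]++] = value;
    fclose(f);
    printf("%zu rows, %zu values\n", counts.size(), data.size());
  }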

4 Exascale computing

https://www.nextplatform.com/2019/12/04/openacc-cozies-up-to-c-c-and-fortran-standards/

https://www.nextplatform.com/2019/01/09/two-thirds-of-the-way-home-with-exascale-programming/

They agree with me that fewer bigger parallel processors are better than many smaller processors connected with MPI. I.e., they like my priorities in this course.

5 CPPCON

CppCon is the annual, week-long face-to-face gathering for the entire C++ community. https://cppcon.org/

https://cppcon2018.sched.com/

CppCon 2014: Herb Sutter "Paying for Lunch: C++ in the ManyCore Age" https://www.youtube.com/watch?v=AfI_0GzLWQ8

CppCon 2016: Combine Lambdas and weak_ptrs to make concurrency easy (4min) https://www.youtube.com/watch?v=fEnnmpdZllQ

A Pragmatic Introduction to Multicore Synchronization by Samy Al Bahra. https://www.youtube.com/watch?v=LX4ugnzwggg

CppCon 2018: Tsung-Wei Huang “Fast Parallel Programming using Modern C++” https://www.youtube.com/watch?v=ho9bqIJkvkc&list=PLHTh1InhhwT7GoW7bjOEe2EjyOTkm6Mgd&index=13

CppCon 2018: Anny Gakhokidze “Workflow hacks for developers” https://www.youtube.com/watch?v=K4XxeB1Duyo&list=PLHTh1InhhwT7GoW7bjOEe2EjyOTkm6Mgd&index=33

CppCon 2018: Bjarne Stroustrup “Concepts: The Future of Generic Programming (the future is here)” https://www.youtube.com/watch?v=HddFGPTAmtU

CppCon 2018: Herb Sutter “Thoughts on a more powerful and simpler C++ (5 of N)” https://www.youtube.com/watch?v=80BZxujhY38

CppCon 2018: Jefferson Amstutz “Compute More in Less Time Using C++ Simd Wrapper Libraries” https://www.youtube.com/watch?v=80BZxujhY38

CppCon 2018: Geoffrey Romer “What do you mean "thread-safe"?” https://www.youtube.com/watch?v=s5PCh_FaMfM

6 Supercomputing conference

https://supercomputing.org/

"The International Conference for High Performance Computing, Networking, Storage, and Analysis"

7 Jack Dongarra

He has visited RPI more than once.

An Overview of High Performance Computing and Future Requirements Jul 17, 2021, (35:03)

Abstract: In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

About the Speaker: Jack Dongarra holds an appointment at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester. He specializes in numerical algorithms in linear algebra, parallel computing, use of advanced-computer architectures, programming methodology, and tools for parallel computers. He was awarded the IEEE Sid Fernbach Award in 2004; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he was the recipient of the IEEE Charles Babbage Award; in 2013 he received the ACM/IEEE Ken Kennedy Award; in 2019 he received the ACM/SIAM Computational Science and Engineering Prize, and in 2020 he received the IEEE Computer Pioneer Award. He is a Fellow of the AAAS, ACM, IEEE, and SIAM and a foreign member of the Russian Academy of Science, a foreign member of the British Royal Society, and a member of the US National Academy of Engineering.

8 12 Ways to Fool the Masses with Irreproducible Results

https://lorenabarba.com/news/keynote-at-the-35th-international-parallel-and-distributed-processing-symposium-ipdps/

Lorena Barba IPDPS21 keynote May 20, 2021 (37:11)

Keynote at the IEEE International Parallel and Distributed Processing Symposium, May 19, 2021

Abstract: Thirty years ago, David Bailey published a humorous piece in the Supercomputing Review magazine, listing 12 ways of presenting results to artificially boost performance claims. That was at a time when the debate was between Cray "two-oxen" machines versus parallel "thousand-chickens" systems, when parallel standards (like MPI) were still unavailable, and the Top500 list didn't yet exist. In the years since, David and others updated the list of tricks a few times, notably in 2010–11 (when the marketing departments of Intel and Nvidia were really going at each other), by Georg Hager in his blog and Scott Pakin in HPC Wire. Heterogeneity of computing systems has only escalated in the last decade, and many remiss reporting tactics continue unabated. Alas, two new ingredients have entered into the mix: wide adoption of machine learning techniques both in the science applications and systems research; and a swell of concern over reproducibility and replicability. My talk will be a new twist on the 12 ways to fool the masses, focusing on how researchers in computational science and high-performance computing miss the mark when conducting or reporting their results with poor reproducibility. By showcasing in a lighthearted manner a set of anti-patterns, I aim to encourage us to see the value and commit to adapting our practice to achieve more trustworthy scientific evidence with high-performance computing.

There's a link to the slides there.

9 Course survey

I see my role as a curator, selecting the best stuff to present to you. Since I like the topic (I asked for permission to create this course), I pick things that I like, in the hope that you might like them too.

So, if you liked the course, then please officially tell RPI by completing the survey and saying what you think. Thanks.

10 After the semester

I'm open to questions and discussions about any topic that's legal and ethical. Even after you graduate.

PAR Class 25, Thurs 2022-04-14

1 RPI and LANL

RENSSELAER CENTER FOR MATERIALS, DEVICES & INTEGRATED SYSTEMS (cMDIS), SCHOOL OF ENGINEERING, AND SCHOOL OF SCIENCE

In Partnership With

LOS ALAMOS NATIONAL LABORATORY INSTITUTE FOR MATERIALS SCIENCE

Research Nucleation Workshop

Quantum Materials and Devices

04/29/2022, 2:30-5:00 pm EST, to be held on Webex

It's not certain whether students are invited. However, if interested, ask me.

2 Video that didn't play

https://www.nvidia.com/gtc/session-catalog/?tab.scheduledorondemand=1583520458947001NJiE&search.primarytopic=16246413645860102sI4&search.primarytopic=162464136458601222RK&search.primarytopic=16246413645860132WWK#/session/1639090254404001vvDc

Only the part from 40:45 to 41:20

Ean S

3 Final project talks

Everyone who requested a specific date got it. I assigned the others randomly.

3.1 April 18

  1. Phu TT, Mark B

  2. Ivan H

  3. Luoyan Z

  4. Sissi J

3.2 April 25

  1. Adrian-James G, Kevin L, Ryan W

  2. Cohen D, Zach O

  3. Amanda M

  4. Reagan W, Ean S

  5. Tom P, Allan N

  6. Saaif A

4 Quantum computing, ctd

4.1 HHL algorithm to solve a linear system of equations

  1. Quantum Machine Learning - 37 - Overview of the HHL Algorithm 5:48.

    A quick, deep intro.

  2. Quantum algorithm for solving linear equations 36:31.

    Quite understandable, but there's no time to show it.

    1. HHL Algorithm

    This is in Huawei HiQ, an open-source software framework for quantum computing.

  3. https://en.wikipedia.org/wiki/Quantum_algorithm_for_linear_systems_of_equations

4.2 Misc

  1. What is an Ion Trap Quantum Computer? Simon Benjamin (2:30).

  2. What is Quantum Annealing?, D-Wave Systems (6:14).

    Elementary but sets the stage.

  3. How The Quantum Annealing Process Works, D-Wave Systems (6:09)

5 Daily joke

Smart: Guy Creates An AI Clone Of Himself To Sit In On Zoom Meetings So He Doesn't Have To

6 Inspiration for finishing your term projects

  1. The Underhanded C Contest

    "The goal of the contest is to write code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should do something subtly evil. Every year, we will propose a challenge to coders to solve a simple data processing problem, but with covert malicious behavior. Examples include miscounting votes, shaving money from financial transactions, or leaking information to an eavesdropper. The main goal, however, is to write source code that easily passes visual inspection by other programmers."

  2. The International Obfuscated C Code Contest

  3. https://www.awesomestories.com/asset/view/Space-Race-American-Rocket-Failures

    Moral: After early disasters, sometimes you can eventually get things to work.

  4. The 'Wrong' Brothers Aviation's Failures (1920s)

  5. Early U.S. rocket and space launch failures and explosion

PAR Class 24, Mon 2022-04-11

1 Student presentation today

Reagan W, Ean S, Tom P, and Allan N: computer vision

2 No lecture on April 21

Reading/viewing material will be assigned.

3 Final project presentations

April 18 or 25. Email me your preferred date. FCFS.

4 Quantum computing, ctd

We'll watch some in class.

4.1 Shor's algorithm

  1. Shor on, what is Shor's factoring algorithm? (2:09)

  2. Shor's Factoring Algorithm by Umesh Vazirani (25:42)

    This is an excellent video, which there's no time to show.

    This video is difficult because the topic is difficult, but important. At least you'll get a sense of it. This is what potentially breaks a lot of cryptosystems.

  3. 7. Shor's Algorithm I: Understanding Quantum Fourier Transform, Quantum Phase Estimation - Part 1 by Abraham Asfaw, IBM. (44:05).

    This is from IBM's Introduction to Quantum Computing and Quantum Hardware. Lecture Notes and Labs: https://qiskit.org/learn/intro-qc-qh

    This is the 1st of 5 videos describing the algorithm. There's no time for the rest, but this will give you an idea.

    RPI's quantum group, including me, visited Asfaw last spring just before everything shut down.

4.2 HHL algorithm to solve a linear system of equations

  1. Quantum Machine Learning - 37 - Overview of the HHL Algorithm 5:48.

    A quick, deep intro.

  2. Quantum algorithm for solving linear equations 36:31.

    Quite understandable, but there's no time to show it.

    1. HHL Algorithm

    This is in Huawei HiQ, an open-source software framework for quantum computing.

  3. https://en.wikipedia.org/wiki/Quantum_algorithm_for_linear_systems_of_equations

5 Using IBM's quantum computer

https://quantum-computing.ibm.com/

6 Using various quantum computers

  1. https://arcb.csc.ncsu.edu/~mueller/qc/qc-tut/

  2. https://quantumcomputingreport.com/tools/

  3. https://quantum-computing.ibm.com/

  4. https://aws.amazon.com/braket/

PAR Class 22, Mon 2022-04-04

1 Newegg quantum computer

THE WORLD'S SMALLEST PC?! "The Dot" Coming Soon From Newegg (3:01)

2 Student presentations today

  1. Adrian-James G: IoT/5G

  2. Saaif A: Accelerating Distributed Reinforcement Learning

  3. Ryan W: Accelerated Vehicle Routing Algorithms [S41708]

  4. Cohen D: Accelerating Automobile Noise and Vibration Analysis Using GPUs

  5. Kevin L: Accelerate Your AI and HPC Journey on Google Cloud (Presented by Google Cloud) [S42583].

PAR Class 21, Thu 2022-03-31

1 Grades

Homeworks up through 6 have been graded.

A course grade up to Mar 30 was calculated, weighting all homeworks the same, and uploaded to LMS.

You may convert it to a letter grade as follows:

90: A, 85: A-, 80: B+, 75: B, 70: B-, 65: C+, 60: C, 55: C-, 50: D+, 45: D, 0: F

2 Student presentations

2.1 Today

  1. Mark B and Phu T.T: GTC Cybersecurity

  2. Zach O: Advanced Technologies and Techniques for Debugging CUDA GPU Applications [S41252].

  3. Amanda M: Autonomous Vehicles

2.2 Mon 4/4

  1. Adrian-James G: IoT/5G

  2. Saaif A: Accelerating Distributed Reinforcement Learning

  3. Ryan W: Accelerated Vehicle Routing Algorithms [S41708]

  4. Cohen D: Accelerating Automobile Noise and Vibration Analysis Using GPUs

2.3 Thurs 4/7

  1. Reagan W, Ean S, Tom P, and Allan N: computer vision

3 Quantum computing, 3

continuing this, the 3rd major topic listed in the syllabus (after Multicore Intel Xeon and Nvidia GPU accelerator).

4 Looking for a term project topic?

  1. investigate some other widely used parallel tool, like Kokkos or Intel TBB, or parallel STL or the parallel primitives in C++20.

  2. try NVIDIA's ML tools.

  3. try a parallel VM on Amazon EC2 and compare it to parallel.ecse.

PAR Class 20, Mon 2022-03-28


1 Quantum computing, 2

  1. My quantum computing summary.

    We'll do parts of it from time to time.

  2. I present quantum computing for computer scientists, mostly on a level above the physics.

  3. Videos. They present the subject better than I can. They show you leaders in the field. My value is in selecting the best ones for you.

    1. Visualization of superposition | QuTech Academy (0:44)

    2. David Deutsch - Why is the Quantum so Strange? (8:43)

    3. Quantum Computing for Computer Scientists (1:28:30), the 1st 1/2 hour or so.

    4. Quantum Algorithms (2:52)

      Which problems can quantum computers solve exponentially faster than classical computers? David Gosset, IBM quantum computing research scientist, explains why algorithms are key to finding out.

    5. Quantum Computing 2021 Update (13:12)

    6. Quantum Computing 2020 Update 11:58.

    7. Quantum Computing Concepts – Quantum Hardware (3:22)

    8. https://www.telegraph.co.uk/technology/2020/09/02/britain-must-act-fast-prevent-brain-drain-quantum-computing/ - good summary of programs around the world.

    9. The Map of Quantum Physics (21:17), Jul 31, 2020.

PAR Class 19, Thu 2022-03-24

1 Driver/library version mismatch error

was caused by my updating some packages but not rebooting, causing new libraries to try to use the old kernel driver.

2 Thrust, 5

  1. Examples

    1. bucket_sort2d.cu overlays a grid on a set of 2D points and finds the points in each grid cell (bucket).

      1. The tuple is an efficient class for a short vector of fixed length.

      2. Note how random numbers are generated. You combine an engine that produces random output with a distribution.

        However you might need more complicated coding to make the numbers good when executing in parallel. See monte_carlo_disjoint_sequences.cu.

      3. The problem is that the number of points in each cell is unpredictable.

      4. The cell containing each point is computed, and then the cell numbers and the points are sorted together, by cell, to bring together the points in each cell.

      5. Then lower_bound and upper_bound are used to find each bucket in that sorted vector of points.

      6. See this lower_bound description.

    2. mode.cu (sketched after this list) shows:

      1. Counting the number of unique keys in a vector.

        1. Sort the vector.

        2. Do an inner_product. However, instead of the operators being times and plus, they are not-equal-to-the-next-element and plus.

      2. Counting their multiplicity.

        1. Construct vectors, sized at the number of unique keys, to hold the unique keys and counts.

        2. Do a reduce_by_keys on a constant_iterator using the sorted vector as the keys. For each range of identical keys, it sums the constant_iterator. That is, it counts the number of identical keys.

        3. Write a vector of unique keys and a vector of the counts.

      3. Finding the most used key (the mode).

        1. Do max_element on the counts vector.

    3. repeated_range.cu repeats each element of an N-vector K times: repeated_range([0, 1, 2, 3], 2) -> [0, 0, 1, 1, 2, 2, 3, 3]. It's a lite version of expand.cu, but uses a different technique.

      1. Here, N=4 and K=2.

      2. The idea is to construct a new iterator, repeated_range, that, when read and incremented, will return the proper output elements.

      3. The construction stores the relevant info in structure components of the variable.

      4. Treating its value like a subscript in the range [0,N*K), it divides that value by K and returns that element of its input.

      See also strided_range.cu and tiled_range.cu.
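
Here is a minimal sketch of the mode.cu technique described above, with made-up data; it is not the distributed example itself, just the same sequence of calls (sort, inner_product to count the unique keys, reduce_by_key over a constant_iterator to count multiplicities, then max_element).

  #include <thrust/device_vector.h>
  #include <thrust/sort.h>
  #include <thrust/inner_product.h>
  #include <thrust/reduce.h>
  #include <thrust/extrema.h>
  #include <thrust/iterator/constant_iterator.h>
  #include <thrust/functional.h>
  #include <iostream>

  int main() {
    int h[] = {3, 1, 3, 2, 3, 1, 2, 3};                 // made-up data
    thrust::device_vector<int> data(h, h + 8);

    // 1. Sort, so that equal keys become adjacent.
    thrust::sort(data.begin(), data.end());

    // 2. Count the unique keys: "not equal to the next element", summed with plus, starting at 1.
    int num_unique = thrust::inner_product(data.begin(), data.end() - 1,
                                           data.begin() + 1, 1,
                                           thrust::plus<int>(),
                                           thrust::not_equal_to<int>());

    // 3. reduce_by_key over constant_iterator(1): for each run of identical keys,
    //    it sums the 1s, i.e., counts that key's multiplicity.
    thrust::device_vector<int> keys(num_unique), counts(num_unique);
    thrust::reduce_by_key(data.begin(), data.end(),
                          thrust::constant_iterator<int>(1),
                          keys.begin(), counts.begin());

    // 4. The mode is the key with the largest count.
    auto maxit = thrust::max_element(counts.begin(), counts.end());
    int mode = keys[maxit - counts.begin()];
    std::cout << "mode = " << mode << " (count " << *maxit << ")\n";
  }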

2.1 Backends

  1. The Thrust device can be CUDA, OpenMP, TBB, etc.

  2. You can specify it in 2 ways (see the sketch below):

    1. by adding an extra arg at the start of a function's arg list.

    2. with an envar

      https://github.com/thrust/thrust/wiki/Host-Backends

      https://github.com/thrust/thrust/wiki/Device-Backends
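
Here is a minimal sketch of the first way, the extra execution-policy argument. The compile-time macro in the comment is my understanding of how the device backend is usually chosen; check it against the wiki pages above.

  #include <thrust/reduce.h>
  #include <thrust/execution_policy.h>
  #include <thrust/device_vector.h>
  #include <vector>
  #include <iostream>

  int main() {
    std::vector<int> h(1000, 1);
    // Explicit policy: run this call with the host backend.
    int s1 = thrust::reduce(thrust::host, h.begin(), h.end());
    // A device_vector implies the device backend, which is selected at compile time,
    // e.g. -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP to use OpenMP instead of CUDA.
    thrust::device_vector<int> d(h.begin(), h.end());
    int s2 = thrust::reduce(d.begin(), d.end());
    std::cout << s1 << " " << s2 << "\n";
  }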

3 Linux HMM (Heterogeneous Memory Management)

This is hardware support to make programming devices like GPUs easier. It took years to get into the official kernel, presumably because you don't want bugs in memory management.

This is also a nice intro into current operating systems concerns. Do our operating systems courses talk about this?

  1. https://www.kernel.org/doc/html/v4.18/vm/hmm.html

  2. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=2ahUKEwiCuM-y2t7gAhUD24MKHZWlCeYQFjAEegQIBhAC&url=http%3A%2F%2Fon-demand.gputechconf.com%2Fgtc%2F2017%2Fpresentation%2Fs7764_john-hubbardgpus-using-hmm-blur-the-lines-between-cpu-and-gpu.pdf&usg=AOvVaw1c7bYo2YO5n8OtD0Vw9hbs

4 Final projects

  1. See the syllabus.

  2. April 4: team and title.

  3. April 11: 100 word project report.

  4. April 18, 25: 10 minute presentations. Email me with preferred date.

  5. Wed Apr 27: report due.

5 Quantum computing, 1

My quantum computing summary.

6 No classes

April 7, 21. If you wish, I can assign some excellent quantum computing videos to watch.

PAR Homework 8, Student talk, 2022-03-28 to 2022-04-11.

Student talks - 2nd set

  1. Rules:

    1. Any number of students may work together to present one talk.

    2. Prepare and give a 5-10 minute talk on Mon 3/28, Thurs 3/30, Mon 4/4, or Mon 4/11. A team would jointly give one 5-10 minute talk. I'll pack six talks per day, filling the earliest days first, and fill out unused time with new material.

    3. Email me with your team, topic, and preferred dates.

  2. Your topic may be anything from GTC 2022 or a comparable conference.

PAR Class 18, Mon 2022-03-21

1 Trivia question

Today is the spring equinox. However, today in Albany, sunrise is at 6:56am, sunset 7:08pm. That makes the day 12 hours and 12 minutes long, not 12 hours. Why?

2 Thrust - 4

2.1 Use placeholder and lambdas in transform

See saxpy2 and saxpy3 in files/nvidia/thrust/rpi/

Continue examples from last class.
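
Here is a minimal sketch of both styles; it is not necessarily identical to the saxpy2/saxpy3 files, and the device lambda assumes compiling with nvcc --extended-lambda. The sizes and the value of a are made up.

  #include <thrust/device_vector.h>
  #include <thrust/transform.h>
  #include <thrust/functional.h>

  int main() {
    using namespace thrust::placeholders;
    const float a = 2.0f;
    thrust::device_vector<float> x(8, 1.0f), y(8, 3.0f);

    // Placeholder version: y = a*x + y, with no functor class.
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), a * _1 + _2);

    // Lambda version: the same computation as an anonymous device function.
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(),
                      [a] __host__ __device__ (float xi, float yi) { return a * xi + yi; });
  }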

PAR Class 17, Thu 2022-03-17

1 Thrust - 3

  1. I'll lecture from .../files/nvidia/thrust/doc/GTC_2010_Part_2_Thrust_By_Example.pdf on parallel.ecse starting around slide 23.

  2. Nvidia's thrust examples are now in /parclass/2022/files/nvidia/thrust/examples .

  3. The github repository is here: https://github.com/NVIDIA/thrust

    It references https://godbolt.org, a web-based compiler explorer.

1.1 CUDACast videos on Thrust

I won't cover these in class; they're presented in case you're interested.

  1. CUDACast #15 - Introduction to Thrust

  2. CUDACast #16 - Thrust Algorithms and Custom Operators

1.2 Examples

The point of these examples is to show you some techniques for parallel programming that are not obvious. I.e., you probably wouldn't think of them if you did not already know them.

  1. I rewrote thrust/examples-1.8/tiled_range.cu into thrust/rpi/tiled_range2.cu.

    It is now much shorter and much clearer. All the work is done here:

    gather(make_transform_iterator(make_counting_iterator(0), _1%N), make_transform_iterator(make_counting_iterator(N*C), _1%N), data.begin(), V.begin());

    1. make_counting_iterator(0) returns pointers to the sequence 0, 1, 2, ...

    2. _1%N is a function computing modulo N.

    3. make_transform_iterator(make_counting_iterator(0), _1%N) returns pointers to the sequence 0%N, 1%N, ...

    4. gather populates V. The i-th element of V gets make_transform_iterator...+i element of data, i.e., the i%N-th element of data.

  2. tiled_range3.cu is even shorter. Instead of writing an output vector, it constructs an iterator for a virtual output vector:

    auto output=make_permutation_iterator(data, make_transform_iterator(make_counting_iterator(0), _1%N));

    1. *(output+i) is *(data+(i%N)).

    2. You can get as many tiles as you want by iterating.

  3. repeated_range2.cu is my improvement on repeated_range.cu:

    auto output=make_permutation_iterator(data.begin(), make_transform_iterator(make_counting_iterator(0), _1/3));

    1. make_transform_iterator(make_counting_iterator(0), _1/3)) returns pointers to the sequence 0,0,0,1,1,1,2,2,2, ...

  4. Unmodified thrust examples:

    1. expand.cu takes a vector like V= [0, 10, 20, 30, 40] and a vector of repetition counts, like C= [2, 1, 0, 3, 1]. Expand repeats each element of V the appropriate number of times, giving [0, 0, 10, 30, 30, 30, 40]. The process is as follows.

      1. Since the output vector will be longer than the input, the main program computes the output size, by reduce summing C, and constructs a vector to hold the output.

      2. Exclusive_scan C to obtain output offsets for each input element: C2 = [0, 2, 3, 3, 6].

      3. Scatter_if the nonzero counts into their corresponding output positions. A counting iterator, [0, 1, 2, 3, 4] is mapped with C2, using C as the stencil, giving C3 = [0, 0, 1, 3, 0, 0, 4].

      4. An inclusive_scan with max fills in the holes in C3, to give C4 = [0, 0, 1, 3, 3, 3, 4].

      5. Gather uses C4 to gather elements of V: [0, 0, 10, 30, 30, 30, 40].

    2. set_operations.cu. This shows methods of handling an operation whose output is of unpredictable size. The question is, is space or time more important?

      1. If the maximum possible output size is reasonable, then construct an output vector of that size, use it, and then erase it down to its actual size.

      2. Or, run the operation twice. The 1st time, write to a discard_iterator, and remember only the size of the written data. Then, construct an output vector of exactly the right size, and run the operation again.

        I use this technique a lot with ragged arrays in sequential programs.

    3. sparse_vector.cu represents and sums sparse vectors.

      1. A sparse vector has mostly 0s.

      2. The representation is a vector of element indices and another vector of values.

      3. Adding two sparse vectors goes as follows.

        1. Allocate temporary index and element vectors of the max possible size (the sum of the sizes of the two inputs).

        2. Catenate the input vectors.

        3. Sort by index.

        4. Find the number of unique indices by applying inner_product with addition and not-equal-to-next-element to the indices, then adding one.

          E.g., applied to these indices: [0, 3, 3, 4, 5, 5, 5, 8], it gives 5.

        5. Allocate exactly enough space for the output.

        6. Apply reduce_by_key to the indices and elements to add elements with the same keys.

          The size of the output is the number of unique keys.

  5. What's the best way to sort 16000 sets of 1000 numbers each? E.g., sort the rows of a 16000x1000 array? On geoxeon, /pc/thrust/rpi/comparesorts.cu, which I copied from http://stackoverflow.com/questions/28150098/how-to-use-thrust-to-sort-the-rows-of-a-matrix|stackoverflow, compares three methods:

    1. Call the thrust sort 16000 times, once per set. That took 10 secs.

    2. Sort the whole list of 16,000,000 numbers together. Then sort it again by key, with the keys being the set number, to bring the elements of each set together. Since the sort is stable, this maintains the order within each set. (This is also how radix sort works.) That took 0.04 secs.

    3. Call a thrust function (to sort a set) within another thrust function (that applies to each set). This is new in Thrust 1.8. That took 0.3 secs.

This is a surprising and useful paradigm. It works because:

  1. There's an overhead to starting each thrust function, and

  2. Radix sort, which thrust uses for ints, takes linear time.

  6. The Thrust examples teach several non-intuitive paradigms. As I figure them out, I'll describe a few. My descriptions are modified and expanded versions of the comments in the programs. This is not a list of all the useful programs, but only of some where I am adding to their comments.

    1. arbitrary_transformation.cu and dot_products_with_zip.cu show the very useful zip_iterator. Using it is a 2-step process.

      1. Combine the separate iterators into a tuple.

      2. Construct a zip iterator from the tuple.

      Note that operator() is now a template.

    2. boundingbox.cu finds the bounding box around a set of 2D points.

      The main idea is to do a reduce. However, the combining operation, instead of addition, is to combine two bounding boxes to find the box around them.

      The combining op can be any associative op.
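
Here is a minimal sketch of that bounding-box reduce, with made-up random points; it follows the idea of boundingbox.cu but is not the distributed example itself.

  #include <thrust/host_vector.h>
  #include <thrust/device_vector.h>
  #include <thrust/reduce.h>
  #include <cfloat>
  #include <cmath>
  #include <cstdlib>
  #include <iostream>

  struct BBox { float xmin, ymin, xmax, ymax; };

  // Combine two boxes into the box enclosing both; it's associative, so it's a legal reduce op.
  struct MergeBox {
    __host__ __device__
    BBox operator()(const BBox &a, const BBox &b) const {
      BBox r;
      r.xmin = fminf(a.xmin, b.xmin);  r.ymin = fminf(a.ymin, b.ymin);
      r.xmax = fmaxf(a.xmax, b.xmax);  r.ymax = fmaxf(a.ymax, b.ymax);
      return r;
    }
  };

  int main() {
    const int n = 1000;
    thrust::host_vector<BBox> h(n);
    for (int i = 0; i < n; ++i) {
      float x = rand() / float(RAND_MAX), y = rand() / float(RAND_MAX);
      h[i] = BBox{x, y, x, y};                 // each point starts as a degenerate box
    }
    thrust::device_vector<BBox> d = h;
    BBox empty = {FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX};
    BBox box = thrust::reduce(d.begin(), d.end(), empty, MergeBox());
    std::cout << "(" << box.xmin << "," << box.ymin << ") to ("
              << box.xmax << "," << box.ymax << ")\n";
  }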

2 Homework 7

is online.

PAR Homework 7, due Thu 2022-03-24

Play with Thrust, lambdas, and placeholder notation

Modify some Thrust example programs to use lambdas and placeholder notation. Good choices might be saxpy, repeated_range, bounding_box, or simple_moving_average.

Teams are ok.

PAR Class 16, Mon 2022-03-14


1 Thrust - 2

  1. I'll lecture from .../files/nvidia/thrust/doc/GTC_2010_Part_2_Thrust_By_Example.pdf on parallel.ecse

    Why something from 2010? It's well presented, hasn't been updated by NVidia, and is current, apart from not using lambdas.

    The deeper meaning of this stuff is the set of parallel programming and algorithm paradigms.

  2. On parallel in .../files/nvidia/thrust are many little demo programs from the thrust distribution, with my additions.

  3. Thrust is fast because the functions that look like they would need linear time really take only log time in parallel.

  4. In functions like reduce and transform, you often see an argument like thrust::multiplies<float>(). The syntax is as follows:

    1. thrust::multiplies<float> is a class.

    2. It overloads operator().

    3. However, in the call to reduce, thrust::multiplies<float>() is calling the default constructor to construct a variable of class thrust::multiplies<float>, and passing it to reduce.

    4. reduce will treat its argument as a function name and call it with an argument, triggering operator().

    5. You may also create your own variable of that class, e.g., thrust::multiplies<float> foo. Then you just say foo in the argument list, not foo().

    6. The optimizing compiler will replace the operator() function call with the defining expression and then continue optimizing. So, there is no overhead, unlike if you passed in a pointer to a function.

  5. Sometimes, e.g., in saxpy.cu, you see saxpy_functor(A).

    1. The class saxpy_functor has a constructor taking one argument.

    2. saxpy_functor(A) constructs and returns a variable of class saxpy_functor and stores A in the variable.

    3. The class also overloads operator().

    4. (Let's call the new variable foo.) foo(x, y) calls operator() for foo; its execution uses the stored A.

    5. Effectively, we did a closure of saxpy_functor; that is, we bound a value and returned a new, more restricted, function object.
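
Here is a minimal sketch of that pattern, modeled on Thrust's saxpy example; the sizes and the value 3.0f are made up.

  #include <thrust/device_vector.h>
  #include <thrust/transform.h>

  struct saxpy_functor {
    const float a;                                  // the value bound by the constructor
    saxpy_functor(float a_) : a(a_) {}
    __host__ __device__
    float operator()(float x, float y) const { return a * x + y; }
  };

  int main() {
    thrust::device_vector<float> x(4, 1.0f), y(4, 2.0f);
    // saxpy_functor(3.0f) calls the constructor; the resulting object remembers a=3.
    thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy_functor(3.0f));
    // Every element of y is now 3*1 + 2 = 5.
  }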

1.1 Bug

I found and reported a bug in version 100904. This version does not work with OpenMP. It was immediately closed because they already knew about it. They just hadn't told us users.

A while ago I reported a bug in nvcc, where it went into an infinite loop for a certain array size. The next minor version of CUDA was released the next day.

Two observations:

  1. I'm good at breaking SW by expecting it to meet its specs.

  2. Nvidia is responsive, which I like.

PAR Class 15, Thu 2022-03-03

1 NVIDIA GTC conference

  1. https://www.nvidia.com/gtc/

  2. March 21-24.

  3. mostly free.

  4. I encourage you to browse around.

  5. There'll probably be a homework or 2 based on this.

2 Types of virtualization

  1. There are many possible levels of virtualization.

  2. At a low level, one might emulate the HW. This is quite flexible but too slow.

  3. At a higher level, a basic OS runs separate virtual machines, each with its own file system.

    1. Harmless machine instructions execute normally.

    2. Powerful ones are trapped and emulated.

    3. This requires a properly designed instruction set.

    4. IBM has been doing this commercially for 40 years, with something originally called CP/CMS.

      I think that IBM lucked out with their instruction set design, and didn't plan it.

    5. Well-behaved clients might have problematic code edited before running, to speed the execution.

      I think that Vmware does that.

    6. It seems that compute-intensive clients might have almost no overhead.

    7. However, the emulated file system can be pretty slow.

    8. With Vmware, several clients can all be different OSs, and the host can be any compatible OS.

    9. E.g., I've had a linux vmware host simultaneously running both linux and windows clients.

  4. In linux, root no longer has infinite power.

  5. The next level of virtualization has a nontrivial host OS, but separates the clients from each other.

    1. They see a private view of the process space, file system, and other resources.

    2. This is lighter weight, e.g., quicker to start a VM and less overhead.

    3. The host and client must be the same OS.

    4. This might be called paravirtualization.

    5. Linux supports this with things like namespace isolation and control groups (cgroups). Wikipedia et al describe this.

    6. Ubuntu snaps do something like this.

      E.g., firefox is distributed as a snap to increase isolation and security.

      However starting firefox now takes 15 sec.

  6. The next level up is the normal linux security.

    1. You can see all the processes and similar resources.

    2. The file system has the usual protections.

    3. This is hard to make secure when doing something complicated.

    4. How do I protect myself from firefox going bad?

    5. It's easy to describe what it should be allowed to do, but almost impossible to implement.

    6. That includes using apparmor etc.

  7. Who guards the guards? I get spammed at a unique address that I used only to register with apparmor.

  8. In theory, packaging an app in a virtual machine has fewer dependencies and is more secure.

  9. You can run the vm w/o changes on different hosts.

  10. A Vmware client can run w/o change on both linux and windows hosts.

  11. You can run a client on your own hardware, then spill over to a commercial cloudy platform when necessary.

3 Docker

  1. Docker is a popular lightweight virtualization system, which Nvidia uses to distribute SW.

  2. Docker runs images that define virtual machines.

  3. Docker images share resources with the host, in a controlled manner.

  4. For simple images (which nvidia/cuda is not), starting the image is so cheap that you can do it to run one command, and encapsulate the whole process in a shell function.

  5. Docker is worth learning, apart from its use by Nvidia for parallel computing. You might also look up Kubernetes.

  6. More info:

    1. https://www.docker.com/

    2. https://www.zdnet.com/article/what-is-docker-and-why-is-it-so-darn-popular/

    3. https://opensource.com/resources/what-docker

  7. I installed docker on parallel to run nvidia images like pgc++. Then I removed it because it wasn't necessary, was complicated, and was insecure.

4 Several forms of C++ functions

  1. Traditional top level function

    auto add(int a, int b) { return a+b;}

    You can pass this to a function. This really passes a pointer to the function. It doesn't optimize across the call.

    These have global scope.

  2. Note auto. It's underused.

  3. Overload operator() in a new class

    Each different variable of the class is a different function. The function can use the variable's value. This is a closure.

    This is local to the containing block.

    This form optimizes well.

  4. Lambda, or anon function.

    auto add = [](int a, int b) { return a+b;};

    This is local to the containing block.

    This form optimizes well.

  5. Placeholder notation.

    As an argument in, e.g., thrust transform, you can do this:

    transform(..., _1+_2);

    This is nice and short.

    As this is implemented by overloading the operators, the syntax of the expression is limited to what was overloaded.
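
Here is a minimal sketch showing all four forms together; AddK, the values, and the vectors are made up, and it assumes compiling with nvcc since the functor and placeholder are passed to Thrust.

  #include <thrust/device_vector.h>
  #include <thrust/transform.h>
  #include <thrust/functional.h>

  // 1. Traditional top-level function; passing it passes a pointer, which blocks inlining.
  int add(int a, int b) { return a + b; }

  // 2. A class overloading operator(); each object is a function with bound state (a closure).
  struct AddK {
    int k;
    AddK(int k_) : k(k_) {}
    __host__ __device__ int operator()(int x) const { return x + k; }
  };

  int main() {
    using namespace thrust::placeholders;
    thrust::device_vector<int> v(4, 1), w(4);

    // 3. Lambda: an anonymous local function object.
    auto add3 = [](int a, int b) { return a + b; };
    int s = add(1, 2) + add3(3, 4);                  // 3 + 7

    thrust::transform(v.begin(), v.end(), w.begin(), AddK(10));   // functor with bound state
    // 4. Placeholder notation: the expression _1 + 10 builds a function object.
    thrust::transform(v.begin(), v.end(), w.begin(), _1 + 10);
    return s > 0 ? 0 : 1;
  }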

5 Thrust - 1

  1. Thrust is an API that looks like STL. Its backend can be GPU-CUDA, OpenMP, TBB, or sequential host-based code.

  2. Functional-programming philosophy.

  3. Easier programming, once you get used to it.

  4. Source code is compact.

  5. Compiled code runs efficiently.

  6. Uses some unusual C++ techniques, like overloading operator().

  7. Since the Stanford slides were created, Thrust has adopted unified addressing, so that pointers know whether they are host or device.

  8. Links:

    1. https://docs.nvidia.com/cuda/thrust/index.html NVIDIA API

    2. https://developer.nvidia.com/thrust Developer doc

    3. https://thrust.github.io/

    4. https://github.com/NVIDIA/thrust

    5. CUDA Tutorial 11: Thrust Library Intro 19:35

    6. CUDACast #15 - Introduction to Thrust 6:23

    7. Introduction to GPU Programming with CUDA and Thrust 1:18:19

      All the preceding contains other interesting links.

6 Just using parallelism

  1. You just want to run a big application, and figure a parallel computer might be useful.

  2. You don't care for the theory.

  3. Then try to implement your application using existing parallel apps like Matlab or Mathematica.

  4. Parallel matrix ops and Fourier transforms are already available.

PAR Homework 6, Mon 2022-02-28

Rules

  1. Due Mon 2022-03-14.

  2. Submit the answers to Gradescope.

  3. You may do homeworks in teams of 2 students. Create a gradescope team and make one submission with both names.

Question

Read the NVIDIA Ampere info linked on Class 13, and other places should you choose to look around.

Write a page on what's new, interesting, and different about Ampere.

PAR Class 14, Mon 2022-02-28

1 Nvidia GPU and accelerated computing, part 8

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

Start at Module 18 - MPI, then 23 - Dynamic parallelism.

2 OpenCL

  1. Module 20.

  2. Apple's competition to CUDA.

  3. is largely CUDA, but with the names changed to look like OpenGL, and it's clunkier.

  4. not interesting so long as Nvidia is dominant.

3 Nvidia primary documentation

Generally more up-to-date and accurate, but drier. A little disorganized because it keeps growing. The root is here: https://docs.nvidia.com/

Two major relevant sets are:

  1. https://docs.nvidia.com/hpc-sdk/index.html

  2. https://docs.nvidia.com/cuda/index.html

4 Other Nvidia features

We've seen almost everything, except:

  1. Texture and surface maps.

  2. ML HW like A=BC+D for 4x4 matrices.

  3. Ray tracing HW, to compute a ray's intersections with boxes.

  4. Cooperative groups: with Ampere, subsets of a warp can synchronize.

  5. Subsets of a GPU can be defined as virtual GPUS, which are walled off from each other.

  6. Memory can be compressed when stored, making a space-time tradeoff.

  7. The terminology CUDA core is obsolete. Now, they say that an SM has, perhaps, 32 single-float units, 32 integer units, 32 CUDA instruction dispatchers, and 16 double-float units, etc. Each unit operates independently.

5 Nvidia conceptual hierarchy

As always, this is as I understand it, and could be wrong. Nvidia uses their own terminology inconsistently. They may use one name for two things (E.g., Tesla and GPU), and may use two names for one thing (e.g., module and accelerator). As time progresses, they change their terminology.

  1. At the bottom is the hardware micro-architecture. This is an API that defines things like the available operations. The last several Nvidia micro-architecture generations are, in order, Tesla (which introduced unified shaders), Fermi, Kepler, Maxwell (introduced in 2014), Pascal (2016), and Volta (2018).

  2. Each micro-architecture is implemented in several different microprocessors. E.g., the Kepler micro-architecture is embodied in the GK107, GK110, etc. Pascal is GP104 etc. The second letter describes the micro-architecture. Different microprocessors with the same micro-architecture may have different amounts of various resources, like the number of processors and clock rate.

  3. To be used, microprocessors are embedded in graphics cards, aka modules or accelerators, which are grouped into series such as GeForce, Quadro, etc. Confusingly, there is a Tesla computing module that may use any of the Tesla, Fermi, or Kepler micro-architectures. Two different modules using the same microprocessor may have different amounts of memory and other resources. These are the components that you buy and insert into a computer. A typical name is GeForce GTX1080.

  4. There are many slightly different accelerators with the same architecture, but different clock speeds and memory, e.g. 1080, 1070, 1060, ...

  5. The same accelerator may be manufactured by different vendors, as well as by Nvidia. These different versions may have slightly different parameters. Nvidia's reference version may be relatively low performance.

  6. The term GPU sometimes refers to the microprocessor and sometimes to the module.

  7. There are at least four families of modules: GeForce for gamers, Quadro for professionals, Tesla for computation, and Tegra for mobility.

  8. Nvidia uses the term Tesla in two unrelated ways. It is an obsolete architecture generation and a module family.

  9. Geoxeon has a (Maxwell) GeForce GTX Titan and a (Kepler) Tesla K20xm. Parallel has a (Volta) RTX 8000 and (Pascal) GeForce GTX 1080. We also have an unused (Kepler) Quadro K5000.

  10. Since the highest-end (Tesla) modules don't have video out, they are also called something like compute modules.

6 GPU range of speeds

Here is an example of the wide range of Nvidia GPU speeds; all times are +-20%.

The Quadro RTX 8000 has 4608 CUDA cores @ 1.77GHz and 48GB of memory. matrixMulCUBLAS runs at 5310 GFlops. The specs claim 16 TFlops. However those numbers understate its capabilities because it also has 576 Tensor cores and 72 ray tracing cores to cast 11G rays/sec.

The GeForce GTX 1080 has 2560 CUDA cores @ 1.73GHz and 8GB of memory. matrixMulCUBLAS runs at 3136 GFlops. However the reported time (0.063 msec) is so small that it may be inaccurate. The quoted speed of the 1080 is about triple that. I'm impressed that the measured performance is so close.

The Quadro K2100M in my Lenovo W540 laptop has 576 CUDA cores @ 0.67 GHz and 2GB of memory. matrixMulCUBLAS runs at 320 GFlops. The time on the GPU was about .7 msec, and on the CPU 600 msec.

It's nice that the performance almost scaled with the number of cores and clock speed.

7 CUDA

7.1 Versions

  1. CUDA has a capability version, whose major number corresponds to the micro-architecture generation. Kepler is 3.x. The K20xm is 3.5. The GTX 1080 is 6.1. The RTX 8000 is 7.5. Here is a table of the properties of different compute capabilities. However, that table is not completely consistent with what deviceQuery shows, e.g., the shared memory size.

  2. nvcc, the CUDA compiler, can be told which capabilities (aka architectures) to compile for. They can be given as a real architecture, e.g., sm_61, or a virtual architecture. e.g., compute_61.

  3. The CUDA driver and runtime also have a software version, defining things like available C++ functions. The latest is 10.1. This is unrelated to the capability version.

7.2 Misc

  1. With CUDA, the dominant problem in program optimization is optimizing the data flow. Getting the data quickly to the cores is harder than processing it. It helps big to have regular arrays, where each core reads or writes a successive entry.

    This is analogous to the hardware fact that wires are bigger (hence, more expensive) than gates.

  2. That is the opposite optimization to OpenMP, where having different threads writing to adjacent addresses will cause the false sharing problem.

  3. Nvidia CUDA FAQ

    1. has links to other Nvidia docs.

    2. can be a little old.

8 Types of memory allocation

Here's a brief overview of my understanding of the various places that you can assign memory in a program.

  1. Static. Define a fixed-size global array. The variable is constructed at compile time, so accesses might perhaps be faster. Global vars with non-default initial values increase the executable file size. If they're large enough, you need to use the compiler option -mcmodel=medium or -mcmodel=large. They cause the compiler to generate wider addresses. I don't know the effect on the program's size or speed, but suspect that it's small.

  2. Stack. Define local arrays, that are created and freed as the routine is entered and exited. Their addresses relative to the base of this call frame may be constant. The default stack size is 8MB. You can increase this with the command ulimit or in the program as shown in stacksize.cc. I believe that in OpenMP, the max stacksize may be allocated when each thread is created. Then, a really big stacksize might have a penalty.

  3. Heap. You use new and delete. Variables are constructed whenever you want. The more objects on the heap, the more time that each new or delete takes. If you have lots of objects consider using placement new or creating an array of them.

    For CUDA, some variables must be on the heap.

I like to use static, then stack, and heap only when necessary. However, allocating a few large blocks on the heap is also fast.
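
Here is a minimal sketch of the three kinds, with made-up sizes:

  #include <vector>

  const int MAXN = 1000000;            // max dataset size compiled into the program
  static float global_buf[MAXN];       // 1. static: constructed at compile/load time

  void process(int n) {
    float scratch[1024];               // 2. stack: created and freed with the call frame (8MB default limit)
    std::vector<float> big(n);         // 3. heap: one large allocation, rather than many small ones
    scratch[0] = big[0] = global_buf[0];
  }

  int main() { process(100000); }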

Google's allocator is noticeably better than the default one. To use it, link your programs with -ltcmalloc. You can often use it on an existing executable foo thus:

LD_PRELOAD="/usr/lib/libtcmalloc.so" foo

I found it to save 15% to 30% in time.

Another memory concern is speed. Parallel has a NUMA (Non Uniform Memory Architecture). It has two 14-core Xeons. Each Xeon has 128GB of main memory attached. Although all 256GB are in a common address space, accessing memory attached to the same Xeon that the thread is running on is faster.

The following is what I think based on some research, but may be wrong: A 4KB page of memory is assigned to a specific Xeon when it is first written (not when it is reserved). So, different pages of a large array may be on different Xeons. This can be used to optimize things. This gets more fun with 8-processor systems.

All that is separate from cache issues.

You can also assign your OpenMP threads to specific cores. This affects speed in ways I don't understand. The issues are resource sharing vs conflicts.

9 Nvidia GPU summary

Here's a summary of the Nvidia Pascal GP104 GPU architecture as I understand it. It's more compact than I've found elsewhere. I'll add to it from time to time. Some numbers are probably wrong.

  1. The host is the CPU.

  2. The device is the GPU.

  3. The device contains 20 streaming multiprocessors (SMs).

    Different GPU generations have used the terms SMX or SMM.

  4. A thread is a sequential program with private and shared memory, program counter, etc.

  5. Threads are grouped, 32 at a time, into warps.

  6. Warps of threads are grouped into blocks.

    Often the warps are only implicit, and we consider that the threads are grouped directly into blocks.

    That abstract hides details that may be important; see below.

  7. Blocks of threads are grouped into a grid, which is all the threads in the kernel.

  8. A kernel is a parallel program executing on the device.

    1. The kernel runs potentially thousands of threads.

    2. A kernel can create other kernels and wait for their completion.

    3. There may be a limit, e.g., 5 seconds, on a kernel's run time.

  9. Thread-level resources:

    1. Each thread can use up to 255 fast registers. Registers are private to the thread.

      All the threads in one block have their registers allocated from a fixed pool of 65536 registers. The more registers that each thread uses, the fewer warps in the block can run simultaneously.

    2. Each thread has 512KB slow local memory, allocated from the global memory.

    3. Local memory is used when not enough registers are available, and to store thread-local arrays.

  10. Warp-level resources:

    1. Threads are grouped, 32 at a time, into warps.

    2. Each warp executes as a SIMD, with one instruction register. At each cycle, every thread in a warp is either executing the same instruction, or is disabled. If the 32 threads want to execute 32 different instructions, then they will execute one after the other, sequentially.

      If you read in some NVidia doc that threads in a warp run independently, then continue reading the next page to get the info mentioned in the previous paragraph.

    3. If successive instructions in a warp do not depend on each other, then, if there are enough warp schedulers available, they may be executed in parallel. This is called Instruction Level Parallelism (ILP).

    4. For an array in local memory, which means that each thread will have its private copy, the elements for all the threads in a warp are interleaved to potentially increase the I/O rate.

      Therefore your program should try to have successive threads read successive words of arrays.

    5. A thread can read variables from other threads in the same warp, with the shuffle instruction. Typical operation are to read from the K-th next thread, to do a butterfly permutation, or to do an indexed read. This happens in parallel for the whole warp, and does not use shared memory.

    6. A warp vote combines a bit computed by each thread to report results like all or any.

  11. Block-level resources:

    1. A block may contain up to 1024 threads.

    2. Each block has access to 65536 fast 32-bit registers, for the use of its threads.

    3. Each block can use up to 49152 bytes of the SM's fast shared memory. The block's shared memory is shared by all the threads in the block, but is hidden from other blocks.

      Shared memory is basically a user-controllable cache of some global data. The saving comes from reusing that shared data several times after you loaded it from global memory once.

      Shared memory is interleaved in banks so that some access patterns are faster than others.

    4. Warps in a block run asynchronously and run different instructions. They are scheduled and executed as resources are available.

    5. However they are all running the same instruction sequence, perhaps at different points in it.

    6. That is called SPMD, single program multiple data.

    7. The threads in a block can be synchronized with __syncthreads().

      Because of how warps are scheduled, that can be slow.

    8. The threads in a block can be arranged into a 3D array, up to 1024x1024x64.

      That is for convenience, and does not increase performance (I think).

    9. I'll talk about textures later.

  12. Streaming Multiprocessor (SM) - level resources:

    1. Each SM has 128 single-precision CUDA cores, 64 double-precision units, 32 special function units, and 32 load/store units.

    2. In total, the GPU has 2560 CUDA cores.

    3. A CUDA core is akin to an ALU. The cores, and all the units, are pipelined.

    4. A CUDA core is much less powerful than one core of an Intel Xeon. My guess is 1/20th.

    5. Beware that, in the CUDA C Programming Guide, NVidia sometimes calls an SM a core.

    6. The limited number of, e.g., double precision units means that a DP instruction will need to be scheduled several times for all the threads to execute it. That's why DP is slower.

    7. Each SM has 4 warp schedulers and 8 instruction dispatch units.

    8. 64 warps can simultaneously reside in an SM.

    9. Therefore up to 32x64=2048 threads can be executed in parallel by an SM.

    10. Up to 16 blocks that can simultaneously be resident in an SM.

      However, if each block uses too many resources, like shared memory, then this number is reduced.

      Each block sits on only one SM; no block is split. However a block's warps are executed asynchronously (until synced).

    11. Each SM has 64KiB (?) fast memory to be divided between shared memory and an L1 cache. Typically, 48KiB (96?) is used for the shared memory, to be divided among its resident blocks, but that can be changed.

    12. The 48KB L1 cache can cache local or global memory.

    13. Each SM has a read-only data cache of 48KB to cache the global constant memory.

    14. Each SM has 8 texture units, and many other graphics capabilities.

    15. Each SM has 256KB of L2 cache.

  13. Grid-level resources:

    1. The blocks in a grid can be arranged into a 3D array, up to \((2^{31}-1, 2^{16}-1, 2^{16}-1)\).

    2. Blocks in a grid might run on different SMs.

    3. Blocks in a grid are queued and executed as resources are available, in an unpredictable parallel or serial order. Therefore they should be independent of each other.

    4. The number of instructions in a kernel is limited.

    5. Any thread can stop the kernel by calling assert.

  14. Device-level resources:

    1. There is a large and slow 48GB global memory, which persists from kernel to kernel.

      Transactions to global memory are 128 bytes.

      Host memory can also be memory-mapped into global memory, although the I/O rate will be lower.

      Reading from global memory can take hundreds of cycles. A warp that does this will be paused and another warp started. Such context switching is very efficient. Therefore device throughput stays high, although there is a latency. This is called Thread Level Parallelism (TLP) and is a major reason for GPU performance.

      That assumes that an SM has enough active warps that there is always another warp available for execution. That is a reason for having warps that do not use all the resources (registers etc) that they're allowed to.

    2. There is a 2MB L2 cache, for sharing data between SMs.

    3. There is a 64KiB small and fast global constant memory, which also persists from kernel to kernel. It is implemented as a piece of the global memory, made fast with caches.

      (Again, I'm still resolving this apparent contradiction).

    4. Grid Management Unit (GMU) schedules (pauses, executes, etc) grids on the device. This is more important because grids can start other grids (Dynamic Parallelism).

    5. Hyper-Q: 32 simultaneous CPU tasks can launch kernels into the queue; they don't block each other. If one kernel is waiting, another runs.

    6. CUDA Work Distributor (CWD) dispatches 32 active grids at a time to the SMs. There may be 1000s of grids queued and waiting.

    7. GPU Direct: Other devices can DMA the GPU memory.

    8. The base clock is 1607MHz.

    9. GFLOPS: 8873.

    10. Memory bandwidth: 320GB/s

  15. GPU-level resources:

    1. Being a Geforce product, there are many graphics facilities that we're not using.

    2. There are 4 Graphics processing clusters (GPCs) to do graphics stuff.

    3. Several perspective projections can be computed in parallel, for systems with several displays.

    4. There's HW for texture processing.

  16. Generational changes:

    1. With each new version, Nvidia tweaks the numbers. Some get higher, others get lower.

      1. E.g., Maxwell had little HW for double precision, and so that was slow.

      2. Pascal's clock speed is much higher.

  17. Refs:

    1. The CUDA program deviceDrv.

    2. http://developer.download.nvidia.com/compute/cuda/compute-docs/cuda-performance-report.pdf

    3. http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf

    4. Better Performance at Lower Occupancy, Vasily Volkov, UC Berkeley, 2010.

    5. https://www.pgroup.com/lit/articles/insider/v2n1a5.htm - well written but old.

    (I'll keep adding to this. Suggestions are welcome.)

10 More CUDA

  1. CUDA function qualifiers (see the sketch after this list):

    1. __global__ device function called from host, starting a kernel.

    2. __device__ device function called from device function.

    3. __host__ (default) host function called from host function.

  2. CUDA variable qualifiers:

    1. __shared__

    2. __device__ global

    3. __device__ __managed__ automatically paged between host and device.

    4. __constant__

    5. (nothing) register if scalar, or local if array or if no more registers available.

  3. If installing CUDA on your machine, this repository seems best:

    http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64

    That includes the Thrust headers but not example programs.
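
Here is a minimal sketch using several of the qualifiers above (a sum of squares with a block-level shared partial sum); the sizes and names are made up.

  #include <cstdio>

  __device__ __managed__ float total;              // global, automatically paged host<->device

  __device__ float square(float x) {               // device function, called from device code
    return x * x;
  }

  __global__ void sumsq(const float *a, int n) {   // kernel: called from the host, runs on the device
    __shared__ float partial[256];                 // shared: visible to all threads in this block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? square(a[i]) : 0.0f;       // (nothing): a register, private to the thread
    partial[threadIdx.x] = v;
    __syncthreads();
    if (threadIdx.x == 0) {
      float s = 0;
      for (int j = 0; j < blockDim.x; ++j) s += partial[j];
      atomicAdd(&total, s);
    }
  }

  int main() {
    const int n = 1024;
    float *a;
    cudaMallocManaged(&a, n * sizeof(float));      // managed allocation from host code
    for (int i = 0; i < n; ++i) a[i] = 1.0f;
    total = 0;
    sumsq<<<(n + 255) / 256, 256>>>(a, n);
    cudaDeviceSynchronize();
    printf("sum of squares = %f\n", total);
    cudaFree(a);
  }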

11 Unionfs: Linux trick of the day

  1. aka overlay FS, translucent FS.

  2. If a, b are directories, and m is an empty directory, then

    unionfs -o cow a=RW:b m

    makes m a combination of a and b, with a having higher priority.

  3. Writing a file into m writes it in a.

  4. Changing a file in b writes the new version into a.

  5. Deleting a file in b causes a white-out note to be stored in a.

  6. Unmount it thus:

    fusermount -u m

  7. None of this requires superuser.

  8. Application: making a read-only directory into a read-write directory.

  9. Note: IBM had a commercial version of this idea in its CP/CMS OS in the 1960s.

12 Stanford's parallel course notes

  1. On parallel.ecse in /parclass/2022/files/stanford/

  2. Very well done.

  3. Duplicative of the nvidia course, so I'm not spending time on them this year.

  4. However, worth your time to scan them.

13 Thrust

Next topic.

PAR Class 13, Thu 2022-02-24

PAR Class 12, Tue 2022-02-22

1 Nvidia GPU and accelerated computing, part 6

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

Do Module 12 – Floating-Point Considerations: Lecture 12.2 - Numerical Stability.

Then start at Module 14 – Efficient Host-Device Data Transfer: Lecture 14.1 - Pinned Host Memory
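
As a preview of what that lecture covers: pinned (page-locked) host memory lets the GPU DMA straight from host RAM, and is needed for genuinely asynchronous copies. A minimal sketch of the API, my own illustration rather than anything from the kit:

  // Pinned host memory plus an asynchronous copy. Error checking omitted.
  #include <cstdio>

  int main() {
    const int n = 1 << 20;
    float *h, *d;
    cudaMallocHost(&h, n * sizeof(float));   // pinned (page-locked) host memory
    cudaMalloc(&d, n * sizeof(float));       // ordinary device memory
    for (int i = 0; i < n; i++) h[i] = i;

    cudaStream_t s;
    cudaStreamCreate(&s);
    // Because h is pinned, this copy can run asynchronously in stream s.
    cudaMemcpyAsync(d, h, n * sizeof(float), cudaMemcpyHostToDevice, s);
    cudaStreamSynchronize(s);                // wait for the copy to finish

    cudaStreamDestroy(s);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
  }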

PAR Homework 5, Mon 2022-02-14

Rules

  1. Due next Tues 2022-02-22.

  2. Submit the answers to Gradescope.

  3. You may do homeworks in teams of 2 students. Create a gradescope team and make one submission with both names.

Question

  1. Use the ideas we've seen to implement a parallel text histogram program, both on the multicore CPU and on the manycore GPU. Do it with and without private subtotal arrays.

Use current tools, like managed memory on the device.

Compare the performance processing a big file. Separate the time to read the data into memory from the time to process it.

Try to use debugging tools to find hotspots.

To reduce file reading time, before starting your program, put the data file into the in-core filesystem in /dev/shm/, aka /ds .

  1. Do Chapter 9 of the free online Nvidia text, Question 4.

  2. Do Chapter 9, Question 5.

PAR Class 10, Mon 2022-02-14

1 Nvidia GPU and accelerated computing, part 4

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

Continuing from Module 7 - Parallel Computation Patterns (Histogram)

Read Chapters 5 to 9.

PAR Class 9, Thu 2022-02-10

1 Nvidia GPU and accelerated computing, part 3

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

Continuing from Lecture 3.5 – CUDA Parallelism Model.

Read Chapter 3:

Module 3 - CUDA Parallelism Model/EBook Chapters/3rd-Edition-Chapter03-scalable-parallel-execution.pdf

Exercise from p66:

Assume that a CUDA device allows up to 8 blocks and 1024 threads per SM, whichever becomes a limitation first. Furthermore, it allows up to 512 threads in each block. For image blur, should we use 8 × 8, 16 × 16, or 32 × 32 thread blocks?
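
One way to reason about it, using only the limits stated above (this is my sketch of the argument, not the book's wording):

  8x8 = 64 threads/block: the 8-block cap limits an SM to 8 x 64 = 512 threads, only half of the 1024 allowed.
  16x16 = 256 threads/block: 4 blocks give 1024 threads per SM, within both the block and the thread limits.
  32x32 = 1024 threads/block: exceeds the 512 threads-per-block limit, so it can't even launch.

So 16 x 16 keeps the SMs fullest.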

2 Lecture Tech Limitations

  1. I want to project stuff and run programs during the class. Maybe even record the lectures.

  2. If I project my ipad, I can project anything on its screen, like goodnotes and firefox. However, it's hard to ssh to parallel from it, and webex doesn't want to record its screen. The ipad doesn't run a full multiprocessing OS, perhaps to avoid taking customers from the macbook.

  3. If I project my linux thinkpad x12, I can also project anything on its screen, including an ssh session to parallel. However, the projector forces the thinkpad's display to a different resolution, which the touchscreen doesn't understand: where I touch the screen with my stylus is not where the mark appears on the display. That makes writing legibly impossible.

    Webex can also broadcast and (usually) record the x12 screen.

  4. When lecturing from home, I use a p73 thinkpad with an NVidia GPU that can remotely display the ipad as another window with uxplay. Then I write on the ipad, and firefox and ssh from the thinkpad, while webex broadcasts the thinkpad. That all works until something crashes (usually in the middle of a lecture).

  5. Uxplay doesn't work on the x12 probably because it has integrated graphics.

  6. A long time ago, RPI's classrooms had blackboards on every available wall, so the prof could move from blackboard to blackboard. The only tech problem was that the chalk was messy. Also, recording consisted of the students writing in their notebooks.

  7. At one time, the classrooms had transparency projectors with rolls of plastic. We wrote on them and rolled them up. For several years, I had the TAs xerox the marked up roll so I had a copy.

  8. At another time I wrote on individual plastic sheets on the transparency projector. Those were easier to xerox.

  9. I'm still thinking of other solutions. Suggestions are welcome.

PAR Class 8, Mon 2022-02-07

1 Nvidia GPU and accelerated computing

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

Continuing from Module 2 - Introduction to CUDA C/Slides/Lecture-2-2-cuda-data-allocation-API.pdf

There is more information in the included book chapter.

There are some version skew problems with some of the debugging examples.

2 Lab

Part of the class will be a chance for you to play with some examples, some from a Stanford course. I'll walk around and help.

/parclass/2022/files/stanford/tutorials

PAR Homework 4, Mon 2022-02-07

Rules

  1. Due next Mon 2022-02-14

  2. Submit the answers to Gradescope.

  3. You may do homeworks in teams of 2 students. Create a gradescope team and make one submission with both names.

Questions

Do all 9 exercises at the end of chapter 2 of

Module 2 - Introduction to CUDA C/3rd-Edition-Chapter02-data-parallel-computing.pdf

They're in gradescope.

PAR Class 7, Thu 2022-02-03

1 OpenACC concluded

I recommend OpenACC for Programmers: Concepts and Strategies by Sunita Chandrasekaran and Guido Juckeland (Informit, Amazon, Github).

Chapter 4 is online for free as PDF and html.

2 Nvidia GPU and accelerated computing

This is from https://developer.nvidia.com/teaching-kits-downloads

My local copy of what I'm using is in /parclass/2022/files/nvidia/GPU-Teaching-Kit-Accelerated-Computing.zip

We'll spend several classes on this topic, speed reading through the slides.

3 Archivemount

This is today's programmer productivity tip, unrelated to parallel computing but generally useful.

Rather than explicitly extracting large zip archives or tarballs in order to read files in them, I use archivemount to create a virtual file system. This saves disk space. It doesn't stress git as much (fewer files). When reading, the I/O time is only insignificantly increased. For some formats, you can even write. It's also easier to be confident that one zip file hasn't been changed than that a directory of perhaps hundreds of files hasn't.

PAR Class 6, Mon 2022-01-31

1 Broadcasting the class?

Would you like to point my phone at me lecturing and attempt broadcast and record it on webex?

2 Parallel.ecse, ctd

We'll continue the lab to help you get started.

3 OpenMP, ctd

  1. To compile foo.cc into foo, you can say make foo .

  2. Envars, e.g.

    OMP_NUM_THREADS=2

    See https://hpc-tutorials.llnl.gov/openmp/env_vars/

  3. My file common has useful stuff.

  4. ulimit -s 100000000 can be useful.

    In paged virtual memory, a big stack limit costs little until the pages are actually touched, so stacks are more flexible than they appear.

    Dunno if OMP_STACKSIZE does anything.

  5. OpenMP on GPU:

    https://www.nersc.gov/users/training/events/openmp-device-offload-sept-2021/

  6. I've just given you an intro.

4 OpenACC

  1. Newer than OpenMP.

  2. More abstract; the compiler tries to determine what to parallelize (see the sketch after this list).

  3. Your algorithm needs to be parallelizable. That's the hardest part.

  4. https://www.openacc.org/

  5. I'll go quickly through the tutorials from https://www.openacc.org/events/openacc-online-course-2018 .

    You can read more on your own.
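
To give a taste before the tutorials, here is a minimal OpenACC sketch. It's my illustration (the file name and sizes are made up); on parallel you'd compile it with something like nvc++ -acc -Minfo=accel saxpy.cc.

  // saxpy.cc: ask the compiler to parallelize one loop, possibly on the GPU.
  #include <cstdio>
  #include <vector>

  int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;
    float *xp = x.data(), *yp = y.data();   // OpenACC data clauses want plain arrays

    #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
    for (int i = 0; i < n; i++)
      yp[i] = a * xp[i] + yp[i];

    printf("y[0] = %f\n", yp[0]);   // expect 5
    return 0;
  }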

5 My teaching strategy

Leave you wishing that I'd taught you more.

PAR Homework 3, Thu 2022-01-27

Rules

  1. Due next Thu 2022-02-03

  2. Submit the answers to Gradescope.

  3. You may do homeworks in teams of 2 students. Create a gradescope team and make one submission with both names.

Questions

  1. (10 pts) Investigate the effects of varying numbers of threads on elapsed time and errors. Test sum, sum_atomic, sum_crit, sum_reduc with 1, 2, 8, 16, 32, 64 threads. Make appropriate mods so that the programs run slowly enough for you to measure the elapsed times.

  2. (10 pts) Look at https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-user-guide/index.html#openmp-use

    Try to modify sum_atomic and sum_crit to run on the GPU (using OpenMP). Full points for a good try, whether or not you succeed.

  3. (5 pts) Why is matmul4 segfaulting on n=1000? Try to fix it.

Total: 25

PAR Class 4, Mon 2022-01-24

1 LLNL tutorial, ctd

https://hpc.llnl.gov/documentation/tutorials/introduction-parallel-computing-tutorial

Starting at SPMD and MPMD. You can study the rest on your own. We'll see what it teaches by example.

2 parallel.ecse

  1. The full name is parallel.ecse.rpi.edu .

  2. I've set up accounts for you.

  3. Your user name is your RCSID. (However your UID is unrelated to your RCS UID).

  4. Its accounts are completely separate from RCS; I will use the same userid for convenience. Changing your password on parallel does not affect your RCS password, and vv.

  5. In class, I'll tell you how I created the passwords. Change yours today or I'll disable your account.

  6. From a linux machine, connect with ssh. I recommend setting up a key pair.

  7. From outside RPI, you need to use the vpn.

  8. Your default shell is bash. Feel free to change it. I use zsh. If changing your shell, look at these last lines of /etc/bash.bashrc and adapt them accordingly:

    # nvidia; from https://docs.nvidia.com/hpc-sdk/hpc-sdk-install-guide/index.html
    NVARCH=`uname -s`_`uname -m`; export NVARCH
    NVCOMPILERS=/opt/nvidia/hpc_sdk
    MANPATH=$MANPATH:$NVCOMPILERS/$NVARCH/22.1/compilers/man
    PATH=$PATH:$NVCOMPILERS/$NVARCH/22.1/compilers/bin
    CUDA_PATH=$NVCOMPILERS/$NVARCH/22.1/cuda
    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_PATH/lib64
    CUDA_INCLUDE=/opt/nvidia/hpc_sdk/Linux_x86_64/22.1/math_libs/11.5/targets/x86_64-linux/include
  9. Files used in the course will generally be put into /parclass2022/files/ .

  10. Play with your account, e.g., compiling a small program.

  11. Use NVIDIA's C++ compiler nvc++. You're welcome to download it from NVIDIA and install it on your own machine. That requires a free NVIDIA developer account.

  12. nvidia-smi will describe the GPUs.

3 OpenMP

  1. Versions: OpenMP is a living project that is updated every few years.

    1. There is a lot of free online documentation, examples, and tutorials.

    2. New features are regularly added, such as support for GPU backends.

    3. However, the documentation and implementations lag behind.

    4. I'll use obsolete documentation when it's good enough. It's all still correct and fine for most applications.

    5. Then I'll present some recent additions.

  2. We'll see some examples running on parallel, at /parclass2022/files/openmp/rpi

  3. We saw that running even a short OpenMP program could be unpredictable. The reason for that was a big lesson.

    There are no guarantees about the scheduling of the various threads. There are no guarantees about fairness of resource allocation between the threads. Perhaps thread 0 finishes before thread 1 starts. Perhaps thread 1 finishes before thread 2 starts. Perhaps thread 0 executes 3 machine instructions, then thread 1 executes 1 instruction, then thread 0 executes 100 more, etc. Each time the process runs, something different may happen. One thing may happen almost all the time, to trick you.

    This happened during the first space shuttle launch in 1981. A 1-in-70 chance event prevented the computers from synchronizing. This had been observed once before during testing. However, when they couldn't get it to happen again, they ignored it.

    This interlacing of different threads can happen at the machine instruction level. The C statement K=K+3 can translate into the machine instructions

    LOAD K; ADD 3; STORE K.

    Label the two threads T1 and T2.

    If the threads interlace thus:

    T1:LOAD K; T1:ADD 3; T1:STORE K; T2:LOAD K; T2:ADD 3; T2:STORE K

    then K increases by 6. If they interlace thus:

    T1:LOAD K; T1:ADD 3; T2:LOAD K; T2:ADD 3; T1:STORE K; T2:STORE K

    then K increases by only 3.
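
    Here is a minimal sketch that usually exposes this race (my illustration, not one of the class examples; the file name is made up). The unprotected update typically comes out wrong, and differently on each run:

      // race.cc: compile with g++ -fopenmp race.cc, then run it a few times.
      #include <cstdio>

      int main() {
        long k = 0;
        #pragma omp parallel for
        for (long i = 0; i < 10000000; i++)
          k = k + 3;            // unprotected read-modify-write: a data race
        printf("k = %ld (should be 30000000)\n", k);
        return 0;
      }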

  4. This can be really hard to debug, particularly when you don't know where this is happening.

    One solution is to serialize the offending code, so that only one thread executes it at a time. The limit would be to serialize the whole program and not to use parallel processing.

  5. OpenMP has several mechanisms to help.

  6. A critical pragma serializes the following block. There are two considerations.

    1. You lose the benefits of parallelization on that block.

    2. There is an overhead to set up a critical block that I estimate might be 100,000 instructions.

  7. An atomic pragma serializes one short C statement, which must be one of a small set of allowed operations, such as increment. It matches certain atomic machine instructions, like test-and-set. The overhead is much smaller.

  8. If every thread is summing into a common total variable, the reduction pragma causes each thread to sum into a private subtotal, and then sums the subtotals at the end. This is very fast; see the sketch below.
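
    A minimal sketch contrasting the three mechanisms on the same sum (my illustration, not necessarily identical to the class examples):

      #include <cstdio>

      int main() {
        const long n = 1000000;
        long sum1 = 0, sum2 = 0, sum3 = 0;

        #pragma omp parallel for
        for (long i = 1; i <= n; i++) {
          #pragma omp critical          // serializes the block: correct but slow
          sum1 += i * i;
        }

        #pragma omp parallel for
        for (long i = 1; i <= n; i++) {
          #pragma omp atomic            // one atomic update: correct, much cheaper
          sum2 += i * i;
        }

        #pragma omp parallel for reduction(+ : sum3)   // private subtotals: fastest
        for (long i = 1; i <= n; i++)
          sum3 += i * i;

        const long exact = n * (n + 1) * (2 * n + 1) / 6;   // the check from the next item
        printf("%ld %ld %ld exact %ld\n", sum1, sum2, sum3, exact);
        return 0;
      }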

  9. Another lesson is that sometimes you can check your program's correctness with an independent computation. For the trivial example of summing \(i^2\), use the formula

    \(\sum_{i=1}^N i^2 = \frac{N(N+1)(2N+1)}{6}\).

    There is a lot of useful algebra if you have the time to learn it. I, ahem, learned this formula in high school.

  10. Another lesson is that even when the program gives the same answer every time, it might still be consistently wrong.

  11. Another is that just including OpenMP facilities, like -fopenmp, into your program slows it down even if you don't use them.

  12. Another is that the only meaningful time metric is elapsed real time. One reason that CPU time is meaningless is that OpenMP sometimes pauses a thread's progress by making it spin in a CPU-bound loop. (That is also a common technique in HW.)

  13. Also note that CPU times can vary considerably with successive executions.

  14. Also, using too many threads might increase the real time.

  15. Finally, floating point numbers have their own problems. They are an approximation of the mathematical concept called the real number field. That is defined by 11 axioms that state obvious properties, like

    A+B=B+A (commutativity) and

    A+(B+C)=(A+B)+C (associativity).

    This is covered in courses on modern algebra or abstract algebra.

    The problem is that most of these are violated, at least somewhat, with floats. E.g.,

    \(\left(10^{20}-10^{20}\right)+1=0+1=1\) but

    \(10^{20}+\left(-10^{20}+1\right)=10^{20}-10^{20}=0\).

    Therefore when threads execute in a different order, floating results might be different.

    There is no perfect solution, though using double is a start.

    1. On modern CPUs, double is just as fast as float.

    2. However it's slower on GPUs.

      How much slower can vary. Nvidia's Maxwell line spent very little real estate on double precision, so it was very slow. In contrast, Maxwell added a half-precision float, for implementing neural nets. Apparently neural nets use very few significant digits.

      Nvidia's newer Pascal line reverted this design choice and spends more real estate on implementing double. parallel.ecse's GPU is Pascal.

      Nvidia's Volta generation also does doubles well.

    3. It also requires moving twice the data, and data movement is often more important than CPU time.

    The large field of numerical analysis is devoted to finding solutions, with more robust and stable algorithms.

    Summing an array by first sorting it, and then adding the elements in order of increasing absolute value, is one technique. Inverting a matrix by pivoting on the absolutely largest element, instead of on \(a_{11}\), is another.
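
    A two-line demonstration of that non-associativity (my sketch; the exact output can vary by platform, but with IEEE doubles it typically prints 1 and then 0):

      #include <cstdio>

      int main() {
        double a = 1e20, b = -1e20, c = 1;
        printf("%g %g\n", (a + b) + c, a + (b + c));   // typically 1 0
        return 0;
      }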

  16. Now that we've seen some examples, look at the LLNL tutorial. https://hpc-tutorials.llnl.gov/openmp/

  17. Another nice, albeit obsolescent and incomplete, ref: Wikibooks OpenMP.

  18. Its tasks examples are interesting.

  19. OpenMP tasks

    1. While inside a pragma parallel, you queue up lots of tasks - they're executed as threads are available.

    2. My example is tasks.cc with some variants.

    3. taskfib.cc is modified from an example in the OpenMP spec.

      1. It shows how tasks can recursively spawn more tasks.

      2. This is only an example; you would never implement fibonacci that way. (The fastest way is the following closed form. In spite of the sqrts, the expression always gives an integer. )

        \(F_n = \frac{\left( \frac{1+\sqrt{5}}{2} \right) ^n - \left( \frac{1-\sqrt{5}}{2} \right) ^n }{\sqrt{5}}\)

      3. Spawning a task is expensive; we can calculate the cost.
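
    Here's a minimal sketch of the task pattern itself (my illustration, not tasks.cc or taskfib.cc): one thread queues the tasks inside a single region, and whichever threads are free execute them.

      #include <cstdio>
      #include <omp.h>

      int main() {
        #pragma omp parallel
        #pragma omp single                    // one thread creates the tasks...
        for (int i = 0; i < 8; i++) {
          #pragma omp task firstprivate(i)    // ...and any free thread may run each one
          printf("task %d run by thread %d\n", i, omp_get_thread_num());
        }
        // all tasks complete by the implicit barrier ending the parallel region
        return 0;
      }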

  20. omps.cc - read assorted OMP variables.
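
    I won't reproduce omps.cc here, but a sketch of the kind of queries it makes (all standard OpenMP library calls) is:

      #include <cstdio>
      #include <omp.h>

      int main() {
        printf("max threads:       %d\n", omp_get_max_threads());
        printf("processors:        %d\n", omp_get_num_procs());
        printf("max active levels: %d\n", omp_get_max_active_levels());
        #pragma omp parallel
        #pragma omp single
        printf("threads in this region: %d\n", omp_get_num_threads());
        return 0;
      }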

  21. Since I/O is not thread-safe, in hello_single_barrier.cc it has to be protected by a critical and also followed by a barrier.

  22. OpenMP sections - my example is sections.cc with some variants.

    1. Note my cool way to print an expression's name followed by its value.

    2. Note the 3 required levels of pragmas: parallel, sections, section.

    3. The assignment of sections to threads is nondeterministic.

    4. IMNSHO, OpenMP is considerably easier than pthreads, fork, etc.
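
    A minimal sketch of those three levels of pragmas (my illustration, not sections.cc itself):

      #include <cstdio>
      #include <omp.h>

      int main() {
        #pragma omp parallel                  // level 1: create the team
        {
          #pragma omp sections                // level 2: the group of sections
          {
            #pragma omp section               // level 3: one independent piece
            printf("section A on thread %d\n", omp_get_thread_num());

            #pragma omp section
            printf("section B on thread %d\n", omp_get_thread_num());
          }
        }
        return 0;
      }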

  23. http://stackoverflow.com/questions/13788638/difference-between-section-and-task-openmp

  24. OpenMP is weak on compiling for GPUs. So....

PAR Homework 2, Thu 2022-01-20

Rules

  1. Due next Thu 2022-01-27.

  2. Submit the answers to Gradescope.

  3. You may do homeworks in teams of 2 students. Create a gradescope team and make one submission with both names.

  4. Short but informative answers are fine.

Questions

Each is 5 points.

  1. Why is one CUDA core less powerful than one Intel Xeon core?

  2. If 10% of a program cannot be parallelized, then what is the max speedup even with infinite parallelism?

  3. "For short running parallel programs, there can actually be a decrease in performance compared to a similar serial implementation.". Why?

  4. What is the difference between strong scaling and weak scaling?

  5. What is cache coherency?

  6. Define "embarrassingly parallel".

  7. How many CUDA cores does the RTX 8000 have?

  8. Why have machine cycle speeds stopped increasing?

  9. Give an advantage and a disadvantage of shared memory versus distributed memory for parallel computation.

  10. Although I am talking about parallel computing on NVIDIA, the AMD Radeon is a possible competitor. Write a paragraph (say, 100 words) comparing the 2.

Total: 50 pts.

PAR Class 1, Mon 2022-01-10

Table of contents

1 Material

  1. Read the syllabus.

  2. Read https://computing.llnl.gov/tutorials/parallel_comp/ for an intro to parallel computing.

  3. Some points:

    1. Parallel computing is decades old; there were commercial machines in the 1980s. I directed two PhD theses in parallel geometry then. However, at that time, clock speeds were still increasing, so serial machines were more interesting.

    2. Now: physics limits to processor speed.

    3. History of Nvidia.

      1. Curtis Priem had designed graphics HW for both IBM and Sun Microsystems. (For awhile Sun was THE Unix workstation company. They used open standards and had the best price / performance.)

      2. Nvidia designed gaming graphics accelerators...

      3. that just happened to be parallel coprocessors...

      4. that started to be used for nongraphics parallel processing because of their value.

      5. Nvidia noticed that and added more capability, e.g., double precision IEEE floats, to serve that market.

      6. Currently, some of the highest performance Nvidia boards cannot even do graphics because they don't have video out ports.

    4. Intel CPUs vs Nvidia CUDA cores.

    5. Advantages and disadvantages of shared memory.

    6. OpenMP vs CUDA.

    7. Rate-limiting cost usually I/O not computation.

2 Homework 1

is online, due next Mon.

PAR Homework 1, Mon 2022-01-10

Student talks - 1st set

  1. (10 points) Pick two topics from this list.

    1. Describe the fastest 2 computers on the top500 list https://www.top500.org/lists/top500/2020/11/

    2. Describe computers 3 and 4 on top500.

    3. Describe computers 5 and 6 on top500.

    4. Describe the Blue Gene.

    5. Summarize python's parallel capabilities.

    6. Summarize Matlab's parallel capabilities.

    7. Summarize Mathematica's parallel capabilities.

    8. Summarize MPI.

    9. Describe an application where physics needs/uses parallel computing.

    10. Describe an application where astronomy needs/uses parallel computing.

    11. Describe an application where biology needs/uses parallel computing.

    12. Give an application where machine learning needs/uses parallel computing.

    13. Give an application where computational fluid dynamics needs/uses parallel computing.

    14. Summarize OpenACC.

    15. Summarize pthreads.

    16. Other related topic of your choice.

  2. Email me with your choices.

  3. I'll assign you your first topic that wasn't already taken.

  4. Two students may work together to present one talk.

  5. Prepare and give a 5-10 minute talk next Thu 1/20. We'll spread the talks out evenly. A team would jointly give one 3-5 minute talk.

    I won't block your mike at 10 minutes, but will leave it to you to be responsible. There are 16 students in class. The class is 80 minutes long. If necessary, we'll run over to next Mon.

PAR Syllabus

1 Catalog info

Title

ECSE-4740-01 Applied Parallel Computing for Engineers

Semesters

Spring term annually

Credits

3 credit hours

Time and place

Mon and Thurs noon-1:20pm. Online for the 1st 2 weeks, then in class

2 Description

  1. This is intended to be a computer engineering course to provide students with knowledge and hands-on experience in developing applications software for affordable parallel processors. This course will mostly cover hardware that any lab can afford to purchase. It will cover the software that, in the prof's opinion, is the most useful. There will also be some theory.

  2. The target audiences are ECSE seniors and grads and others with comparable background who wish to develop parallel software.

  3. This course will have minimal overlap with parallel courses in Computer Science. We will not teach the IBM BlueGene, because it is so expensive, nor cloud computing and MPI, because most big data problems are in fact small enough to fit on our hardware.

  4. You may usefully take all the parallel courses at RPI.

  5. The unique features of this course are as follows:

    1. For 2/3 of the course, use of only affordable hardware that any lab might purchase, such as Nvidia GPUs. This is currently the most widely used and least expensive parallel platform.

    2. Emphasis on learning several programming packages, at the expense of theory. However you will learn a lot about parallel architecture.

  6. Hardware taught, with reasons:

    Multicore Intel Xeon

    universally available and inexpensive, comparatively easy to program, powerful

    Nvidia GPU accelerator

    widely available (Nvidia external graphics processors are on 1/3 of all PCs), very inexpensive, powerful, but harder to program. Good cards cost only a few hundred dollars.

    Quantum computers

    It's new and hot and might some day be useful.

  7. Software that might be taught, with reasons:

    OpenACC C++ extension

    widely used, easy to use if your algorithm is parallelizable, backend is primarily multicore Xeon but also GPUs.

    Thrust C++ functional programming library

    FP is nice, hides low level details, backend can be any major parallel platform.

    MATLAB, Mathematica

    easy to use parallelism for operations that they have implemented in parallel, etc.

    CUDA C++ extension and library for Nvidia

    low level access to Nvidia GPUs.

    nvc++

    Nvidia's parallel C++ compiler.

  8. The techniques learned here will also be applicable to larger parallel machines -- numbers 2, 3, 5, and 6 on the top 500 list use NVIDIA GPUs.

  9. Effectively programming these processors will require in-depth knowledge about parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors.

3 Prerequisite

ECSE-2660 CANOS or equivalent, and knowledge of C++.

4 Professor


W. Randolph Franklin. BSc (Toronto), AM, PhD (Harvard)

Office

online only

Phone

+1 (518) 276-6077 (forwards)

Email

frankwr@YOUKNOWTHEDOMAIN or mailATwrfranklinDOTorg or wrfranklinATpmDOTme

Email is my preferred communication medium. I generally send email from my own domain.

Sending from non-RPI accounts is fine; I do that. But please show your name, at least in the comment field. A subject prefix of #Par is helpful. I support encrypted email, either through inline GPG or via protonmail.

We can also use webex or facetime.

Web

https://wrf.ecse.rpi.edu/

A quick way to get there is to google RPIWRF.

Office hours

At 3:20 pm after my probability lecture, usually as long as anyone wants to talk. Also by appointment.

Informal meetings

If you would like to talk with me, either individually or in a group, just mention it. We can then talk about most anything legal and ethical.

Why I'm teaching this course

I asked to create and teach it, because I like the topic.

5 Course websites

The homepage has lecture summaries, syllabus, homeworks, etc.

6 Reading material

6.1 Text

There is no required text, but the following inexpensive books may be used. I might mention others later.

  1. Sanders and Kandrot, CUDA by example. It gets excellent reviews, although it is several years old. Amazon has many options, including Kindle and renting hardcopies.

  2. Kirk and Hwu, 2nd edition, Programming massively parallel processors. It concentrates on CUDA.

One problem is that even recent books may be obsolete. For instance they may ignore the recent CUDA unified memory model, which simplifies CUDA programming at a performance cost. Even if the current edition of a book was published after unified memory was released, the author might not have updated the examples.

6.2 Web

There is a lot of free material on the web, which I'll reference class by class. Because web pages vanish so often (really!), I may cache some locally. We will start here:

https://hpc.llnl.gov/training/tutorials

7 Computer systems used

7.1 parallel.ecse

This course will primarily use (remotely via ssh) parallel.ecse.rpi.edu.

Parallel has:

  1. dual 14-core Intel Xeon E5-2660 2.0GHz

  2. 256GB of DDR4-2133 ECC Reg memory

  3. Nvidia GPUs:

    1. Quadro RTX 8000 with 48GB memory, 16 TFLOPS, and 4608 CUDA cores. This can do real time ray tracing.

    2. GeForce GTX 1080 with 8GB memory and 2560 CUDA cores.

  4. Samsung Pro 850 1TB SSD

  5. WD Red 6TB 6Gb/s hard drive

  6. CUDA 10.1

  7. OpenMP 4.5

  8. Thrust

  9. Ubuntu 21.10

Material for the class is stored in /parallel-class/ .

7.2 Amazon EC2

We might also use parallel virtual machines on the Amazon EC2. If so, you will be expected to establish an account. I expect the usage to be in the free category.

7.3 Gradescope

Gradescope will be used for you to submit homeworks and for us to distribute grades.

7.4 WebEx

to deliver lectures, to discuss, etc.

7.5 LMS

This is only for distributing the computed (total) grades. (Gradescope has no way to upload computed grades.)

7.6 Others

Delivering good remote courses is a work in progress. If I learn of a new tool that looks useful, I might use it.

8 Assessment measures, i.e., grades

  1. There will be no exams.

  2. Each student will make several in-class presentations summarizing some relevant topic.

  3. There will be a term project.

8.1 Homeworks

There will be some homeworks.

You may do homeworks in teams of up to 3 students. Create a gradescope team and make one submission with all the names.

You may switch teams from homework to homework.

8.2 Term project

  1. For the latter part of the course, most of your homework time will be spent on a term project.

  2. You are encouraged to do it in teams of any size. A team of 3 people would be expected to do twice as much work as 1 person.

  3. You may combine this with work for another course, provided that both courses know about this and agree. I always agree.

  4. If you are a grad student, you may combine this with your research, if your prof agrees, and you tell me.

  5. You may build on existing work, either your own or others'. You have to say what's new, and have the right to use the other work. E.g., using any GPLed code or any code on my website is automatically allowable (because of my Creative Commons licence).

  6. You will implement, demonstrate, and document something vaguely related to parallel computing.

  7. Deliverables:

    1. An implementation showing parallel computing.

    2. An extended abstract or paper on your project, written up like a paper. You should follow the style guide for some major conference (I don't care which, but can point you to one).

    3. A more detailed manual, showing how to use it.

    4. A 2-minute project proposal given to the class around the middle of the semester.

    5. A 10-minute project presentation and demo given to the class in the last week.

    6. Some progress reports.

    7. A write-up uploaded on the last class day. This will contain an academic paper, code and perhaps video or user manual.

  8. Size

    It's impossible to specify how many lines of code make a good term project. E.g., I take pride in writing code that can be simultaneously shorter, more robust, and faster than others'. See my 8-line program for testing whether a point is in a polygon: Pnpoly.

    According to Big Blues, when Bill Gates was collaborating with IBM around 1980, he once rewrote a code fragment to be shorter. However, according to the IBM metric, number of lines of code produced, he had just caused that unit to officially do negative work.

9 Remote class procedures

Parallel will start virtual and then switch to in-person when RPI tells me to. I intend to lecture live, with some supplementary videos. If the format changes unexpectedly, I'll email everyone.

Since this is a small class, students will be encouraged to ask questions, live and with chat during the class. I'll try to save the chat window, but it may be deleted after each class. If you cannot watch the class live, then you may miss any info presented in the chat window.

Try to share your video when talking.

If you would like, I can try to set up a webex teams space for discussions between class.

I will maintain a class blog that briefly summarizes each class. Important points will be written down.

I will attempt to record all virtual classes and upload them to mediasite for students to watch or download later. However, this fall, about 10% of classes failed to record properly.

Depending on the class composition, there might be graded in-class quizzes, since they increase learning. That means that I would like to do this; however, if it causes too much trouble for too many students then I won't.

10 Early warning system (EWS)

As required by the Provost, we may post notes about you to EWS, for example, if you're having trouble doing homeworks on time, or miss an exam. E.g., if you tell me that you had to miss a class because of family problems, then I may forward that information to the Dean of Students office.

11 Academic integrity

See the Student Handbook for the general policy. The summary is that students and faculty have to trust each other. After you graduate, your most important possession will be your reputation.

Specifics for this course are as follows.

  1. You may collaborate on homeworks, but each team of 1 or 2 people must write up the solution separately (one writeup per team) using their own words. We willingly give hints to anyone who asks.

  2. The penalty for two teams handing in identical work is a zero for both.

  3. You may collaborate in teams of up to 3 people for the term project.

  4. You may get help from anyone for the term project. You may build on a previous project, either your own or someone else's. However you must describe and acknowledge any other work you use, and have the other person's permission, which may be implicit. E.g., my web site gives a blanket permission to use it for nonprofit research or teaching. You must add something creative to the previous work. You must write up the project on your own.

  5. However, writing assistance from the Writing Center and similar sources is allowed, if you acknowledge it.

  6. The penalty for plagiarism is a zero grade.

  7. Cheating will be reported to the Dean of Students Office.

12 Other rules required by RPI

You've seen them in your other classes. Assume that they're included here.

13 Student feedback

Since it's my desire to give you the best possible course in a topic I enjoy teaching, I welcome feedback during (and after) the semester. You may tell me or write me, or contact a third party, such as Prof James Lu, the ECSE undergrad head, or Prof John Wen, the ECSE Dept head.

PAR Class 3, Thu 2022-01-20

1 The files accessible under the FILES button

now have a new naming scheme to make them sort better.

2 No class 2 video

Something went wrong; sorry.

3 Why I lecture from material on web instead of putting it into my own words

My value to you lies in selecting what web material is worth showing to you, not in rewording it. I go through a lot of stuff and select a small fraction of it. Also, presenting the original material instead of my version of it is more accurate.

4 Student presentations

  1. Reagan and Allan. Blue Gene

  2. Ruisi. Physics

  3. Ean and Tom. Top 2 computers

  4. Kevin. 5th and 6th computers

  5. Zach. Python's parallel capabilities

  6. Mark & Phu. Pthreads

  7. Adrian-James. Astronomy

  8. Cohen. 3rd and 4th computers

  9. Amanda. OpenACC

  10. Ivan. Portability issues in parallel designs

  11. Ryan. Machine learning

  12. Saaif. Gaming

PAR Class 2, Thu 2022-01-13

1 Note the files button in the top bar

In that dir:

class??.mp4

has videos of the remote classes.

transcript??.txt

generated by Webex for those videos

handwritten??.txt

my handwritten notes during the class

misc other files

used in class

2 LLNL parallel tutorial

We'll do it. https://computing.llnl.gov/tutorials/parallel_comp/

3 Material

  1. Think about these questions.

    1. Why have machine cycle speeds stopped increasing?

    2. What architectures do the top 3 machines on the Top 500 list use?

    3. Which one of the following 4 choices best describes most GPUs: SISD, SIMD, MISD, MIMD?

    4. Which one of the following 4 choices best describes most current multicore CPUs: SISD, SIMD, MISD, MIMD?

    5. Per Amdahl's law, if a program is 10% sequential and 90% parallelizable, what is the max speed up that can be obtained with an infinite number of parallel processors?

4 Recently obsoleted parallel tech

  1. Intel Xeon Phi

  2. IBM BlueGene

5 My research

I do parallel geometry algorithms on large problems for CAD and GIS. See my home page. If this is interesting, talk to me. Maybe you can do something that leads to a jointly-authored paper.