Spring 2001 Parallel and Distributed Processing (CMSC 483/691p)

Course Information

What's new?

May 15: HW 4 is assigned and is due on May 23, the last day of finals. It consists of problems 8.9, 9.11, 9.20, and 10.8 from your text, and problems M3, M4, and M5 that I have added, and is worth 30 points.

May 15: A simple MPI demo program and demo Makefile are available for MPI as installed on irix1.gl; the makefile shows how to link to the MPI libraries there. Try compiling the example program (bogey.c) and then running it with "mpirun -np 4 bogey". Note that the full path to mpirun on irix1.gl is

     /afs/umbc.edu/users/m/o/motteler/pub/mpi/bin/mpirun 
You can add the bin directory to your search path, make an alias, or just use the whole path if you prefer.

May 12: A simple Matlab viewer for n-body image sequences is now available.

May 8: I moved the MPI installation on irix1.gl to

     /afs/umbc.edu/users/m/o/motteler/pub/mpi .
The AFS protections should be set now for this to be world readable. In addition, I set up the MPI man pages to be served from
     http://asl.umbc.edu/pub/motteler/mpich/www/ .

What's old?

May 2: Project 3, the n-body problem with MPI, is assigned and is due on Tuesday May 22.

April 24: The due date for HW 3 has been extended to Tuesday May 1. Note that for question M1, we are assuming that all the messages are travelling a distance on the order of the longest path through the network; if every process were to send its message to (for example) just the neighbor one link away to the west, then the grid would not eventually become saturated as it was scaled up.

April 12: HW 3 is assigned, and is due on April 26. It consists of problems 6.1, 6.10, 6.12, and 7.3 from your text, and problems M1 and M2 that I have added, and is worth 25 points.

April 3: If you could not get your project 1 to work, you can fix it and resubmit it for another evaluation, with a maximum 20% late penalty; a more detailed example of m_fork threads for matrix multiplication may provide some useful hints.

April 3: The Matlab examples for Project 1 have been updated to include fwdtest.m and traintest.m, the procedures that I used in checking your first project. traintest.m is just the old nntest.m modified to call your project rather than the matlab trainer, while fwdtest.m compares your forward calculation with the Matlab forward results.

April 2: As announced in class last Thursday, the due date for project 2 has been extended to Thursday, April 5.

March 21: A short tutorial on pthreads by Andrae Muys includes some helpful examples.

March 21: To compile and link a program with pthreads under SGI IRIX, you need to specify the pthreads library, for example

      cc -o nnfwd nnfwd.c -lpthread
Giving the threads system-wide scope, as for example with
      pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
from the matrix sum example in your text, may not work on our IRIX systems; if you simply leave this line out, the example should work OK.

March 15: Some old course notes on message passing, for the undergraduate operating systems course, are now available online.

March 14: Project 2, neural nets with p-threads, is assigned, and is due on Tuesday, April 3.

March 12: The due date for HW 2 has been extended to Tuesday, March 27, the first Tuesday after spring break. Also, the late penalty for late undergrad projects is further reduced, to 3% per day.

March 9: Undergrads (those registered for 483, rather than 691P) will have a late penalty of 4% per day, rather than the usual 5%.

March 5: This week we will discuss implementation issues, covering material from Chapter 6 of your text, and data parallel programming, from Chapter 3. Next week we will begin distributed programming, starting with Chapter 7.

March 5: HW 2 is assigned, and is due on March 15. It consists of problems 3.7, 3.11, 4.4, 4.8, 4.11, 5.1, and 5.3, and is worth 25 points.

March 5: Note that the "help" message for the nninit.m example was not quite up to date; this has been fixed. The code itself has not been changed.

March 5: The programming examples for the Fall 1996 Parallel Processing course include an example of backprop training on the MasPar, a big SIMD machine. The example there is a bit more complicated than your project, as it is "production" code, and rather than using threads it does tiling on a SIMD processor array, but the underlying training algorithm is the same. In the MasPar example directory, a ".m" extension indicates MasPar rather than Matlab; the ".m" files there are written in an extension of C.

Feb 26: I have made some updates and added some fixes to the project description and Matlab demo procedures for Project 1. The main changes are that all ASCII vectors are saved as column vectors (previously the b vectors were columns, while the stats vectors were rows), and there is now some discussion of the Matlab procedures for initializing the weights and testing the training and forward calculation.

Feb 22: The Matlab examples for Project 1 have been updated to include nntrain.m, a Matlab demo of nntrain. The description of Project 1 has also been updated slightly; the main changes are: (1) nntrain no longer reads X and Y from the command line, instead it simply reads fixed file names "X" and "Y", (2) the mean and standard deviation of the training data are used to normalize both the training data and inputs to the forward calculation, and are saved along with the weight matrices and bias vectors, and (3) Matlab procedures are provided to initialize the weights and biases, calculate the mean and standard deviation of the training data, and to generate test data.

Feb 19: The Matlab examples for Project 1 have been updated to include nnfwd.m, a Matlab demo of nnfwd.c, and nntest1.m, a program to generate sample test data for nnfwd. I will put a similar demo for nntrain online shortly.

Feb 15: HW 1 is assigned, and is due on March 1. It consists of problems 1.12, 2.10, 2.12, 2.13, 2.15, 2.33, and 3.2, and is worth 20 points.

Feb 15: Project 1, neural nets with threads, is assigned, and is due on March 8.

Feb 14: The SGI threads examples work fine on all the campus SGI IRIX systems. If you don't know what sort of system you're on, "uname" will tell you which Unix you're running; this must be IRIX (of some sort) to use SGI threads. On the SGI systems "hinv" gives an overview of system hardware; irix1.gl.umbc.edu is a 14-processor Challenge XL, and irix2.gl.umbc.edu is a 2-processor Origin 200. The CS department has a 1-processor system, sgiserver1.cs.umbc.edu, and maybe some multi-processor systems I don't know about. I will test your SGI threads projects on an 8-processor Challenge.

Feb 13: Some old course notes for the undergraduate operating systems course and the simple SGI threads demos are now available online. Today we will discuss fairness, from section 2.8 of your text, and selected material from chapter 3 and the online notes. We will also discuss project 1, an SGI threads implementation of a feed-forward neural network.

Feb 6: We will cover material from Chapter 3, and from the online notes, this week, and also discuss the SGI m_fork threads library.

Feb 1: We will finish with most of Chapter 2 the first week; we will skip sections 2.6 and 2.7 on program verification, just for now, but will include section 2.8 on fairness.

Jan 30: Welcome to Parallel and Distributed Processing! Most course information--including the syllabus, assorted handouts, and project assignments--will be made available on these web pages.