M1. Suppose that every node in a network produces messages at
    a constant rate, and that each network link has some fixed
    capacity.  In class we discussed how, under these assumptions,
    a grid network does not scale up to arbitrarily large sizes,
    because the network can become ``saturated'' with messages.

    (a.) Explain why this happens.

    (b.) Can a hypercube become saturated in the same way?
         Why or why not?
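
    As a hint for thinking about both parts, one might sketch the
    standard bisection-bandwidth argument numerically.  The sketch
    below assumes uniform random traffic (so roughly half of all
    messages must cross the bisection) and one message per node per
    step; the function name and parameters are illustrative, not
    part of the problem statement.

    ```python
    def bisection_load(n_nodes, bisection_width):
        """Expected messages per bisection link per step, assuming
        uniform destinations: half of all messages cross over."""
        crossing = n_nodes / 2
        return crossing / bisection_width

    # k x k grid: n = k^2 nodes, bisection width = k links
    for k in [4, 8, 16, 32]:
        print(f"grid {k:2d}x{k:<2d}: load/link = {bisection_load(k * k, k):.1f}")

    # d-dimensional hypercube: n = 2^d nodes, bisection width = 2^(d-1) links
    for d in [4, 6, 8, 10]:
        n = 2 ** d
        print(f"cube d={d:2d}  : load/link = {bisection_load(n, n // 2):.1f}")
    ```

    Under these assumptions the grid's per-link load grows as k/2
    while the hypercube's stays constant, which is the quantity the
    fixed link capacity in the problem bounds.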



M2. Message-passing programs have an associated data-flow graph,
    defined by the pattern of messages, that reveals the program's
    underlying concurrency.  Do shared-memory programs have
    similar data-flow graphs?  If so, discuss how they might be
    extracted, for example by an optimizing compiler; if not,
    explain why not.
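
    One concrete way to approach this question is to note that a
    data-flow edge can, in principle, be recovered by linking each
    read of a shared location to the most recent write of that
    location.  The sketch below is a hypothetical illustration over
    a recorded trace of (event id, operation, variable) tuples; the
    trace format and function name are assumptions, not part of the
    assignment.

    ```python
    def dataflow_edges(trace):
        """Build read-after-write edges from a shared-memory trace.
        trace: list of (event_id, op, var) with op in {'r', 'w'}.
        Returns (writer_event, reader_event) pairs."""
        last_write = {}
        edges = []
        for eid, op, var in trace:
            if op == 'r' and var in last_write:
                edges.append((last_write[var], eid))
            elif op == 'w':
                last_write[var] = eid
        return edges

    trace = [(0, 'w', 'x'), (1, 'w', 'y'), (2, 'r', 'x'),
             (3, 'r', 'y'), (4, 'w', 'x'), (5, 'r', 'x')]
    print(dataflow_edges(trace))  # each edge plays the role a
                                  # message would in a message-
                                  # passing version of the program
    ```

    Whether such a graph is extractable statically, rather than
    from a single observed trace, is the crux of the question.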