David Dinh, Harsha Vardhan Simhadri, and Yuan Tang

Abstract: The nested parallel (a.k.a. fork-join) model is widely used for writing parallel programs. However, its two composition constructs, i.e., "$\parallel$" (parallel) and "$;$" (serial), are insufficient for expressing "partial dependencies" or "partial parallelism" in a program. We propose a new dataflow composition construct "$\leadsto$" to express partial dependencies in algorithms in a processor- and cache-oblivious way, thus extending the Nested Parallel (NP) model to the Nested Dataflow (ND) model. We redesign several divide-and-conquer algorithms, ranging from dense linear algebra to dynamic programming, in the ND model and prove that they all have optimal span while retaining optimal cache complexity. We propose the design of runtime schedulers that map ND programs to multicore processors with multiple levels of possibly shared caches (i.e., Parallel Memory Hierarchies) and provide theoretical guarantees on their ability to preserve locality and load balance. For this, we adapt space-bounded (SB) schedulers to the ND model. We show that our algorithms have increased "parallelizability" in the ND model, and that SB schedulers can exploit this extra parallelizability to achieve asymptotically optimal bounds on cache misses and running time on a greater number of processors than in the NP model. The running time for the algorithms in this paper is $O\!\left(\sum_{i=0}^{h-1} Q^{*}(t;\, \sigma \cdot M_i) \cdot C_i \,/\, P_h\right)$, where $Q^{*}(t;\, \sigma \cdot M_i)$ is the cache complexity of task $t$, $C_i$ is the cost of a cache miss at the level-$i$ cache, which is of size $M_i$, $\sigma \in (0,1)$ is a constant, and $P_h$ is the number of processors in an $h$-level cache hierarchy.
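To make the notion of "partial dependency" concrete, here is a minimal illustrative sketch (not taken from the paper; the task names and diamond-shaped dependency graph are invented for illustration). In pure fork-join composition, four blocked subtasks would be grouped into rounds separated by full barriers; by instead recording each subtask's individual predecessors, a task can start as soon as only the blocks it actually reads are finished, which is the kind of partial parallelism the dataflow construct expresses.

```python
# Hypothetical 2x2 blocked computation with a diamond dependency:
# B and C each depend only on A, and D depends on B and C.
# Fork-join composition would force a barrier after {A} and after {B, C};
# waiting on per-task predecessors lets B and C overlap with no global barrier.
from concurrent.futures import ThreadPoolExecutor

def block_work(name, deps):
    # Wait only on this block's own predecessors (the "partial dependency").
    for d in deps:
        d.result()
    return name

with ThreadPoolExecutor(max_workers=4) as ex:
    a = ex.submit(block_work, "A", [])
    b = ex.submit(block_work, "B", [a])      # B reads only A
    c = ex.submit(block_work, "C", [a])      # C reads only A
    d = ex.submit(block_work, "D", [b, c])   # D reads both B and C
    result = d.result()  # D runs after B and C, which themselves ran in parallel

print(result)
```

This sketch only mimics dataflow composition on top of futures; the paper's $\leadsto$ construct additionally keeps the program processor- and cache-oblivious, which a hand-managed dependency list does not.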
Abstract: The nested parallel (a.k.a. fork-join) model is widely used for writing parallel programs. However, the two composition constructs, i.e. “” (parallel) and “” (serial), are insufficient in expressing “partial dependencies” or “partial parallelism” in a program. We propose a new dataflow composition construct ``’’ to express partial dependencies in algorithms in a processor- and cache-oblivious way, thus extending the Nested Parallel (NP) model to the Nested Dataflow (ND) model. We redesign several divide-and-conquer algorithms ranging from dense linear algebra to dynamic-programming in the ND model and prove that they all have optimal span while retaining optimal cache complexity. We propose the design of runtime schedulers that map ND programs to multicore processors with multiple levels of possibly shared caches (i.e, Parallel Memory Hierarchies) and provide theoretical guarantees on their ability to preserve locality and load balance. For this, we adapt space-bounded (SB) schedulers for the ND model. We show that our algorithms have increased “parallelizability” in the ND model, and that SB schedulers can use the extra parallelizability to achieve asymptotically optimal bounds on cache misses and running time on a greater number of processors than in the NP model. The running time for the algorithms in this paper is , where is the cache complexity of task , is the cost of cache miss at level- cache which is of size , is a constant, and is the number of processors in an -level cache hierarchy.