Models and Scheduling Algorithms for Mixed Data and Task Parallel Programs

Chakrabarti, Soumen; Demmel, James; Yelick, Katherine (1997). Models and Scheduling Algorithms for Mixed Data and Task Parallel Programs. Journal of Parallel and Distributed Computing, 47 (2), pp. 168-184. ISSN 0743-7315


Official URL: http://doi.org/10.1006/jpdc.1997.1413

Abstract

An increasing number of scientific programs exhibit two forms of parallelism, often in a nested fashion. At the outer level, the application comprises coarse-grained task parallelism, with dependencies between tasks reflected by an acyclic graph. At the inner level, each node of the graph is a data-parallel operation on arrays. Designers of languages, compilers, and runtime systems are building mechanisms to support such applications by providing processor groups and array remapping capabilities. In this paper we explore how to supplement these mechanisms with policy. What properties of an application, its data size, and the parallel machine determine the maximum potential gains from using both kinds of parallelism? It turns out that large gains can be expected only for specific task graph structures. For such applications, what are practical and effective ways to allocate processors to the nodes of the task graph? In principle one could solve the NP-complete problem of finding the best possible allocation of arbitrary processor subsets to nodes in the task graph. Instead of this, our analysis and simulations show that a simple switched scheduling paradigm, which alternates between pure task and pure data parallelism, provides nearly optimal performance for the task graphs considered here. Furthermore, our scheme is much simpler to implement, has less overhead than the optimal allocation, and would be attractive even if the optimal allocation were free to compute. To evaluate switching in real applications, we implemented a switching task scheduler in the parallel numerical library ScaLAPACK and used it in a nonsymmetric eigenvalue program. Even for fairly large input sizes, the efficiency improves by factors of 1.5 on the Intel Paragon and 2.5 on the IBM SP-2. The remapping and scheduling overhead is negligible, between 0.5% and 5%.
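To make the switched scheduling idea concrete, the following is a minimal sketch in Python. It assumes a simple divisible-work cost model T(w, p) = w/p + c*p (compute time plus a per-processor overhead term) and a greedy, level-by-level choice between pure data parallelism and pure task parallelism; the cost model, the Node structure, and names such as switched_schedule are illustrative assumptions, not the paper's actual formulation or implementation.

# Sketch of a "switched" scheduler: for each set of ready nodes in the task
# graph, choose whichever pure strategy has the lower estimated time.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    work: float                                 # sequential work of the data-parallel operation
    deps: list = field(default_factory=list)    # names of predecessor nodes

def node_time(work: float, procs: int, overhead: float = 0.01) -> float:
    """Estimated time of one data-parallel node on `procs` processors (assumed model)."""
    return work / procs + overhead * procs

def switched_schedule(dag: dict, total_procs: int, overhead: float = 0.01) -> float:
    """Process the DAG level by level; for each ready set pick the cheaper of:
      - pure data parallelism: run ready nodes one after another, each on all processors
      - pure task parallelism: run ready nodes concurrently, each on an equal share
    Returns the estimated makespan."""
    done, makespan, remaining = set(), 0.0, dict(dag)
    while remaining:
        ready = [n for n in remaining.values() if all(d in done for d in n.deps)]
        if not ready:
            raise ValueError("cycle detected in task graph")
        # Pure data parallelism: sequential over nodes, full machine per node.
        t_data = sum(node_time(n.work, total_procs, overhead) for n in ready)
        # Pure task parallelism: all ready nodes at once, equal processor split.
        share = max(1, total_procs // len(ready))
        t_task = max(node_time(n.work, share, overhead) for n in ready)
        makespan += min(t_data, t_task)
        for n in ready:
            done.add(n.name)
            del remaining[n.name]
    return makespan

if __name__ == "__main__":
    # A small fork-join graph: one wide level of independent nodes between a
    # source and a sink, the kind of shape for which mixed parallelism pays off.
    dag = {
        "src": Node("src", work=100.0),
        **{f"leaf{i}": Node(f"leaf{i}", work=25.0, deps=["src"]) for i in range(8)},
        "sink": Node("sink", work=100.0, deps=[f"leaf{i}" for i in range(8)]),
    }
    print("estimated makespan:", switched_schedule(dag, total_procs=64))

In this toy example the wide level of eight independent nodes is cheaper to run task-parallel (eight groups of eight processors) than data-parallel (one node at a time on all 64 processors), while the serial source and sink nodes naturally fall back to pure data parallelism, which is the alternation the abstract describes.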

Item Type: Article
Source: Copyright of this article belongs to Elsevier B.V.
ID Code: 131006
Deposited On: 02 Dec 2022 06:06
Last Modified: 02 Dec 2022 06:06
