Hybrid MPI+UPC

From HPCRL Wiki
Revision as of 21:24, 5 May 2010 by Jim (Talk | contribs)

The Hybrid MPI+UPC programming model is a new approach to writing parallel programs that combines MPI and UPC in the same program. This model has several important benefits. It provides MPI programs with access to a large distributed shared global address space, with significant advantages over MPI-2 one-sided communication. It provides UPC programmers with an additional level of locality control that can be used to create multiple UPC spaces connected via MPI. Finally, it provides access to libraries written in either model, e.g. access to PETSc and ScaLAPACK for UPC programmers.

This page is intended to serve as a collection of early documentation on hybrid parallel programming using MPI and Unified Parallel C.



Hybrid Parallel Programming with MPI and Unified Parallel C (PDF)
James Dinan, Pavan Balaji, Ewing Lusk, P. Sadayappan, Rajeev Thakur.
Proc. 7th ACM Conf. on Computing Frontiers (CF). Bertinoro, Italy. May 17-19, 2010.

Building the Environment

Many hybrid MPI and UPC setups are possible. In our work, we have used the GCCUPC compiler with the Berkeley UPC (BUPC) runtime. The instructions here focus on building the hybrid setup on an InfiniBand cluster. MPI and the BUPC runtime must be built with a compiler that is ABI compatible with GCC (i.e., one that produces binaries that can be linked against GCC-compiled code). MPI does not need to be compiled with the UPC compiler; it only needs to be linkable by the UPC compiler.


The hybrid MPI+UPC environment is still in the proof-of-concept phase. We are working to improve the level of support and interoperability, but at present there are many caveats and challenges involved in setting up and using the model. These instructions are intended for hybrid MPI+UPC developers, not end users. Expect things to be broken right now.


GCCUPC Compiler

  1. Download GCCUPC [1]
  2. Untar the source code
  3. Make a separate build directory. The build should not be done in the source directory.
  4. Run configure from the build directory; no special options are needed
    1. Example: ../upc-<source-dir>/configure --prefix=$HOME/opt/gccupc --enable-languages=c,upc
  5. Build and install GCCUPC
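Taken together, the steps above might look like the following shell session (a sketch assuming a GNU-style out-of-tree build; the tarball and source-directory names are placeholders):

```shell
# Unpack the GCCUPC source (archive name is a placeholder)
tar xzf upc-gcc.tar.gz

# Make a separate build directory -- do not build in the source tree
mkdir gccupc-build && cd gccupc-build

# Configure from the build directory; no special options are needed
../upc-<source-dir>/configure --prefix=$HOME/opt/gccupc --enable-languages=c,upc

# Build and install
make && make install
```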

Berkeley UPC Runtime

  1. Download the BUPC runtime system [2]
  2. Configure the runtime with the following options:
    1. --with-multiconf=opt_gccupc Build with the GCCUPC translator (note: upc binary must be in your path at this point)
    2. --with-ibv-spawner=ssh --with-vapi-spawner=ssh --disable-mpi Prevent BUPC from using MPI, even in the bootstrap.
    3. --prefix=$HOME/opt Installation path
    4. --disable-aligned-segments May also be needed
  3. Build and install the BUPC runtime
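Combining the flags listed above, a BUPC runtime configure invocation might look like this (a sketch; the install prefix is a placeholder, and the GCCUPC upc binary must already be in your PATH):

```shell
# Configure the Berkeley UPC runtime for the GCCUPC translator,
# using the ssh spawner and keeping MPI out of the bootstrap
./configure --with-multiconf=opt_gccupc \
            --with-ibv-spawner=ssh --with-vapi-spawner=ssh --disable-mpi \
            --disable-aligned-segments \
            --prefix=$HOME/opt

# Build and install
make && make install
```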


MVAPICH

Build and install MVAPICH [3]. No special options are needed.

Hydra Process Manager

Hydra is needed for launching nested hybrid programs. Build and install the Hydra process manager; no special options are needed.


Writing Hybrid Codes

The Berkeley UPC and MPI runtimes are not fully interoperable, and mixing MPI and UPC communication can result in deadlock. Therefore, it is recommended to separate MPI and UPC communication phases with barriers.
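As a minimal sketch of this discipline (illustrative, not taken from the paper; it requires a hybrid toolchain such as GCCUPC + BUPC + MVAPICH to build), a hybrid code can fence each MPI phase with upc_barrier so that no UPC communication is in flight while MPI communicates:

```c
/* Sketch of a hybrid MPI+UPC program in which UPC and MPI
 * communication phases are separated by barriers. */
#include <upc.h>
#include <mpi.h>
#include <stdio.h>

shared int data[THREADS];    /* one element per UPC thread */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* --- UPC phase: each thread writes to the shared array --- */
    data[MYTHREAD] = MYTHREAD;
    upc_barrier;             /* complete all UPC communication ... */

    /* --- MPI phase: no UPC communication is in flight here --- */
    int rank, mine = data[MYTHREAD], sum;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Allreduce(&mine, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %d\n", sum);

    upc_barrier;             /* ... before resuming UPC communication */

    MPI_Finalize();
    return 0;
}
```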

Running Hybrid Codes

The UPC launcher does not always correctly propagate MPI's process-management (PMI) information in the environment. To work around this, copy the PMI variables from the UPC-level environment into the process environment before you initialize MPI:

/* Copy PMI settings from the BUPC environment so that MPI_Init
   can locate the process manager; guard against missing values. */
const char *pmi_port = bupc_getenv("PMI_PORT");
const char *pmi_id = bupc_getenv("PMI_ID");
if (pmi_port != NULL) setenv("PMI_PORT", pmi_port, 1);
if (pmi_id != NULL) setenv("PMI_ID", pmi_id, 1);
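With that workaround in place, launch commands might look like the following (a hypothetical sketch; the binary name and process counts are placeholders, and exact launcher flags depend on your installation):

```shell
# Flat hybrid run: a single UPC group in which every UPC thread
# is also an MPI rank
upcrun -n 16 ./hybrid_app

# Nested hybrid run: Hydra starts 4 MPI processes, each of which
# becomes a separate 8-thread UPC group
mpiexec.hydra -n 4 upcrun -n 8 ./hybrid_app
```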