Hybrid MPI+UPC
Latest revision as of 21:24, 5 May 2010
The Hybrid MPI+UPC programming model is a new approach to writing parallel programs that combines MPI and UPC in the same program. This model has several important benefits. It gives MPI programs access to a large distributed shared global address space, with significant advantages over MPI-2 one-sided communication. It gives UPC programmers an additional level of locality control that can be used to create multiple UPC spaces connected via MPI. Finally, it can provide access to libraries written in either model, e.g. access to PETSc and ScaLAPACK for UPC programmers.
This page is intended to serve as a collection of early documentation on hybrid parallel programming using MPI and Unified Parallel C.
Papers
Hybrid Parallel Programming with MPI and Unified Parallel C
James Dinan, Pavan Balaji, Ewing Lusk, P. Sadayappan, Rajeev Thakur.
Proc. 7th ACM Conf. on Computing Frontiers (CF). Bertinoro, Italy. May 17-19, 2010.
Building the Environment
Many hybrid MPI and UPC setups are possible. In our work, we have used the GCCUPC compiler with the Berkeley UPC (BUPC) runtime. The instructions here focus on building the hybrid setup on an InfiniBand cluster. MPI and the BUPC runtime must be built with a compiler that is ABI compatible with GCC (i.e., one that produces binaries that can be linked by GCC). MPI does not need to be compiled with the UPC compiler; it only needs to be linkable by the UPC compiler.
DISCLAIMER
The current state of the hybrid MPI+UPC environment is still in the proof-of-concept phase. We are working to improve the level of support and interoperability, but at present there are many caveats and challenges involved in setting up and using the model. These instructions are intended for hybrid MPI+UPC developers, not end users. Expect things to be broken right now.
GCCUPC
- Download GCCUPC [1]
- Untar the source code
- Make a separate build directory. The build should not be done in the source directory.
- Run configure from the build directory; no special options are needed
- Example: ../upc-4.2.3.6/configure --prefix=$HOME/opt/gccupc --enable-languages=c,upc
- Build and install GCCUPC
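The steps above might look like the following in practice. This is a sketch: the tarball name is inferred from the version in the configure example, and the install prefix is only an illustration.

```shell
# Out-of-tree build of GCCUPC (version and paths are assumptions;
# adjust to match your download and site conventions).
tar xzf upc-4.2.3.6.src.tar.gz
mkdir gccupc-build && cd gccupc-build
../upc-4.2.3.6/configure --prefix=$HOME/opt/gccupc --enable-languages=c,upc
make
make install
# Put the 'upc' binary on the PATH; the BUPC runtime build needs it later.
export PATH=$HOME/opt/gccupc/bin:$PATH
```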
Berkeley UPC Runtime
- Download the BUPC runtime system [2]
- Configure the runtime with the following options:
- --with-multiconf=opt_gccupc Build with the GCCUPC translator (note: upc binary must be in your path at this point)
- --with-ibv-spawner=ssh --with-vapi-spawner=ssh --disable-mpi Prevent BUPC from using MPI, even in the bootstrap.
- --prefix=$HOME/opt Installation path
- --disable-aligned-segments May also be needed
- Build and install the BUPC runtime
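Combining the options above, a configure invocation might look like the following. The source directory name and install prefix are illustrative; only the flags come from the list above.

```shell
# The GCCUPC 'upc' binary must be in $PATH before configuring
# (the install prefix here is an assumption).
export PATH=$HOME/opt/gccupc/bin:$PATH
cd berkeley_upc-2.x.x   # your extracted BUPC source directory
./configure --with-multiconf=opt_gccupc \
            --with-ibv-spawner=ssh --with-vapi-spawner=ssh --disable-mpi \
            --disable-aligned-segments \
            --prefix=$HOME/opt
make
make install
```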
MPI
Build and install MVAPICH [3]. No special options are needed.
Hydra Process Manager
Hydra is needed for launching nested hybrid programs. Build and install the Hydra process manager; no special options are needed.
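As an illustration, a nested hybrid launch might look like the following. The layout (2 UPC groups of 4 threads each) and the binary name are hypothetical, and the exact invocation depends on how your installation wires Hydra and upcrun together.

```shell
# Hypothetical nested launch: Hydra's mpiexec starts 2 processes,
# each of which upcrun expands into a 4-thread UPC group.
mpiexec -n 2 upcrun -n 4 ./hybrid_app
```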
Caveats
Writing Hybrid Codes
The Berkeley UPC and MPI runtimes are not fully compatible, and mixing MPI and UPC communication can result in deadlock. It is therefore recommended to separate MPI and UPC communication phases with barriers.
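A minimal UPC sketch of this phase discipline is shown below. The flat layout (each UPC thread is also an MPI rank) and the phase structure are illustrative assumptions; only the barrier-separation rule comes from the text above.

```c
/* Hybrid MPI+UPC sketch: keep MPI and UPC communication in
 * separate phases, fenced by barriers, so the two runtimes'
 * traffic never overlaps. Compile with the UPC compiler. */
#include <upc.h>
#include <mpi.h>

shared int data[THREADS];  /* one element with affinity to each UPC thread */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* --- UPC communication phase --- */
    data[MYTHREAD] = MYTHREAD;
    upc_barrier;                      /* all UPC traffic completes here */

    /* --- MPI communication phase --- */
    int local = data[MYTHREAD], sum = 0;
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);      /* fence before any further UPC traffic */

    MPI_Finalize();
    return 0;
}
```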
Running Hybrid Codes
The UPC launcher does not always propagate MPI's process management information in the environment correctly. To fix this, place the following lines in your code before you initialize MPI:
 setenv("PMI_PORT", bupc_getenv("PMI_PORT"), 1);
 setenv("PMI_ID", bupc_getenv("PMI_ID"), 1);
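In context, the fix sits at the very top of main, before MPI_Init. The sketch below adds NULL checks as a defensive measure (an addition to the snippet above, since setenv must not be passed a NULL value); bupc_getenv is the BUPC runtime's environment query.

```c
/* Sketch: re-export the process manager's PMI variables from the
 * UPC launcher's environment so MPI_Init can find them. */
#include <upc.h>
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    const char *port = bupc_getenv("PMI_PORT");
    const char *id   = bupc_getenv("PMI_ID");
    if (port) setenv("PMI_PORT", port, 1);  /* guard: skip if unset */
    if (id)   setenv("PMI_ID",   id,   1);

    MPI_Init(&argc, &argv);
    /* ... hybrid MPI+UPC code ... */
    MPI_Finalize();
    return 0;
}
```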