This FAQ is for Open MPI v4.x and earlier.
If you are looking for documentation for Open MPI v5.x and later, please visit docs.open-mpi.org.
Table of contents:
- I'm a sysadmin; what do I care about Open MPI?
- What hardware / software / run-time environments / networks
does Open MPI support?
- Do I need multiple Open MPI installations?
- What are MCA Parameters? Why would I set them?
- Do my users need to have their own installation of Open MPI?
- I have power users who will want to override my global MCA
parameters; is this possible?
- What MCA parameters should I, the system administrator, set?
- I just added a new plugin to my Open MPI installation; do I need to recompile all my MPI apps?
- I just upgraded my Myrinet|Infiniband network; do I need to
recompile all my MPI apps?
- We just upgraded our version of Open MPI; do I need to
recompile all my MPI apps?
- I have an MPI application compiled for another MPI; will it
work with Open MPI?
1. I'm a sysadmin; what do I care about Open MPI?
Several members of the Open MPI team have strong system
administrator backgrounds; we recognize the value of having software
that is friendly to system administrators. Here are some of the reasons
that Open MPI is attractive for system administrators:
- Simple, standards-based installation
- Reduction of the number of MPI installations
- Ability to set system-level and user-level parameters
- Scriptable information sources about the Open MPI installation
See the rest of the questions in this FAQ section for more details.
2. What hardware / software / run-time environments / networks
does Open MPI support?
See this FAQ
category for more information.
3. Do I need multiple Open MPI installations?
Yes and no.
Open MPI can handle a variety of different run-time environments
(e.g., rsh/ssh, Slurm, PBS, etc.) and a variety of different
interconnection networks (e.g., ethernet, Myrinet, Infiniband, etc.)
in a single installation. Specifically: because Open MPI is
fundamentally powered by a component architecture, plug-ins for all
these different run-time systems and interconnect networks can be
installed in a single installation tree. The relevant plug-ins will
only be used in the environments where they make sense.
Hence, there is no need to have one MPI installation for Myrinet, one
MPI installation for ethernet, one MPI installation for PBS, one MPI
installation for rsh, etc. Open MPI can handle all of these in a
single installation.
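You can see which plug-ins a given installation actually contains with
the ompi_info command. For example, to list the available BTL
(point-to-point network) components (a minimal sketch; the grep pattern
simply matches the "MCA btl:" lines in ompi_info's output):

    shell$ ompi_info | grep "MCA btl"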
However, there are some issues that Open MPI cannot solve. Binary
compatibility between different compilers is such an issue. Let's
examine this on a per-language basis (be sure to see the big caveat at
the end):
The big caveat to all of this is that Open MPI will only work with
different compilers if all the datatype sizes are the same. For
example, even though Open MPI supports all 4 name mangling schemes,
the size of the Fortran LOGICAL type may be 1 byte in some compilers
and 4 bytes in others. This will likely cause Open MPI to perform
unpredictably.
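If you want to verify this yourself, one quick check (a minimal sketch;
gfortran and ifort are only example compilers, and the one-line test
program is purely illustrative) is to print the size of the default
LOGICAL type with each compiler you intend to support:

    shell$ echo 'print *, storage_size(.true.) / 8; end' > logical_size.f90
    shell$ gfortran logical_size.f90 -o logical_size && ./logical_size
    shell$ ifort logical_size.f90 -o logical_size && ./logical_size

If the compilers print different values, a single Open MPI installation
cannot safely serve both of them.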
The bottom line is that Open MPI can support all manner of run-time
systems and interconnects in a single installation, but supporting
multiple compilers "sort of" works (i.e., is subject to trial and
error) in some cases, and definitely does not work in other cases.
There's unfortunately little that we can do about this — it's a
compiler compatibility issue, and one that compiler authors have
little incentive to resolve.
4. What are MCA Parameters? Why would I set them?
MCA parameters are a way to tweak Open MPI's behavior at
run-time. For example, MCA parameters can specify:
- Which interconnect networks to use
- Which interconnect networks not to use
- The size difference between eager sends and rendezvous protocol
sends
- How many registered buffers to pre-pin (e.g., for GM or mVAPI)
- The size of the pre-pinned registered buffers
- ...etc.
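As a quick illustration (a minimal sketch; the component names and
my_app are only placeholders), MCA parameters can also be passed
directly on the mpirun command line; this example restricts the job to
the listed BTL components:

    shell$ mpirun --mca btl self,vader,tcp -np 4 ./my_app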
It can be quite valuable for a system administrator to play with such
values a bit and find an "optimal" setting for a particular
operating environment. These values can then be set in a global text
file that all users will, by default, inherit when they run Open MPI
jobs.
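For instance, a system-wide defaults file might look like the following
(a minimal sketch; the parameter values are purely illustrative, not
recommendations):

    # $prefix/etc/openmpi-mca-params.conf
    # System-wide defaults inherited by every Open MPI job.
    # Warn about MPI handles that are not freed before MPI_FINALIZE:
    mpi_show_handle_leaks = 1
    # Illustrative tuning of the tcp BTL's eager fragment size:
    btl_tcp_eager_limit = 65536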
For example, say that you have a cluster with 2 ethernet networks —
one for NFS and other system-level operations, and one for MPI jobs.
The system administrator can tell Open MPI to not use the NFS TCP
network at a system level, such that when users invoke mpirun or
mpiexec to launch their jobs, they will automatically only be using
the network meant for MPI jobs.
See the run-time tuning FAQ
category for information on how to set global MCA parameters.
5. Do my users need to have their own installation of Open MPI?
Usually not. It is typically sufficient for a single Open MPI
installation (or perhaps a small number of Open MPI installations,
depending on compiler interoperability) to serve an entire parallel
operating environment.
Indeed, a system-wide Open MPI installation can be customized on a
per-user basis in two important ways:
- Per-user MCA parameters: Each user can set their own set of MCA
parameters, potentially overriding system-wide defaults.
- Per-user plug-ins: Users can install their own Open MPI
plug-ins under
$HOME/.openmpi/components. Hence, developers can
experiment with new components without destabilizing the rest of the
users on the system. Or power users can download 3rd party components
(perhaps even research-quality components) without affecting other users.
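To make both per-user mechanisms concrete, here is a minimal sketch (the
parameter value and the component file name are hypothetical):

    # $HOME/.openmpi/mca-params.conf -- per-user MCA parameters, which
    # take precedence over the system-wide file
    btl_tcp_if_exclude = lo,eth1

    # Per-user plug-ins simply live under this directory:
    shell$ ls $HOME/.openmpi/components/
    mca_btl_mynewnet.so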
6. I have power users who will want to override my global MCA
parameters; is this possible?
Absolutely.
See the run-time tuning FAQ
category for information on how to set MCA parameters, both at the
system level and on a per-user (or per-MPI-job) basis.
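For example (a minimal sketch; the parameter, value, and my_app are
illustrative), a user can override a system-wide default either on the
mpirun command line or through an environment variable:

    shell$ mpirun --mca btl_tcp_if_exclude lo -np 4 ./my_app
    shell$ export OMPI_MCA_btl_tcp_if_exclude=lo
    shell$ mpirun -np 4 ./my_app

Values given on the command line or in the environment take precedence
over both the per-user and the system-wide parameter files.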
7. What MCA parameters should I, the system administrator, set?
This is a difficult question and depends on both your specific
parallel setup and the applications that typically run there.
The best thing to do is to use the ompi_info command to see what
parameters are available and relevant to you. Specifically,
ompi_info can be used to show all the parameters that are available
for each plug-in. Two common places that system administrators like
to tweak are:
- Only allow specific networks: Say you have a cluster with a
high-speed interconnect (such as Myrinet or Infiniband) and an
ethernet network. The high-speed network is intended for MPI jobs;
the ethernet network is intended for NFS and other
administrative-level jobs. In this case, you can simply turn off Open
MPI's TCP support. The "btl" framework contains Open MPI's network
support; in this case, you want to disable the
tcp plug-in. You can
do this by adding the following line to the file
$prefix/etc/openmpi-mca-params.conf:

    btl = ^tcp

This tells Open MPI to load all BTL components except tcp.
Consider another example: your cluster has two TCP networks, one for
NFS and administration-level jobs, and another for MPI jobs. You can
tell Open MPI to ignore the TCP network used by NFS by adding the
following line in the file $prefix/etc/openmpi-mca-params.conf :
    btl_tcp_if_exclude = lo,eth0
The value of this parameter is the list of device names to exclude. In
this case, we're excluding lo (localhost, because Open MPI has its own
internal loopback device) and eth0.
- Tune the parameters for specific networks: Each network plug-in
has a variety of different tunable parameters. Use the
ompi_info
command to see what is available. You can show all available parameters
with:
    shell$ ompi_info --param all all
NOTE: Starting with Open MPI v1.8, ompi_info categorizes
its parameters in so-called levels, as defined by
the MPI_T interface. You will need to specify --level 9
(or --all) to show all MCA parameters. See
this blog entry
for further information.

    shell$ ompi_info --param all all --level 9

or

    shell$ ompi_info --all
Beware: there are many variables available. You can limit the
output by showing all the parameters in a specific framework or in a
specific plug-in with the command line parameters:
    shell$ ompi_info --param btl all --level 9
Shows all the parameters of all BTL components, and:
    shell$ ompi_info --param btl tcp --level 9
Shows all the parameters of just the tcp BTL component.
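If you want to script this kind of query (the "scriptable information
sources" mentioned at the top of this FAQ section), ompi_info can also
produce machine-readable output; a minimal sketch, assuming your
ompi_info accepts the --parsable option:

    shell$ ompi_info --param btl tcp --level 9 --parsable | grep eager_limit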
8. I just added a new plugin to my Open MPI installation; do I need to recompile all my MPI apps?
If your installation of Open MPI uses shared libraries and
components are standalone plug-in files, then no. If you add a new
component (such as support for a new network), Open MPI will simply
open the new plugin at run-time — your applications do not need to be
recompiled or re-linked.
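For example (a minimal sketch; mca_btl_foo.so is a hypothetical
component, and the directory shown is the default plug-in location for
an installation configured with --prefix=/opt/openmpi):

    shell$ cp mca_btl_foo.so /opt/openmpi/lib/openmpi/
    shell$ ompi_info | grep foo    # the new component should now be listed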
9. I just upgraded my Myrinet|Infiniband network; do I need to
recompile all my MPI apps?
If your installation of Open MPI uses shared libraries and
components are standalone plug-in files, then no. You simply need to
recompile the Open MPI components that support that network and
re-install them.
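As a rough sketch (assuming you still have the build tree that was
originally configured for this installation; the component directory
shown is only an example -- pick the plug-in that matches your network):

    shell$ cd /path/to/openmpi-build/ompi/mca/btl/openib
    shell$ make all install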
More specifically, Open MPI shifts the dependency on the underlying
network away from the MPI applications and to the Open MPI plug-ins.
This is a major advantage over many other MPI implementations.
MPI applications will simply open the new plugin when they run.
10. We just upgraded our version of Open MPI; do I need to
recompile all my MPI apps?
It is unlikely. Most MPI applications solely interact with
Open MPI through the standardized MPI API and the constant values it
publishes in mpi.h. The MPI-2 API will not change until the MPI
Forum changes it.
We will try hard to make Open MPI's mpi.h stable such that the
values will not change from release-to-release. While we cannot
guarantee that they will stay the same forever, we'll try hard to make
it so.
11. I have an MPI application compiled for another MPI; will it
work with Open MPI?
It is highly unlikely. Open MPI does not attempt to
interoperate with other MPI implementations, nor with executables that
were compiled for them. Sorry!
MPI applications need to be compiled and linked with Open MPI in order
to run under Open MPI.
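For example, rebuilding with Open MPI's wrapper compiler is typically
all that is needed (my_app.c and the process count are placeholders):

    shell$ mpicc my_app.c -o my_app
    shell$ mpirun -np 4 ./my_app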