MPI Software for PCI Express networks
Message Passing Interface (MPI) is a well-defined programming API for High Performance Computing applications. It provides a rich set of communication primitives for efficient communication between processes running on SMPs and on clusters interconnected by various types of networks.
MPI over SISCI
The SISCI API is a well-established programming API for shared memory applications. NICEVT created a SISCI transport for MPICH2; the code is open source and is maintained by engineers from Dolphin and A3Cube.
The software is currently available for Windows and Linux and is based on MPICH2-1.0.2p1.
Download NMPI for Linux
The NMPI 1.4.0 source can be downloaded here
BUILDING NMPI FOR LINUX
Building NMPI requires the SISCI-devel package to be installed, which provides the SISCI header files. Once built, NMPI only requires the SISCI runtime.
Unpack the NMPI .tar.gz file and configure with
# ./configure --with-device=ch3:stream --with-sisci=/opt/DIS --prefix=/opt/NMPI
Here --with-device=ch3:stream enables the shared memory transport for the MPICH2 fork that NMPI is based on, and --with-sisci points to the top-level installation directory of the Dolphin software (/opt/DIS is the default). --prefix specifies the top-level directory for the NMPI installation.
NMPI supports most MPICH2 configuration options - the example above will build NMPI with MPD run-management support.
then build and install with
# make install
The resulting /opt/NMPI directory (or the prefix you chose) should be distributed to each node that will run MPI applications.
Once NMPI is installed, MPD needs to be configured. Please see src/pm/mpd/README for a full description. The lines below set up MPD for an unprivileged user:
# echo "MPD_SECRETWORD=anagrom-ataf" > ~/.mpd.conf
# chmod 700 ~/.mpd.conf
(To run MPD as root, set up /etc/mpd.conf instead.)
The .mpd.conf file must not be readable by other users, and it needs to be present on each node (e.g. via an NFS-mounted home directory).
On the job-management node (which does not need to be a cluster node), you also need a machinefile that lists the nodes to run on (including the job-management node itself if it is part of the cluster):
# cat machine
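A machinefile is simply a list of hostnames, one per line. As a sketch (the hostnames below are placeholders, not part of any real installation):

```shell
node-1
node-2
node-3
node-4
```

Each hostname must be reachable over ssh from the job-management node so that mpdboot can start the MPDs.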
RUNNING MPI JOBS
MPI applications are started with mpiexec, which contacts the MPDs running on the nodes to start the actual job. NMPI provides the mpdboot and mpdallexit tools to control the MPDs. mpdboot uses ssh to start the MPDs on the other nodes; further job management and application output are carried over regular sockets.
A typical session running two MPI jobs looks like this:
# /opt/NMPI/bin/mpdboot -n 4 -f machine
(This causes the current user session to ssh to each of the first 4 nodes in the './machine' file and spawn an MPD process. The path information from the configure line is compiled into mpdboot.)
# /opt/NMPI/bin/mpdtrace
(mpdtrace lists the running nodes - optional, but useful for troubleshooting)
# /opt/NMPI/bin/mpiexec -n 2 /opt/NMPI/examples/cpi
(This runs the 'cpi' test application that comes with NMPI on 2 processors, one from each of the first two mpdbooted nodes.)
# /opt/NMPI/bin/mpiexec -n 8 /opt/NMPI/examples/cpi
(Same as above, but with 8 processes assigned round-robin across the nodes. You do not need to run on all booted nodes, nor are you limited to one process per node.)
# /opt/NMPI/bin/mpdallexit
(this shuts down the started MPDs, terminating the session)
BUILDING MPI APPLICATIONS
NMPI comes with the traditional compiler wrappers - mpicc, mpicxx, mpif77 and mpif90 - which handle the include paths and linking for MPI applications.
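As a sketch of the typical workflow (the source file name hello.c and the process count are arbitrary examples, not part of the NMPI distribution):

```shell
# compile an MPI application with the NMPI compiler wrapper,
# which adds the MPI include paths and libraries automatically
/opt/NMPI/bin/mpicc -O2 -o hello hello.c

# launch it on 4 processes via the MPDs started earlier with mpdboot
/opt/NMPI/bin/mpiexec -n 4 ./hello
```

The wrappers call the underlying system compiler, so the usual compiler flags (optimization, warnings, etc.) can be passed through unchanged.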
NMPI for Windows
The NMPI binaries are available as an optional feature in the Dolphin Windows software installers. Please download the appropriate Windows installer.
The header files, documentation and libraries are installed with the Development feature.
MPI over SuperSockets
The Dolphin SuperSockets or TCPoPCIe software can also run any standard Linux MPI library that supports Ethernet. Performance depends heavily on the Ethernet transport of the selected MPI library; it will normally be significantly better than over standard Ethernet, but not as fast as NMPI.
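As an illustration only - the exact mechanism and library path depend on your Dolphin installation, so consult the SuperSockets documentation - a socket-based MPI job is typically redirected over SuperSockets by preloading the SuperSockets user-space library before starting the application:

```shell
# hypothetical example: preload the SuperSockets redirect library so the
# MPI library's TCP sockets transparently use the PCI Express interconnect.
# The library name and path below are assumptions - check your installation.
LD_PRELOAD=/opt/DIS/lib64/libksupersockets.so mpirun -np 4 ./hello
```

No recompilation of the MPI library or the application is needed with this approach; the redirection happens at the socket layer.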
Please contact firstname.lastname@example.org with any questions.