clarkmpi@xxxxxxxxxxx wrote:
I have been looking for a tutorial on the setup procedure for an openMPI system on RHEL or Fedora.
I'm just trying to set up a test environment on 3 whitebox computers on a gigabit network.
Where do you put the information on nodes?
Is there special configuration for dual core and quad core systems?
In general there is a file that mpirun reads on startup that determines which
nodes are used and how many processes run on each node; you can set a different
number of processes for nodes with different numbers of processors. Just make sure
that the application and the mpi installation are in a directory that is shared to all
nodes, and then nothing has to be installed on the individual machines.
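As an example, a minimal hostfile might look like the sketch below (the hostnames
are placeholders; "slots" is how many processes to start on each node, typically
the number of cores, which is how you handle dual vs. quad core boxes):

    # example hostfile -- hostnames are made up
    node01 slots=2    # dual-core machine
    node02 slots=2    # dual-core machine
    node03 slots=4    # quad-core machine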
You *WILL* have to have directories shared between all of the machines,
users/uids/gids matching, and ssh configured and working (passwordless; a sketch
follows below). You can start things with daemons, but that is mostly more
trouble than it is worth if you have under 500 cpus; above 500 cpus you have to
use the daemons with some versions of mpi.
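A typical way to get passwordless ssh going, assuming the home directory is one
of the shared directories (the hostname is a placeholder):

    # one-time, per user: generate a key with an empty passphrase
    ssh-keygen -t rsa
    # with a shared home, authorizing your own key is enough
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    # test: this should run without a password prompt
    ssh node01 hostname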
It is generally called either a nodesfile or a hostfile; it lives with the given
user's job and is specified on the mpirun (or equivalent) command line. If you are
using a job queuing system (pbs/torque) with the proper options, it will generate
the nodesfile and pass it to the command that starts the job as an environment
variable.
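For example, running by hand against a hostfile like the one above might look
like the first line below (the application path is a placeholder); under
pbs/torque the generated node list usually shows up in the $PBS_NODEFILE
environment variable, so a job script can pass that instead:

    # interactive run: 8 processes spread over the hostfile
    mpirun -np 8 --hostfile myhosts /shared/bin/my_mpi_app

    # inside a pbs/torque job script: the node file has one line per allocated slot
    mpirun -np $(wc -l < $PBS_NODEFILE) --hostfile $PBS_NODEFILE /shared/bin/my_mpi_app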
The system wide config files are somewhat useless, as they only allow you to run
things one way: you cannot control which nodes are used for different jobs. Most
people don't use the system wide config files, and they don't work properly.
Exactly how you run the job depends on how you configure openmpi, and *NONE* of
the mpi stuff is specific to *ANY* distribution (or even specific to Linux vs.
Unix: the way you run openmpi configured a certain way is the same on almost all
Linux/Unix machines).
Roger