Hi all,

I am setting up a 7+1 node cluster for MD simulation, specifically using GROMACS. I am using Ubuntu Lucid 64-bit on all machines, and I installed gromacs, gromacs-openmpi, and gromacs-mpich from the repository. The MPICH version of gromacs runs fine without any error. However, when I run the OpenMPI version of gromacs with
###########################################################################
mpirun.openmpi -np 8 -wdir /home/birg/Desktop/nfs/ -hostfile ~/Desktop/mpi_settings/hostfile mdrun_mpi.openmpi -v
###########################################################################

an error occurs, something like this:

###########################################################################
[birg-desktop-10:02101] Error: unknown option "--daemonize"
Usage: orted [OPTION]...
Start an Open RTE Daemon
   --bootproxy <arg0>        Run as boot proxy for <job-id>
   -d|--debug                Debug the OpenRTE
   -d|--spin                 Have the orted spin until we can connect a debugger to it
   --debug-daemons           Enable debugging of OpenRTE daemons
   --debug-daemons-file      Enable debugging of OpenRTE daemons, storing output in files
   --gprreplica <arg0>       Registry contact information.
   -h|--help                 This help message
   --mpi-call-yield <arg0>   Have MPI (or similar) applications call yield when idle
   --name <arg0>             Set the orte process name
   --no-daemonize            Don't daemonize into the background
   --nodename <arg0>         Node name as specified by host/resource description.
   --ns-nds <arg0>           set sds/nds component to use for daemon (normally not needed)
   --nsreplica <arg0>        Name service contact information.
   --num_procs <arg0>        Set the number of process in this job
   --persistent              Remain alive after the application process completes
   --report-uri <arg0>       Report this process' uri on indicated pipe
   --scope <arg0>            Set restrictions on who can connect to this universe
   --seed                    Host replicas for the core universe services
   --set-sid                 Direct the orted to separate from the current session
   --tmpdir <arg0>           Set the root for the session directory tree
   --universe <arg0>         Set the universe name as username@hostname:universe_name for this application
   --vpid_start <arg0>       Set the starting vpid for this job
--------------------------------------------------------------------------
A daemon (pid 5598) died unexpectedly with status 251 while attempting
to launch so we are aborting.

There may be more information reported by the environment (see above).

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun.openmpi noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun.openmpi was unable to cleanly terminate the daemons on the nodes shown
below. Additional manual cleanup may be required - please refer to the
"orte-clean" tool for assistance.
--------------------------------------------------------------------------
        birg-desktop-04 - daemon did not report back when launched
        birg-desktop-07 - daemon did not report back when launched
        birg-desktop-10 - daemon did not report back when launched
###########################################################################

It is strange that it only happens on one of the compute nodes (birg-desktop-10). If I remove birg-desktop-10 from the hostfile, I can run the OpenMPI gromacs successfully. Any idea? Thanks.

--
Regards,
THChew
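
P.S. Since the failure is tied to birg-desktop-10 only, and its orted rejects "--daemonize", I suspect that node may have a different Open MPI build (or a different orted on its PATH) than the others. The loop below is only a rough sketch of the check I have in mind: the hostnames are just the three reported in the error output, and I am assuming the Ubuntu package name openmpi-bin on every node.

###########################################################################
# Sketch only: compare the Open MPI package version and the orted that is
# picked up on each node (extend the list to cover the whole hostfile).
for host in birg-desktop-04 birg-desktop-07 birg-desktop-10; do
    echo "== $host =="
    ssh "$host" 'dpkg -s openmpi-bin | grep ^Version; which orted'
done
###########################################################################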