Hi
I have an app.ac1 file like the one below:
[tsakai@vixen local]$ cat app.ac1
-H vixen.egcrc.org -np 1 Rscript
/Users/tsakai/Notes/R/parallel/Rmpi/local/fib.R 5
-H vixen.egcrc.org -np 1 Rscript
/Users/tsakai/Notes/R/parallel/Rmpi/local/fib.R 6
-H blitzen.egcrc.org -np 1 Rscript
/U
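As a hedged aside (not from the original message): an appfile like this is normally handed to mpirun with the --app option, and each appfile line carries its own -H and -np settings, e.g.

shell$ mpirun --app app.ac1

so no host list is needed on the mpirun command line itself.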
On Wed, 9 Feb 2011, Jeremiah Willcock wrote:
I get the following Open MPI error from 1.4.1:
*** An error occurred in MPI_Bcast
*** on communicator MPI COMMUNICATOR 3 SPLIT FROM 0
*** MPI_ERR_IN_STATUS: error code in status
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
(hostname and po
I get the following Open MPI error from 1.4.1:
*** An error occurred in MPI_Bcast
*** on communicator MPI COMMUNICATOR 3 SPLIT FROM 0
*** MPI_ERR_IN_STATUS: error code in status
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
(hostname and port removed from each line). There is no MPI_St
Gus is correct - the -host option needs to be in the appfile
On Feb 9, 2011, at 3:32 PM, Gus Correa wrote:
> Sindhi, Waris PW wrote:
>> Hi,
>> I am having trouble using the --app option with OpenMPI's mpirun
>> command. The MPI processes launched with the --app option get launched
>> on the l
You may have mentioned this in a prior mail, but what version are you using?
I tested and am unable to replicate your problem -- my openmpi-mca-params.conf
file is always read.
Double check the value of your mca_param_files MCA parameter:
shell$ ompi_info --param mca param_files
Mine comes out
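As a hedged illustration (not part of the quoted message), the same check can also be done by dumping all parameters and grepping for the one in question; the exact output format varies by Open MPI version:

shell$ ompi_info --all | grep mca_param_files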
Thanks Terry.
Unfortunately, -fno-omit-frame-pointer is the default for the Intel compiler
when -g is used, which I am using since it is necessary for source-level
debugging. So the compiler kindly tells me that it is ignoring your suggested
option when I specify it. :)
Also, since I can rep
Sindhi, Waris PW wrote:
Hi,
I am having trouble using the --app option with OpenMPI's mpirun
command. The MPI processes launched with the --app option get launched
on the Linux node that the mpirun command is executed on.
The same MPI executable works when specified on the command line using
the
This sounds like something I ran into some time ago that involved the
compiler omitting frame pointers. You may want to try to compile your
code with -fno-omit-frame-pointer. I am unsure whether you need to do
the same when building MPI, though.
--td
On 02/09/2011 02:49 PM, Dennis McRitchie
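A minimal sketch of such a compile line, with a hypothetical source file name (and recall from the earlier reply that the Intel compiler already implies -fno-omit-frame-pointer when -g is used):

shell$ mpicc -g -fno-omit-frame-pointer -o my_app my_app.c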
Hi,
I'm encountering a strange problem and can't find it discussed on
this mailing list.
When building and running my parallel program using any recent Intel compiler
and OpenMPI 1.2.8, TotalView behaves entirely correctly, displaying the
"Process mpirun is a parallel job. Do you w
Hi,
I am having trouble using the --app option with OpenMPI's mpirun
command. The MPI processes launched with the --app option get launched
on the Linux node that the mpirun command is executed on.
The same MPI executable works when specified on the command line using
the -np option.
Please let
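For illustration only (the host names and executable here are hypothetical), the two launch styles being compared are roughly:

shell$ mpirun -np 2 -H nodeA,nodeB ./my_mpi_app    # options on the command line
shell$ mpirun --app my_appfile                     # per-process options read from an appfile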
On Feb 8, 2011, at 8:21 PM, Ralph Castain wrote:
I would personally suggest not reconfiguring your system simply to
support a particular version of OMPI. The only difference between
the 1.4 and 1.5 series wrt slurm is that we changed a few things to
support a more recent version of slurm. I
It looks like the logic in the configure script is turning on the FT thread for
you when you specify both '--with-ft=cr' and '--enable-mpi-threads'.
Can you send me the output of 'ompi_info'? Can you also try the MCA parameter
that I mentioned earlier to see if that changes the performance?
I
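As a hedged pointer (not from the original exchange), the checkpoint/restart-related build settings can usually be spotted in the ompi_info output with something like:

shell$ ompi_info | grep -i checkpoint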
Hi Josh,
Thanks for the reply. I did not use the '--enable-ft-thread' option. Here are
my build options:
CFLAGS=-g \
./configure \
--with-ft=cr \
--enable-mpi-threads \
--with-blcr=/home/nguyen/opt/blcr \
--with-blcr-libdir=/home/nguyen/opt/blcr/lib \
--prefix=/home/nguyen/opt/openmpi \
--with-open