I may have deleted any responses to this message. In either case, we appear
to have fixed the problem by installing a more current version of Open MPI.
On Thu, Feb 14, 2013 at 2:27 PM, Erik Nelson wrote:
>
> I'm encountering an error using qsub that none of us can figure out. MPI C++ programs
Hi again,
I managed to reproduce the "bug" with a simple case (see the cpp file
attached). I am running this on 2 nodes with 8 cores each. If I run with
mpiexec ./test-mpi-latency.out
then the MPI_Ssend operations take about 1e-5 seconds for intra-node
ranks, and about 11 seconds for inter-node ranks.
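(The attached file is not reproduced in this archive. A minimal sketch along
the same lines, with illustrative names only, would time an MPI_Ssend from
rank 0 to every other rank; the attached test presumably also does some work
on the receiving side after MPI_Recv, which is where the delay shows up.)

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char byte = 0;
    if (rank == 0) {
        // Time a synchronous send to each other rank.
        for (int dest = 1; dest < size; ++dest) {
            double t0 = MPI_Wtime();
            MPI_Ssend(&byte, 1, MPI_CHAR, dest, 0, MPI_COMM_WORLD);
            std::printf("rank 0 -> rank %d: %g s\n", dest, MPI_Wtime() - t0);
        }
    } else {
        MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // (the real test presumably goes on computing here without
        //  re-entering the MPI library, which is what exposes the delay)
    }

    MPI_Finalize();
    return 0;
}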
Looks to me like you are really saying that taskset didn't do what you expected
- with that cmd line, OMPI didn't do anything to bind your procs. It just
launched "taskset".
On Feb 15, 2013, at 11:34 AM, Kranthi Kumar wrote:
> With Open MPI this is the command I used:
>
> mpirun -n 6 taskset
With Open MPI this is the command I used:
mpirun -n 6 taskset -c 0,2,4,6,8,10 ./a.out
With the Intel MPI library I set the environment variable
I_MPI_PIN_MAPPING=6:0 0,1 2,2 4,3 6,4 8,5 10
and ran with
mpirun -n 6 ./a.out
Hi again,
I found out that if I add an MPI_Barrier after the MPI_Recv part, then
there is no minute-long latency. Is it possible that even if MPI_Recv
returns, the openib btl does not guarantee that the acknowledgement is
sent promptly? In other words, is it possible that the computation
following the MPI_Recv delays the acknowledgement?
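(An illustrative sketch of the pattern just described, assuming at least two
ranks; the actual attached test is not shown here and may differ.)

#include <mpi.h>

// Hypothetical stand-in for the long computation that follows the receive.
static void long_local_computation() { /* ... */ }

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char byte = 0;
    if (rank == 0) {
        MPI_Ssend(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);  // the slow send
    } else if (rank == 1) {
        MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    // Added barrier: re-entering MPI right after the receive appears to let
    // the transport progress the acknowledgement, and the long latency seen
    // on the MPI_Ssend side disappears.
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 1) {
        long_local_computation();  // previously followed MPI_Recv directly
    }

    MPI_Finalize();
    return 0;
}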
+1
In addition, you might want to upgrade to Open MPI 1.6.x -- 1.4.x is fairly
ancient. 1.6.x's mpirun also has a --report-bindings option which tells you
where procs are bound. For example:
mpirun --bind-to-core --report-bindings ...etc.
On Feb 15, 2013, at 11:46 AM, Brice Goglin wrote:
IntelMPI binds processes by default, while OMPI doesn't. What's your
mpiexec/mpirun command-line?
Brice
On 15/02/2013 17:34, Kranthi Kumar wrote:
> Hello Sir
>
> Here below is the code which I wrote using hwloc for getting the
> bindings of the processes.
> I tested this code on SDSC Gordon
Hello Sir
Below is the code I wrote using hwloc to get the bindings of the
processes. I tested this code on the SDSC Gordon supercomputer, which has
Open MPI 1.4.3, and on TACC Stampede, which uses Intel's MPI library (IMPI).
With Open MPI I get core id 0 for all the processes.
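(The attached code is not reproduced in this archive. A minimal sketch of
querying the binding with hwloc from each rank might look like the following;
it reports the cpuset bitmap rather than a single core id, and assumes the
program is compiled with mpicxx and linked against hwloc.)

#include <mpi.h>
#include <hwloc.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    hwloc_topology_t topology;
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    // Query the CPU binding of the whole process (not a single thread).
    hwloc_get_cpubind(topology, set, HWLOC_CPUBIND_PROCESS);

    char* str = NULL;
    hwloc_bitmap_asprintf(&str, set);   // e.g. "0x00000001" if bound to PU 0
    std::printf("rank %d bound to cpuset %s\n", rank, str);
    free(str);

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topology);
    MPI_Finalize();
    return 0;
}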
This typically means that your Intel C++ compiler installation is borked.
Did you look at config.log to see the specific error? Are you able to compile
trivial C++ programs with icpc?
Please send all the information listed here:
http://www.open-mpi.org/community/help/
On Feb 15, 2013, at
Hi.
I am trying to compile Open MPI with the Intel compilers. This is causing me
some issues.
Link for config.txt: http://homes.ist.aau.dk/mb/config.txt
The ./configure crashes when checking the alignment of bool for the C++
compiler:
./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/pack/openmpi-1.6.3-intel
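(As a quick sanity check of the suggestion above, a trivial C++ program such
as the one below, with an illustrative file name like hello.cpp, should
compile and run with a plain "icpc hello.cpp -o hello" if the compiler
installation is healthy.)

#include <iostream>

int main()
{
    // Exercises bool, which is roughly what the failing configure test probes.
    bool flag = true;
    std::cout << "sizeof(bool) = " << sizeof(bool)
              << ", flag = " << std::boolalpha << flag << "\n";
    return 0;
}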