Re: [OMPI users] best way to ALLREDUCE multi-dimensional arrays in Fortran?

2009-09-25 Thread Martin Siegert
On Fri, Sep 25, 2009 at 10:12:33PM -0400, Greg Fischer wrote:
> It looks like the buffering operations consume about 15% as much time
> as the allreduce operations. Not huge, but not trivial, all the same.
> Is there any way to avoid the buffering step?
That depends on how you allocat…
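
A minimal sketch of the idea under discussion (not code from the thread, and assuming the Fortran array is contiguous): MPI_ALLREDUCE can reduce a multi-dimensional array in place via MPI_IN_PLACE, so no separate pack/unpack buffer is needed. The array name and shape here are hypothetical.

    program allreduce_inplace
      use mpi
      implicit none
      integer :: ierr
      double precision :: a(10, 20, 5)   ! hypothetical shape
      call MPI_INIT(ierr)
      a = 1.0d0
      ! The whole contiguous array is reduced in place; size(a) is the
      ! total element count across all dimensions.
      call MPI_ALLREDUCE(MPI_IN_PLACE, a, size(a), MPI_DOUBLE_PRECISION, &
                         MPI_SUM, MPI_COMM_WORLD, ierr)
      call MPI_FINALIZE(ierr)
    end program allreduce_inplace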

Re: [OMPI users] best way to ALLREDUCE multi-dimensional arrays in Fortran?

2009-09-25 Thread Greg Fischer
It looks like the buffering operations consume about 15% as much time as the allreduce operations. Not huge, but not trivial, all the same. Is there any way to avoid the buffering step?
On Thu, Sep 24, 2009 at 6:03 PM, Eugene Loh wrote:
> Greg Fischer wrote:
> > (I apologize in advance for…

Re: [OMPI users] error in checkpointing in open mpi

2009-09-25 Thread Joshua Hursey
On Sep 25, 2009, at 7:10 AM, Mallikarjuna Shastry wrote:
> Dear sir, I am sending the details as follows:
> 1. I am using openmpi-1.3.3 and BLCR 0.8.2.
> 2. I have installed BLCR 0.8.2 first under /root/MS.
> 3. Then I installed openmpi 1.3.3 under /root/MS.
> 4. I have configured and installed Open MPI as…
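
For reference, the usual BLCR checkpoint/restart workflow with an Open MPI 1.3 C/R-enabled build looks roughly like this (a sketch, not taken from the thread; ./app and the PID are placeholders):

    # run the job with checkpoint/restart support enabled
    mpirun -np 4 -am ft-enable-cr ./app
    # from another shell, checkpoint it using the PID of mpirun
    ompi-checkpoint -v <mpirun_pid>
    # later, restart from the resulting global snapshot
    ompi-restart ompi_global_snapshot_<mpirun_pid>.ckpt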

[OMPI users] [btl_openib_component.c:1373:btl_openib_component_progress] error polling HP CQ with -2 errno says Success

2009-09-25 Thread Charles Wright
Hello, I just got some new cluster hardware :) :( I can't seem to overcome an openib problem. I get this at run time:
    error polling HP CQ with -2 errno says Success
I've tried 2 different IB switches and multiple sets of nodes, all on one switch or the other, to try to eliminate the ha…
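
(A standard first step for isolating this kind of failure, sketched here as an assumption rather than advice from the thread: exclude the openib BTL so the job falls back to TCP, and see whether the error disappears.)

    # force the run off InfiniBand to check whether the error follows openib
    mpirun --mca btl ^openib -np 16 ./app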

[OMPI users] Help tracing cause of readv errors

2009-09-25 Thread Pacey, Mike
One of my users recently reported random hangs of his Open MPI application. I've run some tests using multiple 2-node 16-core runs of the IMB benchmark and can occasionally replicate the problem. Looking through the mail archive, a previous occurrence of this error seems to have been suspect code, but as i…
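
(One way to gather more detail on where the readv failures occur, sketched here as a suggestion rather than something from the thread; eth0 and the process count are placeholders:)

    # raise BTL verbosity and pin the TCP BTL to a known interface
    mpirun --mca btl_base_verbose 30 --mca btl_tcp_if_include eth0 \
           -np 32 ./IMB-MPI1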

[OMPI users] segfault on finalize

2009-09-25 Thread Thomas Ropars
Hi, I'm using r21970 of the trunk on Linux 2.6.18-3-amd64 and gcc version 4.2.3 (Debian 4.2.3-2). When I compile Open MPI with the default options, it works. But if I use the --with-platform=optimized option, then I get a segfault for every program I run:
    ==3073== Access not within mapped reg…

[OMPI users] "Failed to find the following executable" problem under Torque

2009-09-25 Thread Blosch, Edwin L
I'm having a problem running Open MPI under Torque. It complains as if there were a command syntax problem, but the three variations below are all correct, as best I can tell using mpirun -help. The environment in which the command executes, i.e. PATH and LD_LIBRARY_PATH, is correct. Torque is 2.3.x…
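
(A sketch of one common workaround, not from the thread, with placeholder paths: give mpirun the installation prefix and an absolute path to the executable, so the daemons Torque spawns on remote nodes can resolve both.)

    mpirun --prefix /opt/openmpi -np 16 /home/user/bin/app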

Re: [OMPI users] Multi-threading with OpenMPI ?

2009-09-25 Thread Richard Treumann
MPI_COMM_SELF is one example: the only task it contains is the local task. The other case I had in mind is where there is a master doing all the spawns. The master is launched as an MPI "job" but it has only one task. In that master, even MPI_COMM_WORLD is what I called a "single task communicator". Be…
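
A minimal sketch of the single-task case (not Dick's code; it assumes a worker executable named "worker" and an MPI built with thread support): MPI_COMM_SELF contains only the calling task, so the spawn is collective over that one task alone.

    program spawn_self
      use mpi
      implicit none
      integer :: ierr, provided, intercomm
      call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE, provided, ierr)
      ! Collective only over MPI_COMM_SELF, i.e. over this single task.
      call MPI_COMM_SPAWN('worker', MPI_ARGV_NULL, 1, MPI_INFO_NULL, 0, &
                          MPI_COMM_SELF, intercomm, MPI_ERRCODES_IGNORE, &
                          ierr)
      call MPI_FINALIZE(ierr)
    end program spawn_self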

[OMPI users] error in checkpointing in open mpi

2009-09-25 Thread Mallikarjuna Shastry
Dear sir, I am sending the details as follows:
1. I am using openmpi-1.3.3 and BLCR 0.8.2.
2. I have installed BLCR 0.8.2 first under /root/MS.
3. Then I installed openmpi 1.3.3 under /root/MS.
4. I have configured and installed Open MPI as follows:
   #./configure --with-ft=cr --enable-mpi-threads --wi…

Re: [OMPI users] Multi-threading with OpenMPI ?

2009-09-25 Thread Ashika Umanga Umagiliya
Thank you, Dick, for your detailed reply. I am sorry, could you explain more what you meant by "unless you are calling MPI_Comm_spawn on a single task communicator you would need to have a different input communicator for each thread that will make an MPI_Comm_spawn call"? I am confused with th…