Re: [OMPI users] Pointers for understanding failure messages on NetBSD

2009-12-03 Thread Kevin.Buckley
>> I have actually already taken the IPv6 block and simply tried to
>> replace any IPv6 stuff with IPv4 "equivalents", eg:
>
> At the risk of showing a lot of ignorance, here's the block I cobbled
> together based on the IPv6 block.
>
> I have tried to keep it looking as close to the original IPv6
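The block itself is cut off in this preview, so purely as a hedged illustration: translating an IPv6 socket-address block to IPv4 usually comes down to swapping the address family, the sockaddr type, and the wildcard-address constant. A minimal C sketch (bind_any_ipv4 and everything in it are hypothetical names, not code from the thread):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Hypothetical helper: bind a TCP socket to all IPv4 interfaces.
     * Each comment notes the IPv6 counterpart the line replaces. */
    static int bind_any_ipv4(unsigned short port)
    {
        struct sockaddr_in addr;                  /* IPv6: struct sockaddr_in6     */
        int sd = socket(AF_INET, SOCK_STREAM, 0); /* IPv6: AF_INET6                */
        if (sd < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;                /* IPv6: sin6_family = AF_INET6  */
        addr.sin_addr.s_addr = htonl(INADDR_ANY); /* IPv6: sin6_addr = in6addr_any */
        addr.sin_port = htons(port);              /* IPv6: sin6_port               */

        if (bind(sd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            close(sd);
            return -1;
        }
        return sd;
    }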

Re: [OMPI users] Dynamic Symbol Relocation in Plugin Shared Library

2009-12-03 Thread Jeff Squyres
What version of Open MPI are you using? We just made a minor-but-potentially-important change to how we handle our dlopen code in 1.3.4. Additionally, you might try configuring Open MPI with the --disable-dlopen configure switch. This switch does two things: 1. Slurps all of Open MPI's plugins

[OMPI users] Dynamic Symbol Relocation in Plugin Shared Library

2009-12-03 Thread Cupp, Matthew R
Hi, I'm having an issue with the MPI version of an application and the dynamic relocation of symbols from plugin shared libraries. There are duplicate symbols in both the main executable (Engine) and a shared library that is opened at runtime using dlopen (Plugin). The plugin is opened with the com
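The preview cuts off before showing the dlopen flags, and those flags are usually what decides which copy of a duplicated symbol wins. A minimal hedged sketch of the setup being described ("plugin.so" and "plugin_entry" are hypothetical names, not from the thread; link with -ldl):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* RTLD_LOCAL keeps the plugin's symbols out of the global scope,
         * so later loads don't resolve against them; with RTLD_GLOBAL
         * they would.  Note the plugin's *own* references may still bind
         * to same-named symbols already present in the executable;
         * glibc's RTLD_DEEPBIND (non-portable) makes the plugin prefer
         * its own copies. */
        void *h = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
        if (h == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        void (*entry)(void) = (void (*)(void)) dlsym(h, "plugin_entry");
        if (entry != NULL)
            entry();

        dlclose(h);
        return 0;
    }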

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Eugene Loh
Jeff Squyres wrote:
> On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:
>> The allocation statement is ok:
>> allocate(vec(vec_size,vec_per_proc*(size-1)))
>> This allocates memory vec(32768, 2350)
> So this allocates 32768 rows, each with 2350 columns -- all stored contiguously in memory

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Jed Brown
On Thu, 3 Dec 2009 12:21:50 -0500, Jeff Squyres wrote:
> On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:
>
> > The allocation statement is ok:
> > allocate(vec(vec_size,vec_per_proc*(size-1)))
> >
> > This allocates memory vec(32768, 2350)

It's easier to translate to C rather than trying to rea
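Taking that suggestion at face value, here is a hedged sketch of the translation, using the dimensions quoted above and assuming the array holds double-precision reals (the element type isn't shown in the preview):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* allocate(vec(vec_size, vec_per_proc*(size-1))) with
         * vec_size = 32768 and 2350 columns.  Fortran is column-major,
         * so vec(i,j) sits at linear offset (j-1)*vec_size + (i-1). */
        const size_t vec_size = 32768, ncols = 2350;
        double *vec = malloc(vec_size * ncols * sizeof *vec);
        if (vec == NULL)
            return 1;

        /* Fortran's 1-based vec(i,j) in C: */
        size_t i = 5, j = 7;
        vec[(j - 1) * vec_size + (i - 1)] = 1.0;

        /* Each column vec(:,j) is one contiguous run of vec_size
         * doubles starting at &vec[(j-1)*vec_size] -- the unit a send
         * of one column would hand to MPI. */
        printf("vec(5,7) = %g\n", vec[(j - 1) * vec_size + (i - 1)]);

        free(vec);
        return 0;
    }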

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Jeff Squyres
On Dec 3, 2009, at 10:56 AM, Brock Palen wrote:

> The allocation statement is ok:
> allocate(vec(vec_size,vec_per_proc*(size-1)))
>
> This allocates memory vec(32768, 2350)

So this allocates 32768 rows, each with 2350 columns -- all stored contiguously in memory, in column-major order. Does th

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Eugene Loh
Ashley Pittman wrote:
> On Wed, 2009-12-02 at 13:11 -0500, Brock Palen wrote:
>> On Dec 1, 2009, at 11:15 AM, Ashley Pittman wrote:
>>> On Tue, 2009-12-01 at 10:46 -0500, Brock Palen wrote:
>>>> The attached code, is an example where openmpi/1.3.2 will lock up, if ran on 48 cores, of IB (4 c

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread vasilis gkanis
I had a similar problem with the Portland Fortran compiler. I knew that this was not caused by a network problem (I ran the code on a single node with 4 CPUs). After I tested pretty much everything, I decided to change the compiler. I used the Intel Fortran compiler and everything is running fine.

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Brock Palen
On Dec 1, 2009, at 8:09 PM, John R. Cary wrote:
> Jeff Squyres wrote:
>> (for the web archives) Brock and I talked about this .f90 code a bit off list -- he's going to investigate with the test author a bit more because both of us are a bit confused by the F90 array syntax used.
>
> Jeff, I talke

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Richard Treumann
MPI-standard-compliant management of eager send requires that this program work. There is nothing that says "unless the eager limit is set too high/low." Honoring this requirement in an MPI implementation can be costly. There are practical reasons to pass up this requirement, because most applications
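The program under discussion is not reproduced in this preview, so the following is only a sketch of the general pattern the post is about: many small, eager-eligible sends racing ahead of a slow receiver. On the reading above, a compliant MPI must keep this working (for example by throttling senders when receive-side resources run out) no matter where the eager limit sits. All names and sizes here are illustrative:

    #include <mpi.h>
    #include <unistd.h>

    /* Illustrative count, not from the thread. */
    #define MSGS_PER_SENDER 10000

    int main(int argc, char **argv)
    {
        int rank, size, i, buf = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* The receiver lags, so eager messages arrive unexpected
             * and must be absorbed or throttled by the library. */
            sleep(5);
            for (i = 0; i < MSGS_PER_SENDER * (size - 1); i++)
                MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            /* 4-byte payloads: well under any typical eager limit. */
            for (i = 0; i < MSGS_PER_SENDER; i++)
                MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }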

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-03 Thread Ashley Pittman
On Wed, 2009-12-02 at 13:11 -0500, Brock Palen wrote:
> On Dec 1, 2009, at 11:15 AM, Ashley Pittman wrote:
> > On Tue, 2009-12-01 at 10:46 -0500, Brock Palen wrote:
> >> The attached code, is an example where openmpi/1.3.2 will lock up, if
> >> ran on 48 cores, of IB (4 cores per node),
> >> The co

Re: [OMPI users] excessive virtual memory consumption of the MPI environment when setting a higher "ulimit -s"

2009-12-03 Thread Paul Kapinos
Hi Jeff, hi all,

> I can't think of what OMPI would be doing related to the predefined stack size -- I am not aware of anywhere in the code where we look up the predefined stack size and then do something with it.

I do not know the OMPI code at all - but what I see is the consumption of virtual memory
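A small diagnostic sketch (an assumption about what might help, not something proposed in the thread): have each launched process print the RLIMIT_STACK it actually inherited. One common cause of a huge virtual size -- again an assumption, not confirmed here -- is that helper threads get stacks sized from this limit, so a very large "ulimit -s" inflates VSZ without touching resident memory.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        /* RLIM_INFINITY ("ulimit -s unlimited") prints as a huge value. */
        printf("stack soft limit: %llu bytes\n", (unsigned long long) rl.rlim_cur);
        printf("stack hard limit: %llu bytes\n", (unsigned long long) rl.rlim_max);
        return 0;
    }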

[OMPI users] Mimicking timeout for MPI_Wait

2009-12-03 Thread Katz, Jacob
Hi, I wonder if there is a BKM (efficient and portable) to mimic a timeout with a call to MPI_Wait, i.e. to interrupt it once a given time period has passed if it hasn't returned by then. I'd appreciate it if anyone could send a pointer/idea. Thanks. Jacob M. Katz
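There is no timeout variant of MPI_Wait in the standard, but one common portable approximation is to poll MPI_Test against an MPI_Wtime deadline. A hedged sketch (wait_with_timeout is our name, not an MPI call; the tight polling loop burns CPU, and only receives, not sends, can be portably cancelled on timeout):

    #include <mpi.h>

    /* Poll MPI_Test until the request completes or `seconds` elapse.
     * Returns 1 on completion, 0 on timeout. */
    static int wait_with_timeout(MPI_Request *req, double seconds)
    {
        double deadline = MPI_Wtime() + seconds;
        int done = 0;

        while (!done && MPI_Wtime() < deadline)
            MPI_Test(req, &done, MPI_STATUS_IGNORE);
        return done;
    }

    int main(int argc, char **argv)
    {
        int rank, buf = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Irecv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            if (!wait_with_timeout(&req, 5.0))
                MPI_Cancel(&req);   /* receives may be cancelled portably */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }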

Re: [OMPI users] MPI_Comm_spawn lots of times

2009-12-03 Thread Nicolas Bock
That was quick. I will try the patch as soon as you release it. nick

On Wed, Dec 2, 2009 at 21:06, Ralph Castain wrote:
> Patch is built and under review...
>
> Thanks again
> Ralph
>
> On Dec 2, 2009, at 5:37 PM, Nicolas Bock wrote:
>
> Thanks
>
> On Wed, Dec 2, 2009 at 17:04, Ralph Castain