Re: [O-MPI users] HPL & HPCC: Wedged

2005-10-25 Thread Troy Telford
Thanks; this workaround does allow it to complete its run.

On Tue, 25 Oct 2005 10:19:54 -0600, Galen M. Shipman wrote:
Correction: HPL_NO_DATATYPE should be: HPL_NO_MPI_DATATYPE. - Galen

On Oct 25, 2005, at 10:13 AM, Galen M. Shipman wrote:
Hi Troy, Sorry for the delay, I am now able to reproduce this behavior when I do not specify HPL_NO_DATATYPE. ...
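For reference, HPL_NO_MPI_DATATYPE is a stock compile-time option of the HPL benchmark itself (it makes HPL pack panels manually rather than describing them with an MPI derived datatype), not an Open MPI setting. A minimal sketch of how the workaround would typically be applied, assuming the standard HPL build layout (the Make.<arch> file name and make targets follow the stock HPL Makefile and are illustrative here):

    # in hpl/Make.<arch>, add the macro to the HPL compile options
    HPL_OPTS     = -DHPL_NO_MPI_DATATYPE

    # rebuild the benchmark from clean so the flag takes effect
    make arch=<arch> clean_arch_all
    make arch=<arch>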

Re: [O-MPI users] Correction to 'Questions about mpif90/g95'

2005-10-25 Thread Charles Williams
Hi Jeff, Recreating the problem will probably be a bit convoluted, but should be possible. Things are also complicated by the fact that I'm using the current developers' version of petsc. I am presently getting around the problem by reverting to an older version of g95, but I believe that this is ...

[O-MPI users] Ask the Cluster Expert

2005-10-25 Thread Nick I
Hi, Thanks to the response from many in the community, I have added sections about diskless clusters and information on 32-bit and 64-bit processors at the site I help run, www.ClusterBuilder.org. I also added a section called Ask the Cluster Expert ( http://www.cluste ...

Re: [O-MPI users] HPL & HPCC: Wedged

2005-10-25 Thread Galen M. Shipman
Correction: HPL_NO_DATATYPE should be: HPL_NO_MPI_DATATYPE. - Galen

On Oct 25, 2005, at 10:13 AM, Galen M. Shipman wrote:
Hi Troy, Sorry for the delay, I am now able to reproduce this behavior when I do not specify HPL_NO_DATATYPE. If I do specify HPL_NO_DATATYPE the run completes. We will be looking into this now.

Re: [O-MPI users] HPL & HPCC: Wedged

2005-10-25 Thread Galen M. Shipman
Hi Troy, Sorry for the delay, I am now able to reproduce this behavior when I do not specify HPL_NO_DATATYPE. If I do specify HPL_NO_DATATYPE the run completes. We will be looking into this now. Thanks, Galen

On Oct 21, 2005, at 5:03 PM, Troy Telford wrote:
I've been trying out the RC4 ...
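For anyone trying to reproduce this, the benchmark is simply launched under Open MPI's mpirun; the process count, hostfile name, and HPL.dat grid below are placeholders rather than values from the original report:

    # the P x Q grid in HPL.dat must match the number of ranks started here
    mpirun -np 4 --hostfile myhosts ./xhpl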

Re: [O-MPI users] thread support

2005-10-25 Thread Jeff Squyres
Hugh -- We are actually unable to replicate the problem; we've run some single-threaded and multi-threaded apps with no problems. This is unfortunately probably symptomatic of bugs still remaining in the code. :-( Can you try disabling MPI progress threads (I believe that tcp may ...
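Whether an installation was built with progress threads can be checked with ompi_info, and disabling them in this era of Open MPI was a configure-time choice; the switch name below is an assumption about the 1.0-series configure options, so treat it as a sketch rather than the definitive flag:

    # the 'Thread support' line reports whether MPI threads and
    # progress threads were compiled in
    ompi_info | grep -i thread

    # assumed configure switch for rebuilding without progress threads
    ./configure --disable-progress-threads [other options as before]
    make all install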