Re: [OMPI users] problem with rankfile

2012-09-04 Thread Siegmar Gross
Hi,

> Are *all* the machines Sparc? Or just the 3rd one (rs0)?

Yes, both machines are Sparc. I tried first in a homogeneous environment.

    tyr fd1026 106 psrinfo -v
    Status of virtual processor 0 as of: 09/04/2012 07:32:14
      on-line since 08/31/2012 15:44:42.
      The sparcv9 processor operates at 160 [...]
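For readers hitting the same rankfile issue, this is roughly what a minimal rankfile and invocation look like in the 1.6 series (the hostnames tyr and rs0 come from the thread; the slot numbers and executable name are illustrative assumptions, not from the poster's setup):

    # my_rankfile -- pin rank N to a host and slot (slot numbers assumed)
    rank 0=tyr slot=0
    rank 1=tyr slot=1
    rank 2=rs0 slot=0

    mpirun -np 3 -rf my_rankfile ./a.out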

Re: [OMPI users] -hostfile ignored in 1.6.1 / SGE integration broken

2012-09-04 Thread Reuti
On 04.09.2012 at 01:38, Ralph Castain wrote:

>>> Well that seems strange! Can you run 1.6.1 with the following on the
>>> mpirun cmd line:
>>>
>>> -mca ras_gridengine_debug 1 -mca ras_gridengine_verbose 10 -mca
>>> ras_base_verbose 10
>
> I'll take a look at this and see what's going on [...]
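Pulled together from the flags quoted above, the full diagnostic run would look something like this (the process count and executable name are placeholders):

    mpirun -np 4 \
        -mca ras_gridengine_debug 1 \
        -mca ras_gridengine_verbose 10 \
        -mca ras_base_verbose 10 \
        ./a.out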

Re: [OMPI users] OMPI 1.6.x Hang on khugepaged 100% CPU time

2012-09-04 Thread Yevgeny Kliteynik
On 8/30/2012 10:28 PM, Yong Qin wrote:
> On Thu, Aug 30, 2012 at 5:12 AM, Jeff Squyres wrote:
>> On Aug 29, 2012, at 2:25 PM, Yong Qin wrote:
>>
>>> This issue has been observed on OMPI 1.6 and 1.6.1 with the openib btl
>>> but not on 1.4.5 (the tcp btl is always fine). The application is VASP
>>> and only [...]
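Since the hang is reported only with the openib btl, a quick way to confirm that diagnosis (a sketch, not a fix proposed in the thread; the process count and executable are placeholders) is to exclude openib and rerun over TCP:

    mpirun -np 16 --mca btl tcp,self ./vasp

And because khugepaged is the transparent-huge-pages daemon, the THP setting on the compute nodes is worth inspecting (the sysfs path varies by distribution; this is the mainline-kernel location):

    cat /sys/kernel/mm/transparent_hugepage/enabled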

Re: [OMPI users] what is a "node"?

2012-09-04 Thread Jeff Squyres
On Sep 1, 2012, at 7:33 AM, Zbigniew Koza wrote:

> The new syntax works well (I used "man mpirun", which displayed the old
> syntax). Also, the report displayed by --report-bindings is far more
> human-readable than in previous versions of Open MPI.
>
> Out of curiosity, and also to suppress the [...]
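For reference, the binding syntax under discussion looks roughly like this on the 1.6 series (the process count and executable are placeholders):

    mpirun -np 4 --bind-to-core --report-bindings ./a.out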

Re: [OMPI users] OMPI 1.6.x Hang on khugepaged 100% CPU time

2012-09-04 Thread Yong Qin
On Tue, Sep 4, 2012 at 5:42 AM, Yevgeny Kliteynik wrote:
> On 8/30/2012 10:28 PM, Yong Qin wrote:
>> On Thu, Aug 30, 2012 at 5:12 AM, Jeff Squyres wrote:
>>> On Aug 29, 2012, at 2:25 PM, Yong Qin wrote:
>>>
>>>> This issue has been observed on OMPI 1.6 and 1.6.1 with openib btl
>>>> but not on 1. [...]
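One way to observe the symptom named in the subject line while a job is wedged (an assumption about how to reproduce it, not a command from the thread) is to check whether khugepaged is spinning on a compute node:

    ps -C khugepaged -o pid,pcpu,comm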

Re: [OMPI users] some mpi processes "disappear" on a cluster of servers

2012-09-04 Thread David Warren
Which Fortran compiler are you using? Most of them let you compile with -g together with optimization and then force a stack dump on a crash. I have found this useful for code that seems to vanish on random processors. Also, you might look at the Fortran options and see if it lets you a [...]
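As a concrete illustration of the advice above (the exact flag names depend on the compiler, which the thread has not yet established; these two are common cases, with a placeholder source file):

    # GNU Fortran: debug symbols plus optimization, with a backtrace on crash
    gfortran -g -O2 -fbacktrace prog.f90 -o prog

    # Intel Fortran: the equivalent traceback support
    ifort -g -O2 -traceback prog.f90 -o prog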

[OMPI users] python-mrmpi() failed

2012-09-04 Thread mariana Vargas
Hi, I am new to this. I have some codes that use MPI for Python, and I just installed openmpi, mrmpi, and mpi4py in my home directory (on a cluster account) without apparent errors. Then I tried to run this simple test in Python and I get the following error related to Open MPI. Could you help [...]
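A minimal mpi4py smoke test of the kind described (a sketch; the script name and process count are arbitrary):

    # hello.py -- check that mpi4py can initialize MPI and report ranks
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print("Hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))

Run it with, e.g., "mpirun -np 2 python hello.py"; if this fails with the same error, the problem is in the MPI installation rather than in the poster's own codes.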