Re: [OMPI users] running with the dr pml.

2006-12-07 Thread Scott Atchley
On Dec 6, 2006, at 3:09 PM, Scott Atchley wrote: Brock and Galen, We are willing to assist. Our best guess is that OMPI is using the code differently than MPICH-GM does. One of our other developers, who is more comfortable with the GM API, is looking into it. We tried running with HPCC w...

Re: [OMPI users] Any known issues with ksh?

2006-12-07 Thread Jeff Squyres
On Dec 5, 2006, at 12:32 PM, Rainer Keller wrote: To check, you may also use: ssh echo $SHELL Make sure you quote this properly so that $SHELL is not evaluated on the local node: ssh 'echo $SHELL' -- Jeff Squyres Server Virtualization Business Unit Cisco Systems
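A minimal sketch of the quoting difference (the hostname remotehost is a placeholder, not taken from the thread):

    # Unquoted: the LOCAL shell expands $SHELL before ssh runs,
    # so the remote node merely echoes the local value back.
    ssh remotehost echo $SHELL

    # Quoted: the literal string 'echo $SHELL' is sent to the
    # remote node, whose login shell expands $SHELL there.
    ssh remotehost 'echo $SHELL'

The second form is the one that actually reveals which shell the remote node gives you.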

[OMPI users] configure problem using g77 on OSX 10.4

2006-12-07 Thread Thomas Spraggins
There is not a lot of description I can provide. I'm trying to configure under OS X 10.4, using the g77 Fortran compiler. Any hints will be most appreciated. Tom Spraggins t...@virginia.edu [Attachment: ompi-output.tar.gz, GNU Zip compressed data]
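For reference, pointing configure at g77 normally goes through the standard F77 autoconf variable (a sketch, not taken from the attached logs; the install prefix is hypothetical):

    # Build Open MPI with g77 as the Fortran 77 compiler
    ./configure F77=g77 --prefix=/opt/openmpi
    make all install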

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread Brock Palen
That is wonderful; that fixes the observed problem when running with OB1. Has a bug for this been filed to get RDMA working on Macs? The only working MPI library is MPICH-GM, as this problem happens with LAM-7.1.3 also, so this is on track to be a single bug. Would the person working on the DR PML like m...

Re: [OMPI users] OpenMPE build failure

2006-12-07 Thread Ryan Thompson
Anthony, MPE is built and working. The errors I saw were related only to rlog and sample; once I disabled them, there were no more errors. Thanks for your help :-) Ryan On Dec 6, 2006, at 3:44 PM, Anthony Chan wrote: On Wed, 6 Dec 2006, Ryan Thompson wrote: Hi Anthony, I made some p...

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread George Bosilca
Something is not clear to me in this discussion. Sometimes the subject was the DR PML and sometimes the OB1 PML; in fact, I'm completely in the dark ... Which PML fails the HPCC test on the Mac? When I look at the command line, it looks like it should be OB1, not DR. george. On Dec 7, 2006...

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread Brock Palen
There were two issues here; one uncovered the other. OB1 works just fine on OS X on PPC64. The DR PML does not work: there is no output to STDOUT, and although you can see the threads in 'top', the application never makes any progress. The original problem stems...
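For context, a PML is selected explicitly through the MCA parameter system; a sketch of the invocation (hosts file and binary name reused from the command later in this thread, the pml selection syntax assumed from Open MPI 1.x conventions):

    # Run with the DR (data reliability) PML instead of the default OB1
    mpirun -np 4 -machinefile hosts -mca pml dr ./hpcc.ompi.gm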

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread George Bosilca
On Dec 7, 2006, at 2:45 PM, Brock Palen wrote: $ mpirun -np 4 -machinefile hosts -mca btl ^tcp -mca btl_gm_min_rdma_size $((10*1024*1024)) ./hpcc.ompi.gm and HPL passes. The problem seems to be in the RDMA fragmenting code on OS X. The boundary values at the edges of the fragments are not...
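A hedged illustration of the kind of check that exposes this class of bug (not Open MPI code; the fill pattern, buffer sizes, and fragment size are all invented): fill the transfer buffer with a known byte pattern on the sender, then verify the bytes on both sides of each fragment boundary on the receiver.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define FRAG_SIZE (1024u * 1024u)   /* assumed fragment size */

    /* Corruption exactly at fragment edges points at the fragmenting
     * code rather than the payload path. */
    static int check_fragment_edges(const uint8_t *buf, size_t len)
    {
        int errors = 0;
        for (size_t off = FRAG_SIZE; off < len; off += FRAG_SIZE) {
            /* Sender filled every byte with (index & 0xff). */
            if (buf[off - 1] != (uint8_t)((off - 1) & 0xff) ||
                buf[off] != (uint8_t)(off & 0xff)) {
                fprintf(stderr, "bad data at fragment edge %zu\n", off);
                errors++;
            }
        }
        return errors;
    }

    int main(void)
    {
        size_t len = 4 * (size_t)FRAG_SIZE;
        uint8_t *buf = malloc(len);
        for (size_t i = 0; i < len; i++)
            buf[i] = (uint8_t)(i & 0xff);   /* known pattern */
        buf[2 * FRAG_SIZE] ^= 0xff;         /* simulate edge corruption */
        printf("%d corrupted fragment edge(s)\n",
               check_fragment_edges(buf, len));
        free(buf);
        return 0;
    }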

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread Brock Palen
On Dec 7, 2006, at 3:14 PM, George Bosilca wrote: On Dec 7, 2006, at 2:45 PM, Brock Palen wrote: $ mpirun -np 4 -machinefile hosts -mca btl ^tcp -mca btl_gm_min_rdma_size $((10*1024*1024)) ./hpcc.ompi.gm and HPL passes. The problem seems to be in the RDMA fragmenting code on OS X. The bound...

Re: [OMPI users] running with the dr pml.

2006-12-07 Thread Scott Atchley
George, Using DR was suggested to see if it could find an error. The original problem was using OB1, and HPL gave failed residuals. The hope was that DR would pinpoint any problems. It did not, and HPL did not progress at all (the GM counters incremented, but no tests were completed succes...

Re: [OMPI users] myrinet problems on OSX

2006-12-07 Thread Reese Faucette
This is due to a problem in the (void *) -> (uint64_t) conversion in OMPI. The following patch fixes the problem, as would an appropriate cast of pval, I suspect. The problem is an inappropriate use of ompi_ptr_t. I would guess that other uses of lval might be suspect also (such as in the Portals c...
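A minimal sketch of why mixing the pointer and 64-bit views of such a union misbehaves on 32-bit big-endian platforms like PPC (the field names pval/lval follow the message; the union definition is a simplified stand-in for ompi_ptr_t, not the real one):

    #include <stdio.h>
    #include <stdint.h>

    /* Simplified stand-in for ompi_ptr_t: a pointer and a 64-bit
     * integer sharing storage. */
    typedef union {
        void     *pval;   /* 32 bits wide on a 32-bit platform */
        uint64_t  lval;   /* always 64 bits wide */
    } ptr_u;

    int main(void)
    {
        int x = 42;
        ptr_u u;
        u.lval = 0;       /* clear all 64 bits */
        u.pval = &x;      /* writes only 32 of the 64 bits */
        /* On 32-bit big-endian PPC the pointer lands in the HIGH
         * half of lval, so reading lval yields the address shifted
         * up by 32 bits; an explicit cast avoids the aliasing. */
        uint64_t via_union = u.lval;
        uint64_t via_cast  = (uint64_t)(uintptr_t)u.pval;
        printf("lval view: 0x%016llx, cast: 0x%016llx\n",
               (unsigned long long)via_union,
               (unsigned long long)via_cast);
        return 0;
    }

On a 64-bit host the two values agree, which is why the bug only shows up on 32-bit big-endian machines.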

[OMPI users] OpenMPI for 32/64 bit IB drivers

2006-12-07 Thread Aaron McDonough
Thanks Jeff. It turns out that all our IB blades are EM64T; it's just that some have i686 OSes and some x86_64 OSes, so I think we'll move to all-x86_64 installs on IB hosts. I guess if we make the Open MPI build 32-bit and link against 32-bit IB drivers (my interpretation of the release no...

Re: [OMPI users] OpenMPI for 32/64 bit IB drivers

2006-12-07 Thread Jeff Squyres
On Dec 7, 2006, at 5:04 PM, Aaron McDonough wrote: It turns out that all our IB blades are EM64T; it's just that some have i686 OSes and some x86_64 OSes, so I think we'll move to all-x86_64 installs on IB hosts. I guess if we make the Open MPI build 32-bit and link against 32-bit IB dri...
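For reference, a 32-bit Open MPI build on an x86_64 host is typically forced through the compiler flags (a sketch assuming GNU compilers; the install prefix is hypothetical):

    # Force a 32-bit build so it links against 32-bit IB drivers
    ./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 \
        --prefix=/opt/openmpi-32
    make all install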