[OMPI users] MPI_Send for entire matrix when allocating memory dynamically

2009-10-28 Thread Natarajan CS
Hello all, Firstly, my apologies for a duplicate post to the LAM/MPI list. I have the following simple MPI code. I was wondering if there was a workaround for sending a dynamically allocated 2-D matrix? Currently I can send the matrix row by row; however, since the rows are not contiguous, I cannot send the entire matrix in one call ...
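
A common workaround, not part of the original post, is to allocate the matrix as one contiguous block plus a separate array of row pointers: m[i][j] indexing still works, but the whole matrix can then be moved with a single MPI_Send/MPI_Recv. The C sketch below assumes a double matrix and at least two ranks; the sizes, tag, and rank numbers are illustrative. (An MPI derived datatype describing the non-contiguous rows would be another option.)

    /* contiguous 2-D allocation: one data block plus a row-pointer table */
    #include <stdlib.h>
    #include <mpi.h>

    double **alloc_matrix(int nrows, int ncols)
    {
        double  *data = malloc((size_t)nrows * ncols * sizeof *data); /* contiguous storage */
        double **rows = malloc((size_t)nrows * sizeof *rows);         /* row pointers       */
        for (int i = 0; i < nrows; i++)
            rows[i] = data + (size_t)i * ncols;                       /* row i starts here  */
        return rows;
    }

    int main(int argc, char **argv)
    {
        int rank, nrows = 4, ncols = 5;                               /* example sizes */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        double **m = alloc_matrix(nrows, ncols);
        if (rank == 0) {
            for (int i = 0; i < nrows; i++)
                for (int j = 0; j < ncols; j++)
                    m[i][j] = i * ncols + j;
            MPI_Send(&m[0][0], nrows * ncols, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* one receive picks up the entire matrix */
            MPI_Recv(&m[0][0], nrows * ncols, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        free(m[0]);                                                   /* data block    */
        free(m);                                                      /* pointer table */
        MPI_Finalize();
        return 0;
    }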

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Luke Shulenburger
Thanks, that's what I wanted to know. And thanks for all the help! Luke On Wed, Oct 28, 2009 at 9:06 PM, Ralph Castain wrote: > I see. No, we don't copy your envars and ship them to remote nodes. Simple > reason is that we don't know which ones we can safely move, and which would > cause problems. ...

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Ralph Castain
I see. No, we don't copy your envars and ship them to remote nodes. The simple reason is that we don't know which ones we can safely move, and which would cause problems. However, we do provide a mechanism for you to tell us which envars to move. Just add -x LD_LIBRARY_PATH to your mpirun command line.
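
For example, with the flag Ralph describes, a job might be launched like this (the process count and program name are placeholders, not from the thread):

    mpirun -x LD_LIBRARY_PATH -np 4 ./a.out

The -x option exports the named environment variable from the shell that runs mpirun to the processes started on the remote nodes.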

Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards.

2009-10-28 Thread Jeff Squyres
On Oct 28, 2009, at 7:25 PM, nam kim wrote: > Does that make sense? It makes sense. If you're unable to upgrade to OFED, you can probably just try those cards with that stack and see what happens. It will *likely* work, but that's somewhat of a guess. Like I mentioned earlier, OMPI d...

Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards.

2009-10-28 Thread nam kim
On Wed, Oct 28, 2009 at 1:09 PM, Jeff Squyres wrote: > On Oct 28, 2009, at 1:08 PM, nam kim wrote: > >> Head node and other computing nodes have topspin-ib-rhel4-3.2.0-118 >> installed with CISCO IB card (HCA-320-A1). >> > > Is there a reason you're not using OFED?  OFED is *much* more modern and ...

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Luke Shulenburger
My apologies for not being clear. These variables are set in my environment; they just are not published to the other nodes in the cluster when the jobs are run through the scheduler. At the moment, even though I can use mpirun to run jobs locally on the head node without touching my environment, ...

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Ralph Castain
Normally, one simply sets LD_LIBRARY_PATH in your environment to point to the right thing. Alternatively, you could configure OMPI with --enable-mpirun-prefix-by-default. This tells OMPI to automatically add the prefix you configured the system with to your LD_LIBRARY_PATH and PATH envars.
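
A minimal sketch of that configure step (the install prefix is an arbitrary example, not from the thread):

    ./configure --prefix=/opt/openmpi-1.3.3 --enable-mpirun-prefix-by-default
    make all install

With the option enabled, mpirun adds the configured prefix's bin and lib directories to the PATH and LD_LIBRARY_PATH seen on the remote nodes, so they do not have to be exported by hand.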

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Luke Shulenburger
Thanks for the quick reply. This leads me to another issue I have been having with openmpi as it relates to SGE. The "tight integration" works, in that I do not have to give mpirun a hostfile when I use the scheduler, but it does not seem to be passing on my environment variables. Specifically, because ...

Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards.

2009-10-28 Thread Jeff Squyres
On Oct 28, 2009, at 1:08 PM, nam kim wrote: Head node and other computing nodes have topspin-ib-rhel4-3.2.0-118 installed with CISCO IB card (HCA-320-A1). Is there a reason you're not using OFED? OFED is *much* more modern and has many more features than the old Cisco/Topspin IB driver stack ...

Re: [OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Ralph Castain
I'm afraid we have never really supported this kind of nested invocation of mpirun. If it works with any version of OMPI, it is totally a fluke - it might work one time, and then fail the next. The problem is that we pass envars to the launched processes to control their behavior, and these conflict ...

[OMPI users] problem calling mpirun from script invoked with mpirun

2009-10-28 Thread Luke Shulenburger
Hello, I am having trouble with a script that calls MPI. Basically my problem distills to wanting to call a script with: mpirun -np # ./script.sh where script.sh looks like: #!/bin/bash mpirun -np 2 ./mpiprogram Whenever I invoke script.sh normally (as ./script.sh for instance) it works fine, but ...

Re: [OMPI users] With IMPI works fine, with OMPI fails

2009-10-28 Thread Matthew Erickson
An "Internal compiler error" indicates a bug in Intel Fortran (a segfault in this case), and not in anything the compiler is trying to build- if the code you're building has an error, the compiler should properly print out an error statement. You should forward this along to Intel. > -Origina

[OMPI users] With IMPI works fine, with OMPI fails

2009-10-28 Thread Sangamesh B
Hi all, The compilation of a Fortran application - CPMD-3.13.2 - with OpenMP + OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a Rocks-5.1 Linux cluster: /lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC -DPARALLEL -DMYRINET ./potfor

Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards.

2009-10-28 Thread nam kim
Jeff, Thank you for your reply! A further question: the head node and other computing nodes have topspin-ib-rhel4-3.2.0-118 installed with a CISCO IB card (HCA-320-A1). Our new nodes have a Mellanox IB card (MHRH19-XTC). My question is how to compile openmpi with heterogeneous IB cards? I used to compile ...

[OMPI users] checkpoint openmpi-1.3.3+sge62

2009-10-28 Thread Sergio Díaz
Hello, I have achieved the checkpoint of an easy program without SGE. Now I'm trying to do the openmpi+sge integration, but I have some problems... When I try to checkpoint the mpirun PID, I get an error similar to the one you get when the PID doesn't exist. The example is below. Any ideas?