Hello all,
Firstly, my apologies for the duplicate post on the LAM/MPI list. I have
the following simple MPI code. I was wondering if there is a workaround for
sending a dynamically allocated 2-D matrix? Currently I can send the matrix
row by row; however, since the rows are not contiguous, I cannot send the
whole matrix in a single call.
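A common workaround (sketched below for a matrix of doubles; this is an illustration, not code from the original post) is to allocate the matrix as one contiguous block plus an array of row pointers, so m[i][j] indexing still works but the whole matrix can be shipped with a single MPI_Send/MPI_Recv:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Allocate nrows x ncols doubles as one contiguous block, with a separate
 * array of row pointers so m[i][j] indexing still works. */
static double **alloc_matrix(int nrows, int ncols)
{
    int i;
    double *data = malloc((size_t)nrows * ncols * sizeof(double));
    double **m = malloc((size_t)nrows * sizeof(double *));
    for (i = 0; i < nrows; i++)
        m[i] = data + (size_t)i * ncols;
    return m;
}

int main(int argc, char **argv)
{
    int rank, i, j, nrows = 4, ncols = 5;
    double **m;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    m = alloc_matrix(nrows, ncols);

    if (rank == 0) {
        for (i = 0; i < nrows; i++)
            for (j = 0; j < ncols; j++)
                m[i][j] = i * ncols + j;
        /* &m[0][0] points at the start of the contiguous block,
         * so one send covers the entire matrix. */
        MPI_Send(&m[0][0], nrows * ncols, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&m[0][0], nrows * ncols, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received m[3][4] = %g\n", m[3][4]);
    }

    free(m[0]);   /* the data block */
    free(m);      /* the row pointers */
    MPI_Finalize();
    return 0;
}

If the allocation scheme cannot be changed, sending row by row (as you are doing now) or building a derived datatype from the actual row addresses are the usual fallbacks.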
Thanks,
That's what I wanted to know. And thanks for all the help!
Luke
On Wed, Oct 28, 2009 at 9:06 PM, Ralph Castain wrote:
> I see. No, we don't copy your envars and ship them to remote nodes. Simple
> reason is that we don't know which ones we can safely move, and which would
> cause problems.
I see. No, we don't copy your envars and ship them to remote nodes. Simple
reason is that we don't know which ones we can safely move, and which would
cause problems.
However, we do provide a mechanism for you to tell us which envars to move.
Just add:
-x LD_LIBRARY_PATH
to your mpirun cmd line
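For example (the process count and executable name here are just placeholders):

mpirun -x LD_LIBRARY_PATH -np 4 ./my_mpi_app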
On Oct 28, 2009, at 7:25 PM, nam kim wrote:
> Does that make sense?
It makes sense.
If you're unable to upgrade to OFED, you can probably just try those
cards with that stack and see what happens. It will *likely* work,
but that's somewhat of a guess.
Like I mentioned earlier, OMPI d
On Wed, Oct 28, 2009 at 1:09 PM, Jeff Squyres wrote:
> On Oct 28, 2009, at 1:08 PM, nam kim wrote:
>
>> Head node and other computing nodes have topspin-ib-rhel4-3.2.0-118
>> installed with CISCO IB card (HCA-320-A1).
>>
>
> Is there a reason you're not using OFED? OFED is *much* more modern and
> has many more features than the old Cisco/Topspin IB driver stack.
My apologies for not being clear. These variables are set in my
environment; they just are not published to the other nodes in the
cluster when the jobs are run through the scheduler. At the moment,
even though I can use mpirun to run jobs locally on the head node
without touching my environment, jobs that go through the scheduler
cannot find the right libraries.
Normally, one simply sets LD_LIBRARY_PATH in your environment to
point to the right place. Alternatively, you could configure OMPI with
--enable-mpirun-prefix-by-default
This tells OMPI to automatically add the prefix the system was configured
with to your LD_LIBRARY_PATH and PATH envars.
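For example, a build configured roughly like this (the install prefix is only an illustration) will have mpirun set those envars on the remote nodes automatically:

./configure --prefix=/opt/openmpi-1.3.3 --enable-mpirun-prefix-by-default
make all install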
Thanks for the quick reply. This leads me to another issue I have
been having with Open MPI as it relates to SGE. The "tight
integration" works, in that I do not have to give mpirun a hostfile when
I use the scheduler, but it does not seem to be passing on my
environment variables. Specifically, because LD_LIBRARY_PATH is not
passed along, the jobs cannot find the right libraries on the compute nodes.
On Oct 28, 2009, at 1:08 PM, nam kim wrote:
Head node and other computing nodes have topspin-ib-rhel4-3.2.0-118
installed with CISCO IB card (HCA-320-A1).
Is there a reason you're not using OFED? OFED is *much* more modern
and has many more features than the old Cisco/Topspin IB driver
stack.
I'm afraid we have never really supported this kind of nested invocation of
mpirun. If it works with any version of OMPI, it is totally a fluke - it
might work one time, and then fail the next.
The problem is that we pass envars to the launched processes to control
their behavior, and these conflict with the ones the inner mpirun tries to
set for the processes it launches.
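(As a quick illustration of what gets passed, putting something like the line below at the top of the launched script shows the control variables injected by the outer mpirun; they all carry an OMPI_ prefix, and the inner mpirun inherits them.)

env | grep '^OMPI_'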
Hello,
I am having trouble with a script that calls MPI. Basically my
problem distills to wanting to call a script with:
mpirun -np # ./script.sh
where script.sh looks like:
#!/bin/bash
mpirun -np 2 ./mpiprogram
Whenever I invoke script.sh normally (as ./script.sh for instance) it
works fine, but when I launch it through the outer mpirun it does not.
An "Internal compiler error" indicates a bug in Intel Fortran (a segfault
in this case), and not in anything the compiler is trying to build- if the
code you're building has an error, the compiler should properly print out
an error statement.
You should forward this along to Intel.
Hi all,
The compilation of a Fortran application (CPMD-3.13.2) with OpenMP +
OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a
Rocks-5.1 Linux cluster:
/lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8
-DLINUX_IFC -DPARALLEL -DMYRINET ./potfor
Jeff,
Thank you for your reply!
A further question:
Head node and other computing nodes have topspin-ib-rhel4-3.2.0-118
installed with CISCO IB card (HCA-320-A1).
Our new nodes have Mellanox IB cards (MHRH19-XTC). My question is: how do
I compile Open MPI with heterogeneous IB cards?
I used to compile
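For what it's worth, a typical OFED-based build is configured along these lines (the prefix and flag values are illustrative, not taken from this thread); a single such build normally drives both HCA types through the same openib support, which matches the advice above to move to OFED:

./configure --prefix=/opt/openmpi-1.3.3 --with-openib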
Hello,
I have managed to checkpoint a simple program without SGE. Now I'm
trying to do the Open MPI + SGE integration, but I have some problems.
When I try to checkpoint the mpirun PID, I get an error similar to
the one I get when the PID doesn't exist. See the example below.
Any ideas?