Hi,
I'm trying to run a code using OpenMPI and I'm getting this error:
ADIOI_GEN_DELETE (line 22): **io No such file or directory
I don't know why this occurs; I only know it happens when I use more
than one process.
The code can be found at: http://pastebin.com/m149a1302
--
Davi Vercillo C
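For context, ROMIO's ADIOI_GEN_DELETE path can report "No such file or
directory" when several ranks race to delete or recreate the same file.
A minimal sketch of the usual workaround, using a placeholder filename
rather than anything from the pastebin link, is to let a single rank do
the delete:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Only rank 0 removes any stale output file ... */
    if (rank == 0)
        MPI_File_delete("output.dat", MPI_INFO_NULL);
    /* ... and everyone waits until that has happened. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* All ranks then create/open the file collectively. */
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}

Restricting the delete to one rank (or checking whether the file exists
first) avoids every process racing on the same path, which would match
the symptom of the error only appearing with more than one process.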
SLIM H.A. wrote:
I have built the release candidate for ga-4.1 with OpenMPI 1.2.3 and
Portland compilers 7.0.2 for Myrinet MX.
Which version of ARMCI and MX?
ARMCI configured for 3 cluster nodes. Network protocol is 'MPI-SPAWN'.
0:Segmentation Violation error, status=: 11
0:ARMCI DASSERT fai
On Oct 19, 2008, at 7:05 PM, Wen Hao Wang wrote:
I have a cluster without an Internet connection. I want to test
OpenMPI functions on it. It seems MTT cannot be used. Do I have any
other choice for the testing?
You can always run tests manually. MTT is simply our harness for
automated testing.
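For what it's worth, a manual test only needs something compiled with
mpicc and launched with mpirun by hand on the disconnected cluster. A
minimal sanity check along these lines (a generic sketch, not part of
any MTT suite):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* Each process reports where it landed, confirming the launch
       and the basic runtime wiring across the cluster nodes. */
    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Build it with mpicc and start it across a few nodes with mpirun to
confirm processes land where expected.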
On Oct 21, 2008, at 9:14 AM, SLIM H.A. wrote:
I have built the release candidate for ga-4.1 with OpenMPI 1.2.3 and
Portland compilers 7.0.2 for Myrinet MX.
Running test.x on 3 Myrinet nodes, each with 4 cores, I get the
following error messages:
warning:regcache incompatible with malloc
lib
Also check the FAQ on how to use the wrapper compilers -- there are
ways to override at compile time, but be warned that it's not always
what you want. As Terry indicates, you probably want to have multiple
OMPI installations -- one for each compiler.
In particular, there are problems with
We could - though it isn't clear that it really accomplishes anything.
I believe some of the suggestions on this thread have forgotten about
singletons. If the code calls MPI_Init, we treat that as a singleton
and immediately set all the MPI environmental params - which means the
proc's env
Hi All,
(Sorry if you already got this message before, but since I didn't get
any answer, I'm assuming it didn't get through to the list.)
I am trying to install OpenMPI in Cygwin. From a Cygwin bash shell, I
configured OpenMPI with the command below:
$ echo $MPI_HOME
/home/seabra/local/openmpi-1
I wonder if it would be useful to have an OMPI-specific extension for
this kind of functionality, perhaps OMPI_Was_launched_by_mpirun() (or
something with a better name, etc.)...?
This would be a pretty easy function for us to provide (right
Ralph?). My question is -- would this (and perha
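Until something like that exists, a rough sketch of what such a check
might look like from user code is to peek at an environment variable
that mpirun exports before MPI_Init is ever called. The variable name
below is an assumption and varies between Open MPI versions, so treat
this purely as illustration rather than a supported API:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the proposed OMPI_Was_launched_by_mpirun():
   report whether an mpirun-provided environment variable is present.
   The exact variable name is version-dependent and assumed here. */
static int was_launched_by_mpirun(void)
{
    return getenv("OMPI_COMM_WORLD_SIZE") != NULL;
}

int main(void)
{
    if (was_launched_by_mpirun())
        printf("looks like an mpirun launch\n");
    else
        printf("looks like a plain (serial or singleton) launch\n");
    return 0;
}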
On 22.10.2008, at 10:30, Jed Brown wrote:
On Wed 2008-10-22 00:40, Reuti wrote:
>
> Okay, now I see. Why not just call MPI_Comm_size(MPI_COMM_WORLD,
> &nprocs)? When nprocs is 1, it's a serial run. It can also be executed
> when not running within mpirun AFAICS.
This is absolutely NOT okay. You cannot call any MPI functions before
MPI_Init.
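To make the ordering concrete, here is a small sketch: MPI_Comm_size is
only valid between MPI_Init and MPI_Finalize, and even then nprocs == 1
cannot distinguish "mpirun -np 1 ./a.out" from "./a.out" started
without mpirun (a singleton):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int nprocs;

    MPI_Init(&argc, &argv);                 /* must come first */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* only legal after MPI_Init */

    if (nprocs == 1)
        printf("single process (serial-style run)\n");
    else
        printf("parallel run with %d processes\n", nprocs);

    MPI_Finalize();
    return 0;
}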
Using 2 HCAs on the same PCI-Express bus (as well as 2 ports from the
same HCA) will not improve performance; PCI-Express is the bottleneck.
On Mon, Oct 20, 2008 at 2:28 AM, Mostyn Lewis wrote:
> Well, here's what I see with the IMB PingPong test using two ConnectX DDR
> cards
> in each of 2 machines.
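As a rough back-of-envelope, assuming the nominal rates for IB DDR 4x
links and the PCIe 1.1 x8 slots common on ConnectX DDR systems of that
generation (actual effective throughput is lower after packet overhead):

  IB DDR 4x per HCA : 4 lanes x 5 Gbit/s x 8/10 = 16 Gbit/s, about 2 GB/s per direction
  PCIe 1.1 x8 slot  : 8 lanes x 2.5 GT/s x 8/10 = 16 Gbit/s, about 2 GB/s per direction

So a single DDR HCA can already fill an x8 slot, and a second HCA or a
second port behind the same PCIe path adds no usable bandwidth.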