Hi All,
just to expand on this guess ...
On Thu, Dec 02, 2010 at 05:40:53PM -0500, Gus Correa wrote:
> Hi All
>
> I wonder if configuring OpenMPI while
> forcing the default types to non-default values
> (-fdefault-integer-8 -fdefault-real-8) might have
> something to do with the segmentation fault.
Hi All
I wonder if configuring OpenMPI while
forcing the default types to non-default values
(-fdefault-integer-8 -fdefault-real-8) might have
something to do with the segmentation fault.
Would this be effective, i.e., actually make the
sizes of MPI_INTEGER/MPI_INT and MPI_REAL/MPI_FLOAT bigger?
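One quick way to check whether such flags actually took effect is to ask the
library itself. Below is a minimal, untested sketch (assuming an Open MPI build
that provides the Fortran "use mpi" module; compile it with the same flags as
your application):

    program check_default_sizes
      use mpi
      implicit none
      integer :: ierr, isize, rsize
      call MPI_INIT(ierr)
      ! MPI_TYPE_SIZE reports how many bytes this MPI library associates
      ! with a given datatype handle.
      call MPI_TYPE_SIZE(MPI_INTEGER, isize, ierr)
      call MPI_TYPE_SIZE(MPI_REAL, rsize, ierr)
      print *, 'MPI_INTEGER is', isize, 'bytes; MPI_REAL is', rsize, 'bytes'
      call MPI_FINALIZE(ierr)
    end program check_default_sizes

If Open MPI was configured with -fdefault-integer-8 -fdefault-real-8 in its
Fortran flags, this should report 8 and 8; if it reports 4 and 4 while the
application is built with those flags, the default kinds disagree, which is
exactly the sort of mismatch that can end in a segmentation fault.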
http://www.open-mpi.org/faq/?category=running#oversubscribing
On 12/03/2010 06:25 AM, Price, Brian M (N-KCI) wrote:
Additional testing seems to show that the problem is related to barriers and
how often they poll to determine whether or not it's time to leave. Is there
some MCA parameter or environment variable that allows me to control the
frequency of polling while in barriers?
Do you get a corefile?
It looks like you're calling MPI_RECV in Fortran and then it segv's. This is
*likely* because you're either passing a bad parameter or your buffer isn't big
enough. Can you double check all your parameters?
Unfortunately, there are no line numbers printed in the stack trace.
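For reference, here is a minimal, hypothetical sketch of a matching send/receive
pair showing the arguments worth double-checking: the count is in elements of
the datatype (not bytes) and must be at least the length of the incoming
message, the datatype must match the buffer's declared type, and the status
argument must be an integer array of size MPI_STATUS_SIZE (passing a scalar
there is a common cause of exactly this kind of crash):

    program recv_check
      use mpi
      implicit none
      integer, parameter :: NMAX = 1000      ! capacity of buf, in elements
      double precision :: buf(NMAX)
      integer :: rank, ierr
      integer :: status(MPI_STATUS_SIZE)     ! must be an array, not a scalar
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) then
        buf = 1.0d0
        call MPI_SEND(buf, NMAX, MPI_DOUBLE_PRECISION, 1, 99, MPI_COMM_WORLD, ierr)
      else if (rank == 1) then
        ! count (NMAX) and datatype must be consistent with buf and with
        ! what rank 0 actually sends.
        call MPI_RECV(buf, NMAX, MPI_DOUBLE_PRECISION, 0, 99, MPI_COMM_WORLD, &
                      status, ierr)
      end if
      call MPI_FINALIZE(ierr)
    end program recv_check

Run it with at least two processes, e.g. "mpirun -np 2 ./recv_check" (the
program name is just a placeholder).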
Additional testing seems to show that the problem is related to barriers and
how often they poll to determine whether or not it's time to leave. Is there
some MCA parameter or environment variable that allows me to control the
frequency of polling while in barriers?
Thanks,
Brian Price
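One knob that may be relevant here is the mpi_yield_when_idle MCA parameter
covered in the oversubscription FAQ linked earlier
(http://www.open-mpi.org/faq/?category=running#oversubscribing). It does not
change how often the barrier polls, but it makes the polling loop yield the
processor, e.g. something like "mpirun --mca mpi_yield_when_idle 1 -np 4
./my_app" (the application name here is only a placeholder), which often
relieves the same symptom when nodes are oversubscribed.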
Hi Jeff
I am glad this question was asked.
Thanks to whoever did it.
Acronyms are always a pain, particularly if you don't know them,
and they are in no dictionary.
OFUD, OFED, OPENIB, MCA, BTL, SM, OOB, ... the list goes on and on.
Your answer makes a great start for another FAQ entry,
called,
On Dec 2, 2010, at 3:59 AM, 阚圣哲 wrote:
> When I use openmpi mpirun --mca btl , I find arg1 can be ofud, self,
> sm, openib, but www.open-mpi.org doesn't explain those args.
"BTL" stands for "byte transfer layer" -- is the lowest networking software
layer for the "ob1" MPI transport in Open MPI
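As an illustrative example, a command line such as
"mpirun --mca btl self,sm,openib -np 4 ./my_app" (the application name is a
placeholder) restricts Open MPI to the self (loopback), sm (shared memory),
and openib (OpenFabrics verbs) BTLs; ofud is a separate, largely experimental
OpenFabrics unreliable-datagram BTL that most users do not need to select.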
Hi,
I am using DRAGON, a neutronic simulation code in FORTRAN77 that has its own
data structures. I added a module to send these data structures via
MPI_SEND / MPI_RECEIVE, and everything worked perfectly for a while.
Then I had to raise the number of data structures to be sent up to a point
Hi Hicham,
I'm afraid that I was wrong in my last email. The trunk doesn't have this
problem; it only affects the 1.4 branch. I'll make a ticket to fix it. Thanks
a lot.
Regards,
Shiqing
On 2010-12-1 11:16 PM, Hicham Mouline wrote:
-----Original Message-----
From: Shiqing Fan [mailto:f...@hlrs.de]
Hi Hicham,
Yes, everything you expected is already in the trunk; all build types
share the same bin, include, lib, and share directories. You can check it out
and give it a test.
Regards,
Shiqing
On 2010-12-1 11:25 PM, Hicham Mouline wrote:
Hi,
Following the instructions from Readme.windows, I've used CMake
Hi,
When I use openmpi mpirun --mca btl , I find arg1 can be ofud, self,
sm, openib, but www.open-mpi.org doesn't explain those args. I can't understand
the meaning of "ofud" or what the difference is between "ofud" and "openib".
I also can't understand the difference between "ibcm" and "rdmacm", when I use