Vaz, Guilherme wrote:
Gus,
Thanks for your email. Some more explanation then:
1) We have made this estimation of memory already in the past.
My code needs roughly 2.5 GB of RAM per million cells (n Mcells => 2.5n GB RAM), so for 1.2 Mcells we need 3 GB of RAM.
The problem occurs on one PC with 12 GB of RAM and 4 cores, so 12 GB of RAM is enough
Jeff Squyres wrote:
On Dec 16, 2010, at 5:14 AM, Mathieu Gontier wrote:
We have run some tests and the option btl_sm_eager_limit has a positive effect on performance. Eugene, thank you for your links.
Good!
Just be aware of the tradeoff you're making: space for time.
Gus,
Thanks for your email. Some more explanation then:
1) We have made this estimation of memory already in the past. My code needs
roughly 2.5 GB of RAM per million cells (n Mcells => 2.5n GB RAM), so for 1.2 Mcells
we need 3 GB of RAM. The problem occurs on one PC with 12 GB of RAM and 4 cores,
so 12 GB of RAM is enough. So far (and in the o
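For anyone redoing that back-of-the-envelope estimate, here is a minimal shell sketch; the 2.5 GB-per-million-cells figure comes from the message above, while the use of bc and free is only an illustration:

   # Rough memory estimate: ~2.5 GB of RAM per million cells (figure quoted above).
   CELLS_M=1.2                      # problem size in millions of cells (example)
   echo "$CELLS_M * 2.5" | bc -l    # => 3.0  (GB of RAM required in total)
   free -g                          # RAM actually available on the node, for comparison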
Vaz, Guilherme wrote:
Ok, ok. It is indeed a CFD program, and Gus got it right. Number of cells per
core means memory per core (sorry for the inaccuracy).
My PC has 12GB of RAM.
Can you do one of those typical engineering calculations, a back-of-the-envelope
estimate of how much memory your p
On 12/16/2010 08:34 AM, Jeff Squyres wrote:
> Additionally, since MPI-3 is updating the semantics of the one-sided
> stuff, it might be worth waiting for all those clarifications before
> venturing into the MPI one-sided realm. One-sided semantics are much
> more subtle and complex than two-sided
Open MPI uses RDMA under the covers for send/receive when it makes sense. See
these FAQ entries for more details:
http://www.open-mpi.org/faq/?category=openfabrics#large-message-tuning-1.2
http://www.open-mpi.org/faq/?category=openfabrics#large-message-tuning-1.3
http://www.open-mpi.
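Not from the original message, but one hedged way to see the tuning parameters those FAQ entries discuss is to ask ompi_info; exact parameter names differ between Open MPI releases:

   # List the MCA parameters of the openib BTL (eager limit, send sizes, RDMA thresholds, ...)
   ompi_info --param btl openib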
I found a presentation on the web that showed significant performance
benefits for one-sided communication; I presumed it was the hardware
RDMA support that the one-sided calls could take advantage of. But I gather
from your question that this is not necessarily the case. Are you aware of
cas
On Dec 16, 2010, at 5:14 AM, Mathieu Gontier wrote:
> We have run some tests and the option btl_sm_eager_limit has a positive
> effect on performance. Eugene, thank you for your links.
Good!
Just be aware of the tradeoff you're making: space for time.
> Now, to offer a good support t
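As a hedged illustration of the space-for-time trade-off being discussed, this is roughly how such a parameter is raised for a single run; the value and the executable name are placeholders:

   # Larger eager limit: more shared-memory buffer space per connection, fewer rendezvous round-trips.
   mpirun --mca btl_sm_eager_limit 40960 -np 4 ./my_solver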
On Dec 16, 2010, at 3:29 AM, Gilbert Grosdidier wrote:
>> Does this problem *always* happen, or does it only happen once in a great
>> while?
>>
> gg= No, this problem happens rather often, almost every other time.
> Seems to happen more often as the number of cores increases.
Well, that's a bummer.
Hmm. I thought we had squashed all VT OMP issues. Bummer.
Can you send all the information located here:
http://www.open-mpi.org/community/help/
If you're not using VampirTrace, you can disable VampirTrace with --disable-vt.
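A minimal sketch of that workaround, assuming a build from the extracted source tree (the install prefix is only an example):

   # Configure Open MPI without the VampirTrace contrib package, avoiding the omp.h dependency.
   ./configure --prefix=/opt/openmpi-1.5.1 --disable-vt
   make all install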
On Dec 16, 2010, at 6:14 AM, Bernard Secher - SFME/LGLS wrote:
Have you run your application through a debugger, or examined the core files to
see where exactly the segv is occurring? That may shed some light on what
the exact problem is.
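A sketch of that suggestion, with placeholder names; the core file's name and location depend on the system's core-dump settings:

   ulimit -c unlimited          # allow core dumps in this shell
   mpirun -np 4 ./my_solver     # reproduce the segv
   gdb ./my_solver core         # open the resulting core file (name may vary, e.g. core.<pid>)
   # inside gdb, 'bt' prints the backtrace at the point of the crash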
On Dec 16, 2010, at 4:20 AM, Vaz, Guilherme wrote:
> Ok, ok. It is indeed a CFD program, and Gus got it right. Num
Thanks Jody,
Is it possible to install Open MPI without OpenMP? Is there any option
in configure for that?
Bernard
jody wrote:
Hi
if I remember correctly, "omp.h" is a header file for OpenMP, which is
not the same as Open MPI.
So it looks like you have to install OpenMP,
Then you can co
Hi
if I remember correctly, "omp.h" is a header file for OpenMP, which is
not the same as Open MPI.
So it looks like you have to install OpenMP support; then you can compile it
with the compiler option -fopenmp (in gcc).
Jody
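For illustration only (an assumption, not a verified fix): with gcc, OpenMP support and omp.h ship with the compiler itself and are enabled by a flag, which could also be handed to Open MPI's configure; the test-file name is a placeholder:

   # Stand-alone check that the compiler provides OpenMP / omp.h:
   gcc -fopenmp -o omp_test omp_test.c
   # Untested assumption: pass the flag to the Open MPI build so the VT tools find omp.h.
   ./configure CFLAGS=-fopenmp CXXFLAGS=-fopenmp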
On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS wrote:
> I get t
I get the following error message when I compile Open MPI v1.5.1:
  CXX    otfprofile-otfprofile.o
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:11:18:
error: omp.h: No such file or directory
../../../../../../../../../openmp
Does the env. var. work to override it:
export OMPI_MCA_btl_sm_eager_limit=40960
In that case, I can deal with it.
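For reference, a sketch of the two equivalent ways of setting it; in Open MPI, a value given on the mpirun command line takes precedence over the OMPI_MCA_* environment variable (the executable name is a placeholder):

   # Environment-variable form, picked up by mpirun and the launched processes:
   export OMPI_MCA_btl_sm_eager_limit=40960
   mpirun -np 4 ./my_solver

   # Command-line form, which overrides the environment variable if both are set:
   mpirun --mca btl_sm_eager_limit 40960 -np 4 ./my_solver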
On 12/16/2010 11:14 AM, Mathieu Gontier wrote:
Hi all,
We have run some tests and the option btl_sm_eager_limit has a
positive effect on performance. Eugene, thank you
Hi all,
We have run some tests and the option btl_sm_eager_limit has a positive
effect on performance. Eugene, thank you for your links.
Now, to offer good support to our users, we would like to get the
value of this parameter at runtime. I am aware I can have the value
runn
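One hedged way to check the value from outside the application is ompi_info; note that it reports what ompi_info itself sees (defaults plus environment and parameter files), not necessarily what an already-running job was started with:

   # Show the current value of the shared-memory eager limit as ompi_info sees it.
   ompi_info --param btl sm | grep btl_sm_eager_limit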
Ok, ok. It is indeed a CFD program, and Gus got it right. Number of cells per
core means memory per core (sorry for the inaccuracy).
My PC has 12 GB of RAM. And the same calculation runs fine on an old Ubuntu 8.04
32-bit machine with 4 GB of RAM.
What I find strange is that the same problem runs with 1 core (
Hello Jeff,
On 16/12/2010 01:40, Jeff Squyres wrote:
On Dec 15, 2010, at 3:24 PM, Ralph Castain wrote:
I am not using the TCP BTL, only the openib one. Does this change the number of
sockets in use per node, please?
I believe the openib btl opens sockets for connection purposes, so the cou