On a blade, you can easily bind an application to run
on CPU 0 (resp. 1) while using the memory banks local to CPU 0 (resp. 1) with:
numactl --cpubind=0 --membind=0 app ...
(resp. numactl --cpubind=1 --membind=1 app ...)
Hope this helps, Gilbert.
On Mon, 27 Oct 2008, Lenny Verkhovsky wrote:
Yes it is: REAL(kind=16) = REAL*16 = a 16-byte REAL in Fortran, or a
long double in C. That is why I thought MPI_REAL16 should work.
On Mon, 27 Oct 2008, Jeff Squyres wrote:
I dabble in Fortran but am not an expert -- is REAL(kind=16) the same as
REAL*16? MPI_REAL16 should be a 16-byte REAL; I'm
Thanks for your suggestions.
I tried them all (declaring my variables as REAL*16 or REAL(16)) to no
avail. I still get the wrong answer with my call to MPI_ALLREDUCE.
I think the KINDs are compiler dependent. For Sun Studio Fortran, REAL*16
and REAL(16) are the same thing. For Intel, maybe i
Sorry, forgot to mention that running your sample program with ifort
produces the expected result:
8 16 16 16
Thanks for your suggestions.
I tried them all (declaring my variables as REAL*16 or REAL(16)) to no avail.
I still get the wrong answer with my call to MPI_ALLREDUCE.
I think the
I assume you've confirmed that point-to-point communication works
happily with quad prec on your machine? How about one-way reductions?
On Tue, 2008-10-28 at 08:47, Julien Devriendt wrote:
> Thanks for your suggestions.
> I tried them all (declaring my variables as REAL*16 or REAL(16)) to
Yes, point-to-point communication is OK with quad prec., and one-way
reductions as well. I also tried my sample code on another platform
(which sports AMD Opterons instead of Intel CPUs) with the same compilers,
and I get the same *wrong* results with the call to MPI_ALLREDUCE in quad
prec, so it
It is complaining about a missing file. This is a file from the Open
MPI distribution; I wonder how it can be missing. Can you verify that
the file opal/mca/timer/windows/timer_windows_component.h is there?
Thanks,
george.
On Oct 27, 2008, at 4:52 PM, Jeff Squyres wrote:
Sorry for
Dear Open MPI developers,
I'm developing a parallel C++ application under Open MPI 1.2.5. At the
moment I'm using MPI exception handlers, but some processes return
the error below:
"MPI 2 C++ exception throwing is disabled, MPI::mpi_errno has the error code"
Why is this, and why only on some nodes?
On Tue, Oct 28, 2008 at 9:06 AM, George Bosilca wrote:
> It is complaining about a missing file. This is a file from the Open MPI
> distribution, I wonder how it can be missing. Can you verify that the file
> opal/mca/timer/windows/timer_windows_component.h is there?
No, it's not. But I see a
opa
Your question is quite timely -- we had a long discussion about C++
exceptions just last week at the MPI Forum... :-)
OMPI disables MPI throwing exceptions by default because it can cause
a [slight] performance penalty with some compilers. You can enable it
by adding --enable-cxx-exceptions when configuring Open MPI.
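For reference, a minimal sketch of what that looks like on the application
side, assuming a build configured with --enable-cxx-exceptions; the
out-of-range send is only there to provoke an error, and the exact error it
raises is up to the implementation:

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    // Requires an Open MPI build configured with --enable-cxx-exceptions.
    MPI::Init(argc, argv);

    // The default handler, MPI::ERRORS_ARE_FATAL, aborts instead of
    // throwing; switch it so errors surface as MPI::Exception.
    MPI::COMM_WORLD.Set_errhandler(MPI::ERRORS_THROW_EXCEPTIONS);

    try {
        // Deliberately send to a rank that does not exist.
        int value = 42;
        int bad_rank = MPI::COMM_WORLD.Get_size();  // one past the last valid rank
        MPI::COMM_WORLD.Send(&value, 1, MPI::INT, bad_rank, 0);
    } catch (MPI::Exception &e) {
        std::cerr << "caught MPI exception, code " << e.Get_error_code()
                  << ": " << e.Get_error_string() << std::endl;
    }

    MPI::Finalize();
    return 0;
}

Without exception support compiled in, the same program falls back to the
"exception throwing is disabled" message quoted earlier and leaves the error
code in MPI::mpi_errno.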
Very clear reply,
thanks Jeff :)
2008/10/28 Jeff Squyres :
> Your question is quite timely -- we had a long discussion about C++
> exceptions just last week at the MPI Forum... :-)
>
> OMPI disables MPI throwing exceptions by default because it can cause a
> [slight] performance penalty in some c
Jeff,
another question: how can I check if MPI exceptions are enabled?
2008/10/28 Jeff Squyres :
> Your question is quite timely -- we had a long discussion about C++
> exceptions just last week at the MPI Forum... :-)
>
> OMPI disables MPI throwing exceptions by default because it can cause a
>
On Oct 28, 2008, at 11:19 AM, Gabriele Fatigati wrote:
another question: how can I check if MPI exceptions are enabled?
ompi_info | grep exceptions
Should tell ya.
--
Jeff Squyres
Cisco Systems
Something odd is definitely going on here. I'm able to replicate your
problem with the Intel compiler suite, but I can't quite figure out
why -- it all works properly if I convert the app to C (and still use
the MPI_REAL16 datatype with long double data).
George and I are investigating; I'
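For reference, a minimal sketch of the C-style test described above,
assuming the MPI library exposes MPI_REAL16 from C and maps it onto a
16-byte long double (both of which are implementation dependent):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long double local = static_cast<long double>(rank + 1);  // 1, 2, ..., size
    long double sum   = 0.0L;

    // The reduction under discussion: sum one quad-precision value per rank.
    // MPI_REAL16 is an optional datatype; not every MPI install supports it.
    MPI_Allreduce(&local, &sum, 1, MPI_REAL16, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << "allreduce sum = " << sum
                  << ", expected " << size * (size + 1) / 2 << std::endl;

    MPI_Finalize();
    return 0;
}

This mirrors the test Jeff describes: the same MPI_REAL16 reduction, but
driven from C with long double storage instead of Fortran REAL*16.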