Hi,
On 01.02.2014, at 15:10, Jiri Kraus wrote:
> Sorry, but I don't know the details of the issue. Although the error reported
> by the Open MPI configure script is that pgc++ is not link-compatible with
> pgcc, the error in config.log is a compiler error, so I don't think this is a
> linking issue.
See Section 5.9.5 of MPI-3, or the section named "User-Defined
Reduction Operations" (presumably numbered differently) in older
copies of the MPI standard.
An older but still relevant online reference is
http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report/node107.htm
There is a proposal to suppor
On 02/01/2014 12:42 PM, Patrick Boehl wrote:
Hi all,
I have a question on datatypes in openmpi:
Is there an (easy?) way to use __float128 variables with openmpi?
Specifically, functions like
MPI_Allreduce
seem to give weird results with __float128.
Essentially all I found was
http://beige.ucs.indiana.edu/I590/node100.html
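Following the pointer to user-defined reduction operations above, here is a
minimal sketch (not code from this thread) of one way to sum __float128 values
with MPI_Allreduce: describe the type to MPI as an opaque 16-byte blob and
register a custom MPI_Op. It assumes a GCC-compatible compiler that provides
__float128 and a homogeneous cluster, since the MPI_BYTE-based type performs no
representation conversion; the names qsum, mpi_float128 and op_qsum are just
placeholders.

#include <stdio.h>
#include <mpi.h>

/* Element-wise sum of __float128 values; signature required by MPI_Op_create. */
static void qsum(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
    __float128 *in    = (__float128 *)invec;
    __float128 *inout = (__float128 *)inoutvec;
    for (int i = 0; i < *len; i++)
        inout[i] += in[i];
    (void)dtype;  /* unused */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Treat __float128 as 16 opaque bytes (no conversion between nodes). */
    MPI_Datatype mpi_float128;
    MPI_Type_contiguous(sizeof(__float128), MPI_BYTE, &mpi_float128);
    MPI_Type_commit(&mpi_float128);

    /* Register the commutative user-defined reduction. */
    MPI_Op op_qsum;
    MPI_Op_create(qsum, 1, &op_qsum);

    __float128 local = (__float128)(rank + 1), global = 0;
    MPI_Allreduce(&local, &global, 1, mpi_float128, op_qsum, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %g\n", (double)global);  /* cast down just for printing */

    MPI_Op_free(&op_qsum);
    MPI_Type_free(&mpi_float128);
    MPI_Finalize();
    return 0;
}

Compiled with something like mpicc and run under mpirun, rank 0 should print
the sum of rank+1 over all ranks.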
Thanks for the reply, Jeff. This points me in the right direction.
On 01-Feb-2014 7:51 am, "Jeff Squyres (jsquyres)" wrote:
> On Jan 31, 2014, at 2:49 AM, Siddhartha Jana wrote:
>
> > Sorry for the typo:
> > ** I was hoping to understand the impact of OpenMPI's implementation of
> > these protocols using traditional TCP.
Thanks!
I noted your comment on the ticket so that it doesn't get lost. I haven't had
a chance to look into this yet because we've been focusing on getting 1.7.4 out
the door, and this has been identified as a 1.7.5 fix.
On Jan 31, 2014, at 3:31 PM, Åke Sandgren wrote:
> On 01/28/2014 08:26
Hi Reuti,
Sorry, but I don't know the details of the issue. Although the error reported
by the Open MPI configure script is that pgc++ is not link-compatible with
pgcc, the error in config.log is a compiler error, so I don't think this is a
linking issue.
> If I understand it correctly, it should be a fe
Thank you all for your help. --bind-to-core increased the cluster
performance by approximately 10%, so in addition to the improvements
from implementing Open-MX, the performance now scales within
expectations - not linearly, but much better than with the original setup.
On 30 January 20
On Jan 31, 2014, at 2:49 AM, Siddhartha Jana wrote:
> Sorry for the typo:
> ** I was hoping to understand the impact of OpenMPI's implementation of
> these protocols using traditional TCP.
>
> This is the paper I was referring to:
> Woodall, et al., "High Performance RDMA Protocols in HPC".
>
Sorry for the massive delay in replying; I'm going through my inbox this
morning and finding old mails that I initially missed. :-\
More below.
On Jan 17, 2014, at 8:45 AM, Julien Bodart wrote:
> version: 1.6.5 (compiled with Intel compilers)
>
> command used:
> mpirun --machinefile mfile -