On Jan 29, 2014, at 7:56 PM, Victor wrote:
> Thanks for the insights Tim. I was aware that the CPUs will choke beyond a
> certain point. From memory on my machine this happens with 5 concurrent MPI
> jobs with that benchmark that I am using.
>
> My primary question was about scaling between the nodes.
Thanks for the insights Tim. I was aware that the CPUs will choke beyond a
certain point. From memory on my machine this happens with 5 concurrent MPI
jobs with that benchmark that I am using.
My primary question was about scaling between the nodes. I was not getting
close to double the performance.
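A between-node comparison like the one described here is typically driven with an Open MPI hostfile. A minimal sketch follows; the host names, slot counts, and the cavity3d argument are illustrative assumptions, not taken from Victor's actual setup:

```shell
# Hypothetical host names; slots match the node sizes mentioned
# elsewhere in this thread (12-core Xeon node, 4-core i5 node).
cat > hosts <<'EOF'
node1 slots=12
node2 slots=4
EOF
# Fill Node1 alone, then spill onto Node2, and compare wall times:
# mpirun -np 12 --hostfile hosts ./cavity3d 400
# mpirun -np 16 --hostfile hosts ./cavity3d 400
```

The mpirun lines are commented out above since they require a working Open MPI install and reachable hosts; uncomment them to run the comparison.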
On Jan 29, 2014, at 12:35 PM, Reuti wrote:
>> I don't know the difference between pgc++ and pgcpp, unfortunately.
>
> It's a matter of the ABI:
>
> http://www.pgroup.com/lit/articles/insider/v4n1a2.htm
>
> pgc++ uses the new ABI.
Must be more than that -- this is a compile issue, not a link issue.
Am 29.01.2014 um 18:24 schrieb Jeff Squyres (jsquyres):
> Oh, I'm sorry -- I mis-read your initial mail (I thought when you did use all
> the PGI compilers, it worked).
>
> I don't know the difference between pgc++ and pgcpp, unfortunately.
It's a matter of the ABI:
http://www.pgroup.com/lit/articles/insider/v4n1a2.htm
Oh, I'm sorry -- I mis-read your initial mail (I thought when you did use all
the PGI compilers, it worked).
I don't know the difference between pgc++ and pgcpp, unfortunately.
Do you have the latest version of your PGI compiler suite in that series?
On Jan 29, 2014, at 12:10 PM, Jiri Kraus wrote:
Hi Jeff,
thanks for taking a look. I don't want to mix compiler tool chains. I have just
double checked my configure line and I am passing
CXX=pgc++ CC=pgcc FC=pgfortran F77=pgfortran ...
so there are only PGI compilers used.
Thanks
Jiri
> Date: Wed, 29 Jan 2014 16:24:08 +
That sounds about right.
What's happening is that OMPI has learned a bunch about the C compiler before
it does this C++ link test. In your first case (presumably with gcc),
it determines that it needs _GNU_SOURCE set -- or some other test has caused
that to be set. Then it uses that when it runs the C++ link test.
Hi,
I am trying to compile OpenMPI 1.7.3 with pgc++ (14.1) as C++ compiler. During
configure it fails with
checking if C and C++ are link compatible... no
The error from config.log is:
configure:18205: checking if C and C++ are link compatible
configure:18230: pgcc -c -DNDEBUG -fast conftest_
On 1/29/2014 8:02 AM, Reuti wrote:
Quoting Victor :
Thanks for the reply Reuti,
There are two machines: Node1 with 12 physical cores (dual 6 core
Xeon) and
Do you have this CPU?
http://ark.intel.com/de/products/37109/Intel-Xeon-Processor-X5560-8M-Cache-2_80-GHz-6_40-GTs-Intel-QPI
-- Reuti
Sorry typo. I have dual X5660 not X5560.
http://ark.intel.com/products/47921/Intel-Xeon-Processor-X5660-12M-Cache-2_80-GHz-6_40-GTs-Intel-QPI?q=x5660
On 29 January 2014 21:02, Reuti wrote:
> Quoting Victor :
>
> Thanks for the reply Reuti,
>>
>> There are two machines: Node1 with 12 physical cores (dual 6 core Xeon) and
>> Node2 with 4 physical cores (i5-2400).
Quoting Victor :
Thanks for the reply Reuti,
There are two machines: Node1 with 12 physical cores (dual 6 core Xeon) and
Do you have this CPU?
http://ark.intel.com/de/products/37109/Intel-Xeon-Processor-X5560-8M-Cache-2_80-GHz-6_40-GTs-Intel-QPI
-- Reuti
Node2 with 4 physical cores (i5-2400).
Thanks for the reply Reuti,
There are two machines: Node1 with 12 physical cores (dual 6 core Xeon) and
Node2 with 4 physical cores (i5-2400).
Regarding scaling on the single 12 core node, no, it is also not linear. In
fact it is downright strange. I do not remember the numbers right now but
10 j
Am 29.01.2014 um 03:00 schrieb Victor:
> I am running a CFD simulation benchmark cavity3d available within
> http://www.palabos.org/images/palabos_releases/palabos-v1.4r1.tgz
>
> It is a parallel-friendly Lattice Boltzmann solver library.
>
> Palabos provides benchmark results for the cavity3d