On 7/12/2011 11:06 PM, Mohan, Ashwin wrote:
Tim,
Thanks for your message. However, I was not clear about your suggestions and would
appreciate it if you could clarify.
You say, "So, if you want a sane comparison but aren't willing to study the compiler
manuals, you might use (if your source code doe[...]".
"top", I notice that all 4 slots are active. I
noticed this when I did "top" with the Intel machine too, that is, it showed
four slots active.
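For reference, a quick way to cross-check what "top" shows is to have every rank report the host it is running on. A minimal sketch (an illustration only, not code from this thread):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        char name[MPI_MAX_PROCESSOR_NAME];
        int len = 0;
        MPI_Get_processor_name(name, &len);
        // Each rank prints where it landed; compare against the CPUs "top" shows busy.
        std::printf("rank %d running on %s\n", rank, name);
        MPI_Finalize();
        return 0;
    }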
Thank you..ashwin.
-----Original Message-----
From: users-boun...@open-mpi.org on behalf of Tim Prince
Sent: Tue 7/12/2011
On 7/12/2011 4:45 PM, Mohan, Ashwin wrote:
I noticed that the exact same code took 50% more time to run on OpenMPI
than Intel.
It would be good to know if that extra time is spent inside MPI calls or
not. There is a discussion of how you might do this here:
http://www.open-mpi.org/faq/?catego
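A crude way to get a first answer, before reaching for a real profiler, is to bracket the compute and communication phases with MPI_Wtime. A minimal sketch (illustration only; the phases here are placeholders for whatever the real code does):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        // ... compute phase of the real application goes here ...
        double t1 = MPI_Wtime();
        MPI_Barrier(MPI_COMM_WORLD);  // stand-in for the real communication phase
        double t2 = MPI_Wtime();

        if (rank == 0)
            std::printf("compute %.6f s, MPI %.6f s\n", t1 - t0, t2 - t1);
        MPI_Finalize();
        return 0;
    }

If the MPI share is similar on both installations, the difference is more likely in how the two compilers optimized the computation.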
Hi,
I noticed that the exact same code took 50% more time to run on OpenMPI
than Intel. I use the following syntax to compile and run:
Intel MPI Compiler: (Redhat Fedora Core release 3 (Heidelberg), Kernel
version: Linux 2.6.9-1.667smp x86_64)
mpiicpc -o <executable> <source>.cpp -lmpi
OpenMPI 1.4.3: ([...]
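For comparison, the usual compile-and-run steps on the two stacks look roughly like this (program name and process count are placeholders, not taken from this message):

    # Intel MPI
    mpiicpc -o app app.cpp
    mpirun -np 4 ./app

    # Open MPI 1.4.3
    mpicxx -o app app.cpp
    mpirun -np 4 ./app

Note that Open MPI's mpicxx is only a wrapper around whatever back-end compiler it was built with (often g++), so the optimization flags passed on each side matter a great deal for a fair timing comparison.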