You might also add the --display-allocation flag to mpirun so we can see what it
thinks the allocation looks like. If there are only 16 slots on the node, it
seems odd that OMPI would assign 32 procs to it unless it thinks there is only
1 node in the job, and oversubscription is allowed (which it
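The suggested diagnostic would look something like this (hostfile and application names are made up; shown as a dry run that just prints the command):

```shell
# Dry run: print the diagnostic invocation rather than executing it, since
# Open MPI may not be installed on this machine.  --display-allocation makes
# mpirun print the node/slot allocation it believes it has before launching.
cmd='mpirun --display-allocation -np 32 --hostfile myhosts ./app'
echo "$cmd"
```

If the output shows a single node with oversubscription allowed, that would explain 32 procs landing on 16 slots.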
Hi,
Am 09.11.2014 um 05:38 schrieb Ralph Castain:
> FWIW: during MPI_Init, each process “publishes” all of its interfaces. Each
> process receives a complete map of that info for every process in the job. So
> when the TCP btl sets itself up, it attempts to connect across -all- the
> interface
Am 10.11.2014 um 12:24 schrieb Reuti:
> Hi,
>
> Am 09.11.2014 um 05:38 schrieb Ralph Castain:
>
>> FWIW: during MPI_Init, each process “publishes” all of its interfaces. Each
>> process receives a complete map of that info for every process in the job.
>> So when the TCP btl sets itself up, it
I am sorry for the delay; I've been caught up in SC deadlines. :-(
I don't see anything blatantly wrong in this output.
Two things:
1. Can you try a nightly v1.8.4 snapshot tarball? This will check to see if
whatever the bug is has been fixed for the upcoming release:
http://www.open-mpi
Wow, that's pretty terrible! :(
Is the behavior BTL-specific, perchance? E.G., if you only use certain BTLs,
does the delay disappear?
FWIW: the use-all-IP interfaces approach has been in OMPI forever.
Sent from my phone. No type good.
> On Nov 10, 2014, at 6:42 AM, Reuti wrote:
>
>> Am
Am 10.11.2014 um 12:50 schrieb Jeff Squyres (jsquyres):
> Wow, that's pretty terrible! :(
>
> Is the behavior BTL-specific, perchance? E.G., if you only use certain BTLs,
> does the delay disappear?
You mean something like:
reuti@annemarie:~> date; mpiexec -mca btl self,tcp -n 4 --hostfile m
"Jeff Squyres (jsquyres)" writes:
> There were several commits; this was the first one:
>
> https://github.com/open-mpi/ompi/commit/d7eaca83fac0d9783d40cac17e71c2b090437a8c
I don't have time to follow this properly, but am I reading right that
that says mpi_sizeof will now _not_ work with gcc < 4.9
Hello again,
I have a piece of code, which worked fine on my PC, but on my notebook
MPI_Wtime and MPI_Wtick won't work with the -mno-sse flag specified.
MPI_Wtick will return 0 instead of 1e-6, and MPI_Wtime will also always
return 0. clock() works in all cases.
The code is:
#include
#inclu
On some platforms, the MPI_Wtime function essentially uses gettimeofday() under
the covers.
See this stackoverflow question about -mno-sse:
http://stackoverflow.com/questions/3687845/error-with-mno-sse-flag-and-gettimeofday-in-c
On Nov 10, 2014, at 8:35 AM, maxinator333 wrote:
> Hello
Hello,
use RDTSC (or RDTSCP) to read TSC directly
Kind regards,
Alex Granovsky
-----Original Message-----
From: maxinator333
Sent: Monday, November 10, 2014 4:35 PM
To: us...@open-mpi.org
Subject: [OMPI users] MPI_Wtime not working with -mno-sse flag
Hello again,
I have a piece of code, w
Hi,
I'm stumbling on a problem related to the openib btl in
openmpi-1.[78].*, and the (I think legitimate) use of file-backed
mmaped areas for receiving data through MPI collective calls.
A test case is attached. I've tried to make it reasonably small,
although I recognize that it's not extra thi
Thank you Jeff, I'll try this and let you know.
Saliya
On Nov 10, 2014 6:42 AM, "Jeff Squyres (jsquyres)"
wrote:
> I am sorry for the delay; I've been caught up in SC deadlines. :-(
>
> I don't see anything blatantly wrong in this output.
>
> Two things:
>
> 1. Can you try a nightly v1.8.4 sna
Just really quick off the top of my head, mmaping relies on the virtual
memory subsystem, whereas IB RDMA operations rely on physical memory being
pinned (unswappable.) For a large message transfer, the OpenIB BTL will
register the user buffer, which will pin the pages and make them
unswappable. If
That is indeed bizarre - we haven’t heard of anything similar from other users.
What is your network configuration? If you use oob_tcp_if_include or exclude,
can you resolve the problem?
> On Nov 10, 2014, at 4:50 AM, Reuti wrote:
>
> Am 10.11.2014 um 12:50 schrieb Jeff Squyres (jsquyres):
>
On Nov 10, 2014, at 8:27 AM, Dave Love wrote:
>> https://github.com/open-mpi/ompi/commit/d7eaca83fac0d9783d40cac17e71c2b090437a8c
>
> I don't have time to follow this properly, but am I reading right that
> that says mpi_sizeof will now _not_ work with gcc < 4.9, i.e. the system
> compiler of th
Thanks for your answer.
On Mon, Nov 10, 2014 at 4:31 PM, Joshua Ladd wrote:
> Just really quick off the top of my head, mmaping relies on the virtual
> memory subsystem, whereas IB RDMA operations rely on physical memory being
> pinned (unswappable.)
Yes. Does that mean that the result of comput
Hi,
Am 10.11.2014 um 16:39 schrieb Ralph Castain:
> That is indeed bizarre - we haven’t heard of anything similar from other
> users. What is your network configuration? If you use oob_tcp_if_include or
> exclude, can you resolve the problem?
Thx - this option helped to get it working.
These
Hi,
IIRC there were some bug fixes between 1.8.1 and 1.8.2 in order to really
use all the published interfaces.
By any chance, are you running a firewall on your head node?
One possible explanation is that the compute node tries to access the public
interface of the head node, and the packets get dropped.
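If that is the case, one common workaround is to pin the OOB and TCP BTL to an interface both nodes can reach (interface name below is made up; shown as a dry run that just prints the command):

```shell
# Dry run: restrict Open MPI's out-of-band and TCP BTL traffic to one
# interface so nothing is routed via a firewalled public interface.
cmd='mpirun --mca oob_tcp_if_include eth1 --mca btl_tcp_if_include eth1 -np 4 ./app'
echo "$cmd"
```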
I am implementing a hub/servers MPI application. Each of the servers can
get tied up waiting for some data, then they do an MPI Send to the hub.
It is relatively simple for me to have the hub waiting around doing a
Recv from ANY_SOURCE. The hub can get busy working with the data. What
I'm worri
Another thing you can do is (a) ensure you built with --enable-debug, and then
(b) run it with -mca oob_base_verbose 100 (without the tcp_if_include option)
so we can watch the connection handshake and see what it is doing. The
--hetero-nodes option will have no effect here and can be ignored.
Ralph