For the web archives...
Brock and I talked about this in person at SC. The conversation was much more
involved than this seemingly simple question implied. :-)
The short version is:
- numactl does both memory and processor binding
- hwloc is the new numactl :-)
- e.g., see the hwloc-bind(1) man page
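For anyone finding this in the archives: binding can be done either from the
command line (e.g., "hwloc-bind core:0 -- ./myprog") or programmatically. Below
is a minimal, illustrative sketch of the hwloc C API binding the calling thread
to a core; error checking is omitted and the choice of core 0 is arbitrary:

    #include <hwloc.h>

    int main(void) {
        hwloc_topology_t topo;

        /* Discover the machine topology. */
        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        /* Pick the first core and bind the calling thread to it
           (an arbitrary choice, for illustration only). */
        hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
        if (core != NULL)
            hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_THREAD);

        hwloc_topology_destroy(topo);
        return 0;
    }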
Hi,
I have placed the source in \Program Files\openmpi-1.5.4,
the build dir in \Program Files\openmpi.build,
and the install dir in \Program Files\openmpi.
I could not find config.log in any of the three directories, nor in the
directory from which I run mpirun.
The build log attached is a zip of all th…
On Mon, 21 Nov 2011, Mudassar Majeed wrote:
Thank you for your answer. Actually, I used the term UDP to show
non-connection-oriented messaging. TCP creates a connection between the two
parties who communicate, but in UDP a message can be sent to any IP/port where
a process/thread is listening, and if the process is busy doing something…
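The MPI analogue of this "connectionless" model can be shown with a small
sketch (illustrative only, not from the original thread): any rank can send to
any other rank without a connect step, and a receiver can accept from any
sender via MPI_ANY_SOURCE:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (rank != 0) {
            /* Any rank may message rank 0 directly; no connect() step. */
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else {
            /* Like a socket bound to a well-known port, rank 0 accepts
               messages from whichever sender arrives first. */
            int who, i;
            for (i = 1; i < size; i++) {
                MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("got a message from rank %d\n", who);
            }
        }
        MPI_Finalize();
        return 0;
    }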
MPI defines only reliable communications -- it's not quite the same thing as
UDP. Hence, if you send something, it is guaranteed to be received. UDP may
drop packets whenever it feels like it (e.g., when it is out of resources).
Most MPI implementations will do some form of buffering…
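To make the buffering point concrete, here is an illustrative sketch (my
example, not from the original message) using MPI's explicit buffered-send
interface; real implementations buffer small "eager" sends internally even
with plain MPI_Send:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, payload = 42;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Attach an explicit send buffer; MPI_Bsend completes as soon as
           the message is copied into it, yet delivery is still
           guaranteed -- unlike UDP, nothing is silently dropped. */
        int bufsize = MPI_BSEND_OVERHEAD + sizeof(int);
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        if (rank == 0)
            MPI_Bsend(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
        MPI_Finalize();
        return 0;
    }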
No real ideas, I'm afraid. We regularly launch much larger jobs than that using
ssh without problem, so it is likely something about the local setup of that
node that is causing the problem. Offhand, it sounds like either the mapper
isn't getting things right, or for some reason the daemon on 00…
Hello Open MPI folks,
We use Open MPI 1.5.3 on our pretty new 1800+ node InfiniBand cluster,
and we see some strange hangups when starting Open MPI processes.
The nodes are named linuxbsc001, linuxbsc002, ... (with some gaps due to
offline nodes). Each node is accessible from every other over S…
Hi,
Could you please send your config and build logs to me? Have you tried with a
simpler program (e.g., something like the minimal test below)? Does this error
always happen?
Regards,
Shiqing
On 2011-11-19 4:24 PM, MM wrote:
Trying to run my program linked against debug 1.5.4 on VS2010 fails:
mpirun -np 1 .\nhui\Debug\nhui.exe : -np …
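For reference, a minimal MPI test program of the sort suggested above might
look like the following (an illustrative sketch; one would build it against
the same debug 1.5.4 install and launch it with mpirun -np 1):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        /* If even this fails under mpirun, the problem is in the
           runtime/installation rather than in the application. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }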
Hi John,
Yes, there will be initial build support for MinGW, but a few runtime issues
still need to be fixed.
If you want to try the current one, please download one of the latest 1.5
nightly tarballs. Please just let me know if you get problems on that.
Feedback would be helpful and appreciated.