When building a 1.3.4 RPM on Fedora 11, I get the following error about an
unreadable compressed PostScript file:
error: Recognition of file
"/home/build/rpmbuild/BUILDROOT/openmpi-1.3.4-1.x86_64/opt/build/share/vampirtrace/doc/opari/lacsi01.ps.gz"
failed: mode 100644 zlib: invalid distance too far back
Yowza -- I wonder how that happened!
I'll forward this on to the VT guys to fix. Thanks for the heads up...
On Dec 4, 2009, at 7:44 AM, wrote:
> When building a 1.3.4 RPM on Fedora 11, I get the following error about an
> unreadable compressed PostScript file:
>
> error: Recognition of file
Hello list,
when I run the attached example, which spawns a "slave" process with
MPI_Comm_spawn(), I see the following:
nbock    19911  0.0  0.0  53980  2288 pts/0  S+  07:42  0:00
/usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun -np 3 ./master
nbock    19912 92.1  0.0 158964  3868 pts/0  R+
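The attached example is not reproduced in the archive; a minimal sketch of a
master that spawns a slave with MPI_Comm_spawn() (the executable name
"./slave" is an assumption) could look like this:

    /* master.c -- minimal sketch, not the original attachment */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm slave_comm;
        int errcode;

        MPI_Init(&argc, &argv);

        /* Spawn one slave process; "./slave" is a placeholder name. */
        MPI_Comm_spawn("./slave", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &slave_comm, &errcode);

        /* Wait for the slave; with Open MPI this wait polls
           aggressively by default, which shows up as high CPU use. */
        MPI_Barrier(slave_comm);

        MPI_Finalize();
        return 0;
    }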
On Dec 4, 2009, at 7:46 AM, Nicolas Bock wrote:
> Hello list,
>
> when I run the attached example, which spawns a "slave" process with
> MPI_Comm_spawn(), I see the following:
>
> nbock    19911  0.0  0.0  53980  2288 pts/0  S+  07:42  0:00
> /usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun
On Fri, Dec 4, 2009 at 08:03, Ralph Castain wrote:
>
>
> It is polling at the barrier. This is done aggressively by default for
> performance. You can tell it to be less aggressive if you want via the
> yield_when_idle mca param.
>
>
How do I use this parameter correctly? I tried
/usr/local/open
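For reference, MCA parameters are passed via mpirun's --mca option or the
matching OMPI_MCA_ environment variable; in the 1.3 series the parameter's
full name is mpi_yield_when_idle, so a command along these lines should work:

    /usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun --mca mpi_yield_when_idle 1 -np 3 ./master

or, equivalently:

    export OMPI_MCA_mpi_yield_when_idle=1
    /usr/local/openmpi-1.3.4-gcc-4.4.2/bin/mpirun -np 3 ./master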
You used it correctly. Remember, all that cpu number is telling you is the
percentage of use by that process. So bottom line is: we are releasing it as
much as we possibly can, but no other process wants to use the cpu, so we go
ahead and use it.
If any other process wanted it, then the percentage would drop.
On Fri, Dec 4, 2009 at 08:21, Ralph Castain wrote:
> You used it correctly. Remember, all that cpu number is telling you is the
> percentage of use by that process. So bottom line is: we are releasing it as
> much as we possibly can, but no other process wants to use the cpu, so we go
> ahead and use it.
We are using 1.3.3. I found the linker flag -Bsymbolic, and applying it to the
plugin appears to have fixed the issue. The problem is a result of not
properly structuring the plugin's code and having duplicate symbol names; the
behavior of Open MPI just brought them to light.
Thanks for your help.
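For anyone hitting the same symptom: -Bsymbolic is a GNU ld option that makes
a shared object bind references to its own global symbols at link time, so
identically named symbols in other loaded objects no longer shadow them. A
sketch of how it might be applied when linking a plugin (file names are
hypothetical):

    gcc -shared -fPIC -Wl,-Bsymbolic -o plugin.so plugin.c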
Nicolas Bock wrote:
On Fri, Dec 4, 2009 at 08:21, Ralph Castain wrote:
You used it correctly. Remember, all that cpu number is telling you is the
percentage of use by that process. So bottom line is: we are releasing it as
much as we possibly can, but no other process wants to use the cpu, so we go
ahead and use it.
On Fri, Dec 4, 2009 at 10:10, Eugene Loh wrote:
> Nicolas Bock wrote:
>
> On Fri, Dec 4, 2009 at 08:21, Ralph Castain wrote:
>
>> You used it correctly. Remember, all that cpu number is telling you is the
>> percentage of use by that process. So bottom line is: we are releasing it as
>> much as we possibly can, but no other process wants to use the cpu, so we go
>> ahead and use it.
Nicolas Bock wrote:
On Fri, Dec 4, 2009 at 10:10, Eugene Loh wrote:
Yield helped, but not as effectively as one might have imagined.
Yes, that's the impression I get as well, the master process might be
yielding, but it doesn't appear to be a lot. Maybe
On Fri, Dec 4, 2009 at 10:29, Eugene Loh wrote:
> Nicolas Bock wrote:
>
> On Fri, Dec 4, 2009 at 10:10, Eugene Loh wrote:
>
>> Yield helped, but not as effectively as one might have imagined.
>>
>
> Yes, that's the impression I get as well, the master process might be
> yielding, but it doesn't appear to be a lot.
Open MPI is installed by the distro with headers in /usr/include
$ mpif90 -showme:compile -I/some/special/path
-I/usr/include -pthread -I/usr/lib/openmpi -I/some/special/path
Here's why it's a problem:
HDF5 is also installed in /usr with modules at /usr/include/h5*.mod. A
new HDF5 cannot be used from /some/special/path, because the wrapper puts
-I/usr/include ahead of the user's flags, so the compiler finds the old
modules first.
Excellent!
Once you get some more definitive results, could you send this in patch form?
On Dec 3, 2009, at 7:05 PM, wrote:
> >> I have actually already taken the IPv6 block and simply tried to
> >> replace any IPv6 stuff with IPv4 "equivalents", eg:
> >
> > At the risk of showing a lot of ignorance
On Dec 3, 2009, at 3:31 AM, Katz, Jacob wrote:
> I wonder if there is a BKM (efficient and portable) to mimic a timeout with a
> call to MPI_Wait, i.e. to interrupt it once a given time period has passed if
> it hasn’t returned by then yet.
Pardon my ignorance, but what does BKM stand for?
Nicolas Bock wrote:
On Fri, Dec 4, 2009 at 10:29, Eugene Loh wrote:
I think you might observe a world of difference if the master issued
some non-blocking call and then intermixed MPI_Test calls with sleep
calls. You should see *much* more subservient behavior.
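A minimal sketch of the pattern Eugene describes, assuming the master is
waiting on a message from a slave (the request, tag, and sleep interval here
are illustrative, not code from this thread):

    /* Sketch: wait for a slave's message by polling MPI_Test and
       sleeping between polls, instead of blocking in MPI_Recv. */
    #include <mpi.h>
    #include <time.h>

    static void wait_politely(MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Status status;
        int flag = 0;
        int dummy;
        struct timespec ts = {0, 10 * 1000 * 1000};  /* 10 ms between polls */

        MPI_Irecv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &req);
        while (!flag) {
            MPI_Test(&req, &flag, &status);
            if (!flag)
                nanosleep(&ts, NULL);  /* release the CPU to other processes */
        }
    }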
In the mpirun man page in Open MPI 1.3.4, in the "Current Working
Directory" section, there is the following sentence:
"""
If they are unable (e.g., if the directory does not exit on that
node), then Open MPI will use the default directory determined by the
starter.
"""
I believe "exit" should be "exist".
If you are hoping for a return on timeout, almost zero CPU use while
waiting, and fast response, you will need to be pretty creative. Here is a
simple solution that may be OK if you do not need both fast response and
low CPU load.
flag = 0;
while (!flag && !is_time_up())
    MPI_Test(&request, &flag, &status);  /* poll until done or timed out */
On Fri, Dec 4, 2009 at 12:10, Eugene Loh wrote:
> Nicolas Bock wrote:
>
> On Fri, Dec 4, 2009 at 10:29, Eugene Loh wrote:
>
>> I think you might observe a world of difference if the master issued some
>> non-blocking call and then intermixed MPI_Test calls with sleep calls. You
>> should see *much* more subservient behavior.
Jeff Squyres wrote:
On Dec 3, 2009, at 3:31 AM, Katz, Jacob wrote:
I wonder if there is a BKM (efficient and portable) to mimic a timeout with a
call to MPI_Wait, i.e. to interrupt it once a given time period has passed if
it hasn’t returned by then yet.
Pardon my ignorance, but what does BKM stand for?
Fixed -- thanks!
On Dec 4, 2009, at 2:26 PM, Jeremiah Willcock wrote:
> In the mpirun man page in Open MPI 1.3.4, in the "Current Working
> Directory" section, there is the following sentence:
>
> """
> If they are unable (e.g., if the directory does not exit on that
> node), then Open MPI will use the default directory determined by the
> starter.
> """
Oy -- more specifically, we should not be putting -I/usr/include on the command
line *at all* (because it's special and already included by the compiler search
paths; similar for /usr/lib and /usr/lib64). We should have some special case
code that looks for /usr/include and simply drops it.
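A sketch of the special case being proposed (illustrative only; the actual
Open MPI wrapper-compiler code is organized differently):

    /* Emit wrapper include flags, skipping -I/usr/include because the
       compiler already searches that directory on its own. */
    #include <stdio.h>
    #include <string.h>

    static void emit_include_flags(const char **flags, int nflags)
    {
        for (int i = 0; i < nflags; i++) {
            if (strcmp(flags[i], "-I/usr/include") == 0)
                continue;
            printf("%s ", flags[i]);
        }
    }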
On Fri, 4 Dec 2009 16:20:23 -0500, Jeff Squyres wrote:
> Oy -- more specifically, we should not be putting -I/usr/include on
> the command line *at all* (because it's special and already included
> by the compiler search paths; similar for /usr/lib and /usr/lib64).
If I remember correctly, the is
Thank you so much! It is a synchronization issue. In my case, one node
actually runs slower than the other node. Adding MPE_Barrier() helps to
straighten things out.
Thank you for your help!
Eugene Loh wrote:
Your processes are probably running asynchronously. You could perhaps
try tracing prog
Hello list,
in our code we use a very short front-end program to drive a larger set of
codes that do our calculations. Right in the beginning of the front-end, we
have an if() statement such that only the rank 0 front-end does something,
and the other ranks go right away to an MPI_Barrier() statement.
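A minimal sketch of the structure being described (assumed, since the actual
front-end is not shown):

    /* front-end sketch: rank 0 drives the work, all other ranks wait */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* drive the larger set of calculation codes */
        }

        /* the other ranks arrive here immediately and spin in the
           barrier, which by default polls and burns a full CPU */
        MPI_Barrier(MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }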
On Dec 4, 2009, at 6:54 PM, Nicolas Bock wrote:
> in our code we use a very short front-end program to drive a larger set of
> codes that do our calculations. Right in the beginning of the front-end, we
> have an if() statement such that only the rank 0 front-end does something,
> and the other ranks go right away to an MPI_Barrier() statement.
Hi Jeff,
thanks for the explanation. Yes, some of the earlier discussions were in
fact very useful. In general I found this list to be very helpful, my thanks
to everyone here who is helping people like me out.
The suggestion to use messages and non-blocking receives with MPI_Test()
proved just what I needed.