This is due to the way Open MPI finds and loads components. What actually
happens is that Open MPI looks for *all* components of a given type and
dlopen's them. It then applies the filter of which components are
desired and dlclose's all the undesired ones. It certainly would be
better to apply the filter before dlopen'ing the components in the first place.
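To illustrate the pattern (this is only a rough sketch with made-up component
names and a made-up filter list, not the actual Open MPI component code):

/* Hypothetical sketch of the "open everything, then filter" pattern
 * described above; not the real Open MPI code. Build with -ldl. */
#include <dlfcn.h>
#include <string.h>

static int is_desired(const char *name, const char **wanted, int nwanted)
{
    for (int i = 0; i < nwanted; ++i)
        if (strcmp(name, wanted[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    /* Pretend these are all components of one framework found on disk. */
    const char *found[] = { "mca_btl_tcp.so", "mca_btl_sm.so", "mca_btl_openib.so" };
    /* The user asked for only this component. */
    const char *wanted[] = { "mca_btl_tcp.so" };
    void *handles[3];

    /* Step 1: dlopen *all* components of the given type. */
    for (int i = 0; i < 3; ++i)
        handles[i] = dlopen(found[i], RTLD_NOW | RTLD_GLOBAL);

    /* Step 2: apply the filter and dlclose the undesired ones. */
    for (int i = 0; i < 3; ++i) {
        if (handles[i] != NULL && !is_desired(found[i], wanted, 1)) {
            dlclose(handles[i]);
            handles[i] = NULL;
        }
    }
    return 0;
}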
Ok, good -- that's what I was assuming had happened.
As Brian said, the stdin issue is not currently on our roadmap to fix in
the immediate future. But I added it to our bug tracker as an
un-milestoned issue so that we don't forget about it:
https://svn.open-mpi.org/trac/ompi/ticket/167
There was a bug in early Torque 2.1.x versions (I'm afraid I don't
remember which one) that -- I think -- had something to do with a faulty
poll() implementation. Whatever the problem was, it caused all TM
launchers to fail on OSX.
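If it helps narrow things down, a standalone poll() sanity check (a quick
sketch, independent of Torque and Open MPI) could be compiled and run on one
of the OSX nodes:

/* Minimal poll() sanity check: write one byte into a pipe and verify
 * that poll() reports it readable within the timeout. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }
    if (write(fds[1], "x", 1) != 1) {
        perror("write");
        return 1;
    }

    struct pollfd pfd = { .fd = fds[0], .events = POLLIN, .revents = 0 };
    int rc = poll(&pfd, 1, 1000 /* ms */);
    if (rc == 1 && (pfd.revents & POLLIN))
        printf("poll() looks sane\n");
    else
        printf("poll() misbehaved: rc=%d revents=0x%x\n", rc, (unsigned) pfd.revents);
    return 0;
}

If that reports a misbehaving poll(), the problem is below both Torque and
Open MPI.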
Can you see if the Torque-included tool pbsdsh works properly?
Jeff
Thanks for the reply; I realize you guys must be really busy with the recent
release of Open MPI. I tried 1.1 and I no longer get error messages, but
the code now hangs with no error or exit, so I am not sure if this is the same
issue or something else. I am enclosing my source code. I compil
Greetings,
The bug with poll was fixed in the stable Torque 2.1.1 release, and I have
checked again to make sure that pbsdsh does work.
jbronder@meldrew-linux ~/src/hpl $ qsub -I -q default -l nodes=4:ppn=2 -l opsys=darwin
qsub: waiting for job 312.ldap1.meldrew.clusters.umaine.edu to start
qsub
Jeff,
Thanks for the reply and your attention to this.
Can you -- and anyone else in
similar circumstances -- let me know how common this scenario is?
I think this depends on the environment. For us and many other ISVs, it
is very common. The build host is almost always physically different from
the hosts where the application is eventually run.
On Jun 29, 2006, at 11:16 PM, Graham E Fagg wrote:
On Thu, 29 Jun 2006, Doug Gregor wrote:
When I use algorithm 6, I get:
[odin003.cs.indiana.edu:14174] *** An error occurred in MPI_Bcast
[odin005.cs.indiana.edu:10510] *** An error occurred in MPI_Bcast
Broadcasting integers from root 0...[od
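For reference, the failing call is essentially a plain integer broadcast from
rank 0. A minimal sketch of that kind of test (not Doug's exact code) is below;
it can be pinned to a particular algorithm with the coll_tuned MCA parameters
(something like --mca coll_tuned_use_dynamic_rules 1 --mca
coll_tuned_bcast_algorithm 6, if I remember the parameter names correctly).

/* Minimal broadcast of an integer array from root 0; a sketch of the
 * kind of call that was failing with bcast algorithm 6. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data[4] = { 0, 0, 0, 0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (int i = 0; i < 4; ++i)
            data[i] = i + 1;
    }

    /* Broadcasting integers from root 0... */
    MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d received %d %d %d %d\n",
           rank, data[0], data[1], data[2], data[3]);
    MPI_Finalize();
    return 0;
}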
Hello,
I had encountered a bug in Open MPI 1.0.1 when using indexed datatypes
with MPI_Recv (it seems to be of the "off by one" sort), which
was corrected in Open MPI 1.0.2.
It seems to have resurfaced in Open MPI 1.1 (I encountered it using
different data and did not recognize it immediately, but
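In case it is useful as a starting point, here is a stripped-down sketch of
the kind of exchange involved (not my actual code; the block lengths and
displacements are made up):

/* Sketch of sending a contiguous buffer and receiving it through an
 * indexed datatype; a pattern similar to the one that triggered the bug. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two blocks of 2 ints each, placed at offsets 0 and 4 in the
     * receive buffer (made-up layout for illustration). */
    int blocklens[2] = { 2, 2 };
    int displs[2]    = { 0, 4 };
    MPI_Datatype idx;
    MPI_Type_indexed(2, blocklens, displs, MPI_INT, &idx);
    MPI_Type_commit(&idx);

    if (rank == 0) {
        int send[4] = { 10, 11, 12, 13 };
        MPI_Send(send, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int recv[8] = { 0 };
        MPI_Recv(recv, 1, idx, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("recv: %d %d _ _ %d %d\n",
               recv[0], recv[1], recv[4], recv[5]);
    }

    MPI_Type_free(&idx);
    MPI_Finalize();
    return 0;
}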
On Jun 29, 2006, at 5:23 PM, Tom Rosmond wrote:
I am testing the one-sided message passing (mpi_put, mpi_get) that
is now supported in the 1.1 release. It seems to work OK for some
simple test codes, but when I run my big application, it fails.
This application is a large weather model th