On Oct 9, 2007, at 3:50 PM, Dirk Eddelbuettel wrote:
edd@ron:~$ orterun -n 2 --mca mca_component_show_load_errors 1 r -e
'library(Rmpi); print(mpi.comm.rank(0))'
[ron:18360] mca: base: component_find: unable to open osc pt2pt:
file not found (ignored)
[ron:18361] mca: base: component_find: unable to open osc pt2pt:
file not found (ignored)
If you do not have IB hardware, you might want to permanently disable
the IB support. You can do this by setting an MCA parameter or
simply removing the $prefix/lib/openmpi/mca_btl_openib.* files. This
will suppress the warning that you're seeing.
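The MCA-parameter route mentioned above can go into the system-wide parameters file; a minimal sketch of that file (the `^` prefix excludes the named component rather than selecting it):

```
# /etc/openmpi-mca-params.conf
# Exclude the openib BTL so Open MPI never tries to load InfiniBand support
btl = ^openib
```

Removing the $prefix/lib/openmpi/mca_btl_openib.* files, as described above, has the same effect but is harder to undo if IB hardware is added later.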
As for your problem with MPI_SEND, do you
Our Mac expert (Brian Barrett) just recently left the project for
greener pastures. He's the guy who typically answered Mac/XGrid
questions -- I'm afraid that I have no idea how any of that XGrid
stuff works... :-(
Is there anyone else around who can answer XGrid questions? Warner?
For anyone following this thread: I am following up with Hiep
offline. I'll reply back to the list once the issue is resolved.
-- Josh
On Oct 3, 2007, at 11:11 AM, Hiep Bui Hoang wrote:
Hi,
I found that the problem was the firewall on one of my
computers. When I set the firewall to allow co
Hi:
I've added:
btl = ^openib
to /etc/openmpi-mca-params.conf on the head node, but this doesn't
seem to help. Does this need to be pushed out to all the compute
nodes as well?
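A per-run alternative that does not depend on a file being present on each node is passing the parameter on the mpirun command line, which forwards it to every launched process (a sketch; ./my_program is a placeholder):

```
# Disable the openib BTL for this run only; mpirun forwards MCA
# parameters to all processes it launches, on every node.
mpirun --mca btl ^openib -n 2 ./my_program
```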
The program is known to work on other clusters. I finally figured out
what was happening, though: Open MPI was compiled
Jeff,
Thanks for the reply. I have gotten much closer, and it looks like all
wounds were self-inflicted. More below.
On 9 October 2007 at 22:01, Jeff Squyres wrote:
| On Oct 9, 2007, at 3:50 PM, Dirk Eddelbuettel wrote:
|
| > edd@ron:~$ orterun -n 2 --mca mca_component_show_load_errors 1 r -e
On Oct 10, 2007, at 1:27 PM, Dirk Eddelbuettel wrote:
| Does this happen for all MPI programs (potentially only those that
| use the MPI-2 one-sided stuff), or just your R environment?
This is the likely winner.
It does indeed seem to be due to R's Rmpi package. Running a simple
mpitest.c shows no errors.
Brian,
Man you're good! :)
On 10 October 2007 at 13:49, Brian Barrett wrote:
| On Oct 10, 2007, at 1:27 PM, Dirk Eddelbuettel wrote:
| > | Does this happen for all MPI programs (potentially only those that
| > | use the MPI-2 one-sided stuff), or just your R environment?
| >
| > This is the likely winner.
I am seeing the same error, but I am using mpi4py (Lisandro Dalcin's
Python MPI bindings). I don't think that libmpi.so is being dlopen'd
directly at runtime, but, the shared library that is linked at compile
time to libmpi.so is probably being loaded at runtime. The odd thing
is that mpi4py has
On 10 October 2007 at 15:27, Brian Granger wrote:
| I am seeing the same error, but I am using mpi4py (Lisandro Dalcin's
| Python MPI bindings). I don't think that libmpi.so is being dlopen'd
| directly at runtime, but, the shared library that is linked at compile
| time to libmpi.so is probably
Hi,
To the devs: I just noticed that MPI::BOTTOM requires a cast. Not sure
if that was intended.
Compiling 'MPI::COMM_WORLD.Bcast(MPI::BOTTOM, 1, someDataType, 0);'
results in:
error: invalid conversion from ‘const void*’ to ‘void*’
error: initializing argument 1 of ‘virtual void MPI::Comm::