I have installed Open MPI 1.4.1 locally for one user on a cluster
where some other MPI implementations were already installed.
When I try to run an executable through mpirun (I am running the BLACS
tester) I get:
xFbtest_MPI-LINUX-0: error while loading shared libraries: liblam.so.0:
cannot open shared object file:
My question is why? If you are willing to reserve a chunk of your
machine for yet-to-exist tasks, why not just create them all at mpirun
time and slice and dice your communicators as appropriate?
On Thu, 2010-01-28 at 09:24 +1100, Jaison Paul wrote:
> Hi, I am just reposting my earlier query once
OK, so please stop me if you have heard this before, but I couldn’t find
anything in the archives that addressed my situation.
I have a Beowulf cluster where ALL the nodes are PS3s running Yellow Dog
Linux 6.2 and a host (server) that is a Dell i686 Quad-core running Fedora
Core 12. After a fail
It sounds to me a bit like asking to be born before your mother.
Unless I misunderstand the question...
Douglas.
On Thu, Jan 28, 2010 at 09:24:29AM +1100, Jaison Paul wrote:
> Hi, I am just reposting my earlier query once again. If anyone can
> give a hint, that would be great.
>
> Thanks,
I cannot resist:
Jaison -
The MPI_Comm_spawn call specifies what you want to have happen. The child
launch is what does happen.
If we can come up with a way to have things happen correctly before we know
what it is that we want to have happen, the heck with this HPC stuff. Let's
get together and
I can't imagine how you would do that - the only thing I can think of would be to
start your "child" processes as one job, then start your "parent" processes and
have them do an MPI_Comm_join with the child job.
That said, I can't imagine that comm_spawn is -that- slow to make much
difference to an
Hi, I am just reposting my earlier query once again. If anyone can
give a hint, that would be great.
Thanks, Jaison
ANU
Jaison Paul wrote:
Hi All,
I am trying to use MPI for scientific high-performance computing (HPC)
applications. I use MPI_Comm_spawn to create child processes. Is there a
way to sta
You could also rule Ethernet (TCP) out. E.g.,
mpirun --mca btl self,openib ./a.out
Or, if you wanted the opposite (Ethernet/TCP, but not IB), then
mpirun --mca btl self,tcp ./a.out
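If you want the transport restriction to stick without repeating it on
every command line, the same MCA parameters can also go in a per-user
file. A sketch, assuming the standard per-user location
~/.openmpi/mca-params.conf and the btl_base_verbose parameter for
startup logging (both present in the 1.3/1.4-era releases):

```
# ~/.openmpi/mca-params.conf -- per-user Open MPI MCA defaults
btl = self,openib        # process loopback + InfiniBand only; no TCP
btl_base_verbose = 30    # log which BTL components open at startup
```

The verbose output at job startup then shows directly which BTLs Open
MPI selected.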
If an infiniband network is configured successfully, how to confirm
that Open MPI is using infiniband, not other ethernet network available?
Thanks Brett for the useful information.
On Wed, Jan 27, 2010 at 12:40 PM, Brett Pemberton wrote:
>
> - "Sangamesh B" wrote:
>
> > Hi all,
> >
> > If an infiniband network is configured successfully, how to confirm
> > that Open MPI is using infiniband, not other ethernet network
> > available?
- "Sangamesh B" wrote:
> Hi all,
>
> If an infiniband network is configured successfully, how to confirm
> that Open MPI is using infiniband, not other ethernet network
> available?
>
As a low-level, simplistic check, how about:
[root@tango003 ~]# lsof | grep /dev/infiniband
namd2 7271
Hi all,
If an infiniband network is configured successfully, how to confirm
that Open MPI is using infiniband, not other ethernet network available?
In earlier versions, I've seen that if OMPI is running on ethernet, it gives a
warning that it is running on a slower network. Is this available in 1.3?