On Feb 7, 2006, at 5:37 PM, Bill Saphir wrote:
In an attempt to limit runtime dependencies, I am using static libraries
where possible. Under OS X (10.4.4) I get the following error when I try to
link my application:
/usr/bin/ld: multiple definitions of symbol _munmap
/usr/lib/gcc/powerpc-ap
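One common cause of a duplicate _munmap with a static Open MPI is its memory-manager hooks, which wrap munmap so registered buffers can be tracked, and which then collide with another static copy of the same symbol. A rough sketch of a configure invocation that leaves the hooks out of a fully static build, assuming these options are present in the version at hand (check ./configure --help):

./configure --enable-static --disable-shared --without-memory-manager
make all install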
>> As we plan to drop all support for the old
>> generation of PML/PTL, I don't think it is a wise idea to spend time on
>> the openib PTL to make it work with uniq ...
>>
>> Thanks,
>> george.
>>
>
> With the change to ob1/BTLs, there was also a refactoring of data
> structures that
>> For heroic latencies on IB we would need to use small-message RDMA and
>> poll each peer's dedicated memory region for completion.
>
> Well, I tried to play around with the eager_limit, min_rdma, etc. I did
> not see the latency of messages of a given size go down by changing
> the thresholds
>
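For reference, the thresholds mentioned above are ordinary MCA parameters and can be changed per run without rebuilding. A hedged example follows; the fully qualified parameter name is an assumption rather than something taken from the thread, and the real names for a given build can be listed with ompi_info:

mpirun -np 2 -mca btl self,openib -mca btl_openib_eager_limit 8192 ./a.out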
On Feb 8, 2006, at 7:06 PM, Jean-Christophe Hugly wrote:
But should I understand from all this that the "direct" mode will never
actually work? It seems that if you need at least two transports, then
none of them can be the hardwired unique one, right? Unless there's a
built-in switch bet
> you need to specify both the transport and self, such as:
> mpirun -mca btl self,tcp
I found that the reason why I was no longer able to run without openib
was that I had some openib-specific tunables on the command line. I
expected the params would get ignored, but instead it just sat there.
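One way to catch stray or misspelled tunables before a run is to list the parameters a component actually registers, for example (assuming the openib BTL was built):

ompi_info --param btl openib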
Sorry, more questions to answer:
On the other hand I am not sure it could even work at all, as whenever I
tried at run-time to limit the list to just one transport (be it tcp or
openib, btw), MPI apps would not start.
you need to specify both the transport and self, such as:
mpirun -mca btl self,tcp
Hi Jean,
You probably are not seeing overhead costs so much as you are seeing
the difference between using send/recv for small messages, which Open
MPI uses, and RDMA for small messages. If you are comparing against
another implementation that uses RDMA for small messages then yes, you
will
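For readers unfamiliar with the small-message RDMA scheme mentioned earlier, the sketch below shows only the receive side, in plain C with no real InfiniBand calls: the sender is assumed to have written the payload and then a trailing flag into a slot the receiver owns, and the receiver busy-polls that flag instead of waiting on a completion queue. The slot layout and names are illustrative, not Open MPI's.

/* Conceptual sketch, not Open MPI code: receive side of a small-message
 * RDMA protocol.  The remote side writes the payload and then sets
 * 'ready'; the local side spins on 'ready'. */
#include <string.h>

#define SLOT_PAYLOAD 63

struct rdma_slot {
    char payload[SLOT_PAYLOAD];
    volatile char ready;          /* written last by the remote side */
};

static void poll_for_message(struct rdma_slot *slot, char *out, int len)
{
    while (!slot->ready)
        ;                         /* busy-poll the dedicated memory region */
    memcpy(out, slot->payload, len);
    slot->ready = 0;              /* recycle the slot for the next message */
}

int main(void)
{
    struct rdma_slot slot = {{0}, 0};
    char msg[16];

    /* Stand-in for the remote RDMA write: payload first, flag last. */
    memcpy(slot.payload, "hello", 6);
    slot.ready = 1;

    poll_for_message(&slot, msg, 6);
    return msg[0] == 'h' ? 0 : 1;
}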
Hi guys,
Does someone know what the framework costs in terms of latency?
Right now the latency I get with the openib btl is not great: 5.35 us. I
was looking at what I could do to get it down. I tried to make openib
the only btl but the build process refused.
On the other hand I am not sure it
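For anyone wanting to reproduce numbers like the 5.35 us above, a minimal ping-pong sketch follows; it is not the benchmark used in the thread, just the usual way half round-trip latency is measured. Build it with mpicc and run it with two ranks, e.g. mpirun -np 2 -mca btl self,openib ./pingpong.

/* Minimal ping-pong latency sketch: two ranks bounce a small message
 * back and forth and rank 0 reports the average one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf[8] = {0};
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) * 1.0e6 / (2.0 * iters));

    MPI_Finalize();
    return 0;
}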
Dear Brian,
The original poster intended to run migrate-n in parallel mode, but the
stdout fragment shows that the program was compiled for a non-MPI
architecture (either single CPU or SMP pthreads) [I talked with him
off-list and it used pthreads].
A version for parallel runs shows this
Jeff,
I just tried the latest trunk. Indeed, things now work properly.
Thanks!
Kostya
--- Jeff Squyres wrote:
> Konstantin --
>
> This problem has been fixed on the trunk; it will probably take us a
> few days to get it committed on the release branch (v1.0), but it
> will definitely be included in the upcoming v1.0.2.
I think we fixed this over this past weekend. I believe the problem
was our mishandling of standard input in some cases. I believe I was
able to get the application running (but I could be fooling myself
there...). Could you download the latest nightly build from the URL
below and see if
On Feb 6, 2006, at 2:14 PM, Glenn Morris wrote:
mpirun (v1.0.1) sets the umask to 0, and hence creates world-writable
output files. Interestingly, adding the -d option to mpirun makes this
problem go away. To reproduce:
mpirun -np 1 --hostfile ./hostfile --mca pls_rsh_agent ssh ./a.out
where a
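A quick way to confirm what each process actually sees, independent of the output files, is to print the umask from inside the MPI program; the sketch below is generic rather than tied to this report. umask() has no read-only form, so the mask is set to 0 and immediately restored.

/* Sketch: print the umask each MPI process runs with, to confirm
 * whether it was reset to 0 under mpirun. */
#include <mpi.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    int rank;
    mode_t old;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    old = umask(0);   /* returns the current mask */
    umask(old);       /* restore it immediately */
    printf("rank %d: umask = %03o\n", rank, (unsigned int) old);

    MPI_Finalize();
    return 0;
}

Running it once under mpirun and once standalone makes it easy to see exactly where the mask changes.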
I tested this example with hostname before and it worked well:
the hostfile contains only 2 lines
pc86
pc92
and the user wolf doesn't need a password when connecting to the other
PC. The user wolf has the same uid and gid on both PCs.
I also have another question: is it possible to use MPI to com
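For reference, the setup described above boils down to a two-line hostfile and a non-MPI smoke test; the hostnames are the ones from the message, and the exact mpirun form is an assumption:

$ cat hostfile
pc86
pc92
$ mpirun -np 2 --hostfile ./hostfile hostname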
Konstantin --
This problem has been fixed on the trunk; it will probably take us a
few days to get it committed on the release branch (v1.0), but it
will definitely be included in the upcoming v1.0.2.
Would you mind trying a nightly trunk snapshot to ensure that we have
fixed the problem?
On Tue, 7 Feb 2006, Jean-Christophe Hugly wrote:
On Thu, 2006-02-02 at 21:49 -0700, Galen M. Shipman wrote:
I suspect the problem may be in the bcast,
ompi_coll_tuned_bcast_intra_basic_linear. Can you try the same run using
mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np
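A small self-checking broadcast is a convenient way to exercise the code path named above in isolation; the sketch below is generic, not the reproducer from the thread. Each rank verifies that the buffer broadcast from rank 0 arrived intact; run it over the same machine file and process count that shows the hang.

/* Sketch: self-checking MPI_Bcast test. */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024

int main(int argc, char **argv)
{
    int rank, size, i, errs = 0;
    int buf[COUNT];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Only the root fills the buffer with known values. */
    for (i = 0; i < COUNT; i++)
        buf[i] = (rank == 0) ? i : -1;

    MPI_Bcast(buf, COUNT, MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < COUNT; i++)
        if (buf[i] != i)
            errs++;

    printf("rank %d of %d: %d errors\n", rank, size, errs);
    MPI_Finalize();
    return 0;
}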