Thanks for your reply, but the program does run on the TCP
interconnect with the same data size, and also on IB with a small data size, say 1 MB. So
I don't think the problem is in Open MPI; it has to do with the IB logic, which
probably doesn't work well with threads. I also tried the program with
MPI_THR
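A minimal sketch of requesting and checking the thread level (my assumption: the truncated "MPI_THR" above refers to MPI_THREAD_MULTIPLE; checking the level actually provided matters because a build may grant a lower level than requested):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;

    /* request full multi-threaded support and check what we really got */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        printf("requested MPI_THREAD_MULTIPLE, library provided level %d\n",
               provided);

    MPI_Finalize();
    return 0;
}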
On 11/1/07, Oleg Morajko wrote:
>
> I'm not sure if you understood my question. The case is not trivial at all,
> or I am missing something important.
> Try to design this derived datatype and you will understand my point.
Sorry :-(
I blearily went through your query and just wondered if you might have
On Oct 31, 2007, at 9:52 PM, Neeraj Chourasia wrote:
but the program does run on the TCP interconnect with the same
data size and also on IB with a small data size, say 1 MB. So I don't
think the problem is in Open MPI; it has to do with the IB logic,
which probably doesn't work well with threads.
On Oct 31, 2007, at 5:52 PM, Oleg Morajko wrote:
Let me clarify the context of the problem. I'm implementing an MPI
piggyback mechanism that should allow attaching extra data to
any MPI message. The idea is to wrap the MPI communication calls with
the PMPI interface (or with dynamic instrumentation)
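A minimal sketch of the PMPI wrapping idea, assuming the piggyback payload is shipped as a second message on a reserved tag (the tag value and the counter below are only illustrative, not part of the actual mechanism):

#include <mpi.h>

#define PIGGYBACK_TAG 32767     /* illustrative reserved tag for piggyback data */

static int pb_counter = 0;      /* example extra data attached to every send */

/* Intercept MPI_Send through the profiling interface: forward the original
 * message with PMPI_Send, then ship the piggyback payload separately.
 * The prototype must match the mpi.h of the library being wrapped
 * (pre-MPI-3 headers declare buf without const). */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
    if (rc != MPI_SUCCESS)
        return rc;

    pb_counter++;
    return PMPI_Send(&pb_counter, 1, MPI_INT, dest, PIGGYBACK_TAG, comm);
}

The receive side would make the matching pair of PMPI_Recv calls; that part is omitted here.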
Thank you, Jeff, for your opinion. It was really helpful.
Concerning the reduce operation in the case of small messages: it is possible to
also wrap the reduction operator
and make it work with the wrapped data. This operator could reduce only the
original data and simply collect the piggybacked data (instead of reducing it)
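A sketch of that wrapped-operator idea, assuming each element is a small struct carrying the original value plus the piggyback field (summing the piggyback field is just one possible way of "collecting" it):

#include <mpi.h>

typedef struct {
    double value;       /* original application data */
    int    piggyback;   /* extra data carried along */
} wrapped_t;

/* User-defined reduction: apply the real operation (a sum here) to the
 * original values and merely accumulate the piggyback fields. */
static void wrapped_sum(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
{
    wrapped_t *in    = (wrapped_t *) invec;
    wrapped_t *inout = (wrapped_t *) inoutvec;
    int i;

    (void) dt;  /* datatype handle not needed in this sketch */
    for (i = 0; i < *len; i++) {
        inout[i].value     += in[i].value;      /* real reduction */
        inout[i].piggyback += in[i].piggyback;  /* just collected, not reduced */
    }
}

/* Usage sketch: build a matching derived datatype for wrapped_t with
 * MPI_Type_create_struct, then
 *     MPI_Op_create(wrapped_sum, 1, &op);
 *     MPI_Reduce(sendbuf, recvbuf, n, wrapped_dt, op, 0, comm);
 */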
This page has information on how to increase the limit on open files.
Passes 1 and 3 don't require a reboot.
http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html
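As a small aside, a process can also raise its own soft limit up to the current hard limit at run time (a sketch; the page above is what you need for raising the hard limit itself):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft=%lu hard=%lu\n",
           (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

    /* raise the soft limit to the hard limit; only root can go beyond it */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    return 0;
}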
2007/10/31, George Bosilca:
>
> For some versions of Open MPI (recent versions) you can use the
> btl_tcp_disable_family MCA parameter
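For example, my understanding (please verify with ompi_info --param btl tcp on your version) is that the value 6 disables IPv6 and 4 disables IPv4, so something like

mpirun --mca btl_tcp_disable_family 6 -np 4 ./your_app

should keep the TCP BTL on IPv4 sockets only (./your_app is a placeholder).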
On Wed, Oct 31, 2007 at 06:55:47PM -0400, Tim Prins wrote:
Hi!
> I seem to recall (though this may have changed) that if a system supports
> ipv6, we may open both ipv4 and ipv6 sockets. This can be worked around by
> configuring Open MPI with --disable-ipv6
IPv6 is only an issue when talking
This is not Open MPI specific, but maybe somebody on the list can give a
hint.
I start a parallel job with:
mpirun -np 19 -nolocal -machinefile machinefile bin/getm_prod_IFORT.0096x0096
Everything starts OK and the simulation carries on for 2+ hours of
wall clock time, then suddenly, without a trace
On Wed, Oct 31, 2007 at 06:45:10PM -0400, Tim Prins wrote:
> Hi Jon,
>
> Just to make sure, running 'ompi_info' shows that you have the udapl btl
> installed?
Yes, I get the following:
# ompi_info | grep dapl
MCA btl: udapl (MCA v1.0, API v1.0, Component v1.2.5)
If I do not inc
There are two things that are reflected in your email.
1. You can run Open MPI (or at least ompi_info) on the head node, and
udapl is in the list of BTLs. This means the head node has all the
libraries required to load udapl, and your LD_LIBRARY_PATH is
correctly configured on the head node.
2