>1.2 -- I don't recall which version specifically).
On Nov 27, 2007, at 3:19 PM, Andrew Friedley wrote:
Brock Palen wrote:
What would be a place to look? Should this just be the default then for
OMPI? ompi_info shows the default as 10 seconds? Is that right,
'seconds'?
The other IB guys can probably answer better than I can -- I'm not an
expert in this part of IB (or really any part I guess :).
Brock Palen wrote:
On Nov 21, 2007, at 3:39 PM, Andrew Friedley wrote:
If this is what I think it is, try using this MCA parameter:
-mca btl_openib_ib_timeout 20
The user used this option and it allowed the run to complete.
You say it's an issue with the fabric; ibshowerrors does not show
If this is what I think it is, try using this MCA parameter:
-mca btl_openib_ib_timeout 20
If this fixes it -- I don't fully understand what's going on, but it's
an issue in the IB fabric itself. Someone else might be able to
explain in more detail.
Andrew
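For reference, a rough sketch of checking and overriding that parameter (the process count and executable name are placeholders; ompi_info's output format varies by release, and if I recall the IB spec correctly the value is an exponent, roughly 4.096 usec * 2^value, rather than literal seconds):
ompi_info --param btl openib | grep ib_timeout
mpirun -np 16 -mca btl_openib_ib_timeout 20 ./a.out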
Brian Dobbins wrote:
Hi Brock
MCA sds: slurm (MCA v1.0, API v1.0, Component v1.3)
MCA filem: rsh (MCA v1.0, API v1.0, Component v1.3)
Regards,
Mostyn
On Tue, 6 Nov 2007, Andrew Friedley wrote:
All thread support is disabled by default in Open MPI; the uDAPL BTL is
neither thread safe nor does it make use of a threaded uDAPL implementation.
For completeness, the thread support is controlled by the
--enable-mpi-threads and --enable-progress-threads options to the
configure script.
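As a minimal sketch of a build with those options enabled (the install prefix is only a placeholder):
./configure --prefix=/opt/openmpi --enable-mpi-threads --enable-progress-threads
make all install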
Troy Telford wrote:
On Monday 22 October 2007, Don Kerr wrote:
Couple of things.
On Linux, I believe you need the interface instance in the 7th field of
the /etc/dat.conf file.
example:
InfiniHost0 u1.1 nonthreadsafe default /usr/lib64/libdapl.so ri.1.1 " " " "
should be
InfiniHost0 u1.1 n
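Purely for illustration -- the interface name and port below are hypothetical, and the exact 7th-field syntax depends on the uDAPL provider -- a line with the interface instance filled in might look like:
InfiniHost0 u1.1 nonthreadsafe default /usr/lib64/libdapl.so ri.1.1 "ib0 1" " "
Check the provider's documentation for the value it actually expects in that field.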
to point you in the right direction...
On Aug 15, 2007, at 11:44 AM, Andrew Friedley wrote:
I'm helping someone at LLNL get running with Open MPI, and the current
snag seems to be that stdin redirection doesn't work right. A quick look
at the orterun manpage indicates something like this should work:
mpirun -np 1 cat < foo.txt
If I run just say on the head node without any slurm allo
trying to use two different Fortran
compilers
at the same time?
On Tue, 2007-06-19 at 20:04 -0400, Jeff Squyres wrote:
I have not seen this before -- did you look in the libtool
documentation? ("See the libtool documentation for more
information.")
On Jun 19, 2007, at 6:46 PM,
I'm trying to build Open MPI v1.2.2 with gcc/g++/g77 3.4.4 and pathf90
v2.4 on a Linux system, and see this error when compiling ompi_info:
/bin/sh ../../../libtool --tag=CXX --mode=link g++ -g -O2
-finline-functions -pthread -export-dynamic -o ompi_info
components.o ompi_info.o output.o p
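For context, a minimal sketch of pointing Open MPI's configure at that mixed toolchain via the standard compiler variables (whether this avoids the libtool error above is a separate question):
./configure CC=gcc CXX=g++ F77=g77 FC=pathf90
make all install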
, and 4 nodes with 2
processes per node and --mca btl udapl,self. I didn't encounter any problems.
The comment above line 197 says that dat_ep_query() returns wrong port
numbers (which it does indeed), but I can't find any call to
dat_ep_query() in the uDAPL BTL code. Maybe the comment i
You say that fixes the problem; does it work even when running more than
one MPI process per node? (That is the case the hack fixes.) Simply
doing an mpirun with a -np parameter higher than the number of nodes you
have set up should trigger this case, and making sure to use '-mca btl
udapl,self
Which uDAPL implementation are you using, over what sort of network?
I'm guessing OpenIB/InfiniBand, but want to make sure.
One other thing I noticed, you say native IB works, yet looking at your
ompi_info/config.log neither OpenIB nor MVAPI support was enabled.
Andrew
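A concrete sketch of both checks (node and process counts and the executable name are placeholders):
ompi_info | grep btl
mpirun -np 8 -mca btl udapl,self ./a.out
The first command lists which BTL components the build actually contains (openib, mvapi, udapl, ...); the second, with 4 nodes set up, should end up running 2 processes per node over uDAPL.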
Andreas Kuntze wrote:
Andrew
Thanks
-DON
of the releases but is now
in the trunk
- Galen
On Aug 10, 2006, at 3:44 PM, Andrew Friedley wrote:
Donald Kerr wrote:
Hey Andrew I have one for you...
I get the following error message on a node that does not have any IB cards
--
[0,1,0]: uDAPL on host burl-ct-v40z-0 was unable to find any NICs.
Another transport will be
Hopefully some of the other developers will correct me if I am wrong.
Brock Palen wrote:
I had a user ask this; it's not a very practical question, but I am
curious.
This is good information for the archives :)
OMPI uses a 'fast' network if it's available (IB, GM, etc.). I also
infer that
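For completeness, a rough sketch of steering that selection by hand with the same -mca btl syntax used elsewhere in this thread (process count and executable are placeholders):
mpirun -np 4 -mca btl tcp,self ./a.out
This restricts the run to the TCP BTL (plus self) even when a faster interconnect is available.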
Wen Long at UMCES/HPL wrote:
Has anyone installed Open MPI on a dual-core desktop or laptop, such as an Intel Centrino Duo? Or is it totally impossible?
Have you encountered a particular problem? If so, we'd like to know about it.
Dual-core machines behave very much like SMP machines. We
Paul wrote:
Somebody call Orkin. ;-P
Well, I tried running it with things set as noted in the bug report.
However, it doesn't change anything on my end. I am willing to do any
verification you guys need (time permitting and all). Anything special
needed to get mpi_latency to compile? I can run t
Hello,
Can you give us more details on the problem? The exact error message,
as well as the contents of your hostfile, will help. You should check
out our FAQ as well, as it will likely help you solve your problem:
http://www.open-mpi.org/faq/
Particularly the sections 'Running MPI jobs' an
I just committed another fix to the trunk for a problem you are going
to run into next - the same problem comes up again in two more places.
I'll ask Tim/Jeff to apply this fix to the v1.0 branch; here are the
patches:
Index: orte/runtime/orte_init_stage1.c
==
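For anyone applying such a patch by hand, a generic sketch (the directory and diff filename are placeholders; svn-style diffs like the one above normally apply with -p0 from the top of the source tree):
cd openmpi-trunk
patch -p0 < orte_init_stage1.diff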