1.0rc6 is now available; we made some minor fixes in gm, the datatype
engine, and the shared memory btl. I'm not sure if your MX problem
will be fixed, but please give it a whirl. Let us know exactly which
version of MX you are using, too.
http://www.open-mpi.org/software/v1.0/
Than
Hi Jeff,
I tried rc6 and the trunk nightly 8150. I got the same problem. I
copied the message from the terminal below.
[clement@localhost testmpi]$ ompi_info
Open MPI: 1.1a1r8113
Open MPI SVN revision: r8113
Open RTE: 1.1a1r8113
Open RTE SVN revision: r811
On Sun, 13 Nov 2005 17:53:40 -0700, Jeff Squyres wrote:
I can't believe I missed that, sorry. :-(
None of the BTLs are capable of doing loopback communication except
"self." Hence, you really can't run "--mca btl foo" if your app ever
sends to itself -- you really need to run "--mca btl f
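As a concrete illustration of Jeff's point (a minimal sketch; the application name and process count are placeholders, not taken from this thread), restricting Open MPI to the TCP transport still requires listing "self" so that a process can send to itself:

  mpirun --mca btl tcp,self -np 4 ./my_app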
Thus far, it appears that moving to MX 1.1.0 didn't change the error
message I've been getting about parts being 'not implemented.'
I also re-provisioned four of the IB nodes (leaving me with 3 four-node
clusters: one using mvapi, one using openib, and one using myrinet).
My mvapi config is
Hi, I'm trying to compile rc6 on my BProc cluster and failed to build as
follows:
make[4]: Entering directory
`/usr/src/redhat/BUILD/openmpi-1.0rc6/orte/mca/pls/bproc'
depbase=`echo pls_bproc.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`; \
if /bin/sh ../../../../libtool --tag=CC --mode=compile gcc -DH
Dear Jeff, Sorry I could not test the cluster earlier, but I am having
problems with one compute node. (I will have to replace it!) So I will
have to repeat this test with 15 nodes. Yes, I had 4 NIC cards on the
head node and it was only eth3 that was the gigabit NIC which was
communicating to ot
Allan,
If there are 2 Ethernet cards, it's better to point to the one you
want to use. For that, you can modify the .openmpi/mca-params.conf file in
your home directory. All of the options can go in this file, so you will
not have to specify them on the mpirun command line every time.
I give
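To illustrate the file George describes (a sketch; the parameter value is only an example, borrowed from the interface name that appears later in this thread, and may not match your setup), each line of ~/.openmpi/mca-params.conf is a "name = value" pair, so pinning the TCP BTL to one NIC could look like:

  btl_tcp_if_include = eth1

With that in place, the corresponding --mca option no longer needs to be repeated on every mpirun command line.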
On Mon, 14 Nov 2005 10:38:03 -0700, Troy Telford wrote:
My mvapi config is using the Mellanox IB Gold 1.8 IB software release.
Kernel 2.6.5-7.201 (SLES 9 SP2)
When I ran IMB using mvapi, I received the following error:
***
[0,1,2][btl_mvapi_component.c:637:mca_btl_mvapi_component_progress] e
On Mon, 14 Nov 2005 17:28:15 -0700, Troy Telford wrote:
I've just finished a build of RC7, so I'll go give that a whirl and
report.
RC7:
With *both* mvapi and openib, I receive the following when using IMB-MPI1:
***mvapi***
[0,1,3][btl_mvapi_component.c:637:mca_btl_mvapi_component_progr
Troy,
I've been able to reproduce this. Should have this
corrected shortly.
Thanks,
Tim
> On Mon, 14 Nov 2005 10:38:03 -0700, Troy Telford wrote:
>
>> My mvapi config is using the Mellanox IB Gold 1.8 IB software release.
>> Kernel 2.6.5-7.201 (SLES 9 SP2)
>>
>> When I ran IMB using mvapi, I
Dear Jeff, I reorganized my cluster and ran the following test with 15
nodes: [allan@a1 bench]$ mpirun -mca btl tcp --mca btl_tcp_if_include
eth1 --prefix /home/allan/openmpi -hostfile aa -np 15 ./xhpl
[0,1,11][btl_tcp_component.c:342:mca_btl_tcp_component_create_instances]
invalid interface "e
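For reference, a hedged sketch of how the two earlier suggestions in this thread combine on Allan's command line (this is not a confirmed fix for the "invalid interface" error above, whose tail is cut off in the preview; the hostfile, prefix, interface name, and process count are copied from his message):

  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth1 \
         --prefix /home/allan/openmpi -hostfile aa -np 15 ./xhpl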
Hi George,
I think the confusion was my fault, because --mca pml teg did not
produce errors and gave almost the same performance as MPICH2 v 1.02p1.
The reason I cannot do what you suggest below is that the
.openmpi/mca-params.conf file, if I am not mistaken, would reside in my
home NF