Standard TCP/IP stack.
It hung with an unknown but large(ish) quantity of data. When I ran just one
Bcast it was fine, but lots of Bcasts in separate MPI_COMM_WORLDs hung. All the
details are in some recent posts.
I could not figure it out and moved back to my PVM solution.
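For what it's worth, a minimal sketch of that pattern - modeling the separate
"worlds" as disjoint sub-communicators inside one job, with the group size and
payload as illustrative assumptions rather than values from the report - would
look something like this:

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Split COMM_WORLD into disjoint groups of 4 ranks each
         (group size is an assumption for illustration). */
      MPI_Comm group;
      MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &group);

      /* Each group broadcasts ~0.5 MB from its local rank 0, so many
         broadcasts are in flight across the fabric at once. */
      int n = 512 * 1024;
      char *buf = malloc(n);
      MPI_Bcast(buf, n, MPI_CHAR, 0, group);

      free(buf);
      MPI_Comm_free(&group);
      MPI_Finalize();
      return 0;
  }

Run with enough ranks (e.g. mpirun -np 16) and every group of 4 does its own
concurrent Bcast over TCP.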
--- On Wed, 25/8/10, Rahul Nabar wrote:
On Tue, Aug 24, 2010 at 4:58 PM, Jeff Squyres wrote:
> Are all the eth0's on one subnet and all the eth2's on a different subnet?
>
> Or are all eth0's and eth2's all on the same subnet?
Thanks Jeff! Different subnets. All 10GigE's are on 192.168.x.x and
all 1GigE's are on 10.0.x.x
e.g.
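(If the goal is to pin Open MPI's TCP traffic to the 10GigE subnet no matter
how the interfaces come up on each node, the usual knob is the
btl_tcp_if_include MCA parameter, e.g. "mpirun --mca btl_tcp_if_include eth2 ...";
newer releases also accept CIDR notation such as 192.168.0.0/16, which
sidesteps the variable interface naming.)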
On Aug 24, 2010, at 1:58 PM, Rahul Nabar wrote:
> There are a few unusual things about the cluster. We are using a
> 10GigE ethernet fabric. Each node has dual eth adapters, one 1GigE and
> the other 10GigE. These are on separate subnets, although the order of
> the eth interfaces is variable. i.e.
On Mon, Aug 23, 2010 at 9:43 PM, Richard Treumann wrote:
> Bugs are always a possibility but unless there is something very unusual
> about the cluster and interconnect or this is an unstable version of MPI, it
> seems very unlikely this use of MPI_Bcast with so few tasks and only a 1/2
> MB message
Hi Jeff
On 08/24/10 15:24, Jeff Squyres wrote:
I'm a little confused by your configure line:
./configure --prefix=/g/software/openmpi-1.4.3a1r23542/gcc-4.1.2 2
--enable-cxx-exceptions CFLAGS=-O2 CXXFLAGS=-O2 FFLAGS=-O2 FCFLAGS=-O2
"oppss" that '2' was some leftover character after I edited
I'm a little confused by your configure line:
./configure --prefix=/g/software/openmpi-1.4.3a1r23542/gcc-4.1.2 2
--enable-cxx-exceptions CFLAGS=-O2 CXXFLAGS=-O2 FFLAGS=-O2 FCFLAGS=-O2
What's the lone "2" in the middle (after the prefix)?
With that extra "2", I'm not able to get configure to complete
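(Dropping the stray token, the intended line was presumably:
  ./configure --prefix=/g/software/openmpi-1.4.3a1r23542/gcc-4.1.2 \
    --enable-cxx-exceptions CFLAGS=-O2 CXXFLAGS=-O2 FFLAGS=-O2 FCFLAGS=-O2 )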
On 08/24/10 14:22, Michael E. Thomadakis wrote:
Hi,
I used a 'tee' command to capture the output but I forgot to also redirect
stderr to the file.
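(For future runs, something like "make 2>&1 | tee make.log" captures both
stdout and stderr in one file.)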
This is what a fresh make gave (gcc 4.1.2 again):
--
ompi_debuggers.c:81: error:
Ummm... the configure log terminates normally, indicating it configured fine.
The make log ends, but with no error shown - everything was building just fine.
Did you maybe stop it before it was complete? Run out of disk quota? Or...?
On Aug 24, 2010, at 1:06 PM, Michael E. Thomadakis wrote:
>
Hi Ralph,
I tried to build 1.4.3a1r23542 (08/02/2010) with
./configure --prefix="/g/software/openmpi-1.4.3a1r23542/gcc-4.1.2 2"
--enable-cxx-exceptions CFLAGS="-O2" CXXFLAGS="-O2" FFLAGS="-O2"
FCFLAGS="-O2"
with the GCC 4.1.2
miket@login002[pts/26]openmpi-1.4.3a1r23542 $ gcc -v
Using built-in specs.
On Mon, Aug 23, 2010 at 9:43 PM, Richard Treumann wrote:
> Bugs are always a possibility but unless there is something very unusual
> about the cluster and interconnect or this is an unstable version of MPI, it
My MPI version is 1.4.1. This isn't the latest but still fairly
recent. So I assume th
On Mon, Aug 23, 2010 at 8:39 PM, Randolph Pullen
wrote:
>
> I have had a similar load-related problem with Bcast.
Thanks Randolph! That's interesting to know! What was the hardware you
were using? Does your bcast fail at the exact same point too?
>
> I don't know what caused it though. With thi
On Mon, Aug 23, 2010 at 6:39 PM, Richard Treumann wrote:
> It is hard to imagine how a total data load of 41,943,040 bytes could be a
> problem. That is really not much data. By the time the BCAST is done, each
> task (except root) will have received a single half meg message from one
> sender. Th
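(For scale: 41,943,040 bytes divided by one 524,288-byte half-meg message per
receiver works out to exactly 80 receiving tasks.)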
Yes, that's fine. Thx!
On Aug 24, 2010, at 9:02 AM, Philippe wrote:
> Awesome, I'll give it a spin! With the parameters as below?
>
> p.
>
> On Tue, Aug 24, 2010 at 10:47 AM, Ralph Castain wrote:
>> I think I have this working now - try anything on or after r23647
>>
>>
>> On Aug 23, 2010, a
Awesome, I'll give it a spin! With the parameters as below?
p.
On Tue, Aug 24, 2010 at 10:47 AM, Ralph Castain wrote:
> I think I have this working now - try anything on or after r23647
>
>
> On Aug 23, 2010, at 1:36 PM, Philippe wrote:
>
>> Sure. I took a guess at ppn and nodes for the case where 2 processes are on the same node...
On Aug 24, 2010, at 10:27 AM, 陈文浩 wrote:
> Dear OMPI users,
>
> I configured and installed OpenMPI-1.4.2 and BLCR-0.8.2. (blade01 - blade10,
> nfs)
> BLCR configure script: ./configure --prefix=/opt/blcr --enable-static
> After the installation, I can see the ‘blcr’ module loaded correctly (lsmod | grep blcr).
I think I have this working now - try anything on or after r23647
On Aug 23, 2010, at 1:36 PM, Philippe wrote:
> Sure. I took a guess at ppn and nodes for the case where 2 processes
> are on the same node... I don't claim these are the right values ;-)
>
>
>
> c0301b10e1 ~/mpi> env|grep OMPI
>
Dear OMPI users,
I configured and installed OpenMPI-1.4.2 and BLCR-0.8.2. (blade01 -
blade10, nfs)
BLCR configure script: ./configure --prefix=/opt/blcr --enable-static
After the installation, I can see the ‘blcr’ module loaded correctly
(lsmod | grep blcr). And I can also run ‘cr_run’, ‘cr_
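Assuming Open MPI was built with checkpoint/restart support (configure flags
along the lines of --with-ft=cr --with-blcr=/opt/blcr), the usual sequence is
roughly: launch with "mpirun -am ft-enable-cr ./app", checkpoint with
"ompi-checkpoint <pid-of-mpirun>", and resume later with
"ompi-restart <snapshot>". Treat the exact flags as a sketch to check against
the 1.4.2 documentation.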
Terry Dontje wrote:
Jeff Squyres wrote:
You should be able to run "./configure --help" and see a lengthy help message that includes all the command line options to configure.
Is that what you're looking for?
No, he wants to know what configure options were used with some
binaries.
You should be able to run "./configure --help" and see a lengthy help message
that includes all the command line options to configure.
Is that what you're looking for?
No, he wants to know what configure options were used with some binaries.
Yes Terry - I want to know what configure options were used.
Jeff Squyres wrote:
You should be able to run "./configure --help" and see a lengthy help message
that includes all the command line options to configure.
Is that what you're looking for?
No, he wants to know what configure options were used with some binaries.
--td
On Aug 24, 2010, at 7
You should be able to run "./configure --help" and see a lengthy help message
that includes all the command line options to configure.
Is that what you're looking for?
On Aug 24, 2010, at 7:40 AM, Paul Kapinos wrote:
> Hello OpenMPI developers,
>
> I am searching for a way to discover _all_ configure options of an OpenMPI installation.
Hello OpenMPI developers,
I am searching for a way to discover _all_ configure options of an
OpenMPI installation.
Background: in an existing installation, the ompi_info program helps to
find out a lot of information about the installation. So, "ompi_info
-c" shows *some* configuration options