It might also be interrupt flooding; you should check your CPU load
while your tests are running. GigE has an optional 9000-byte packet
size (jumbo frames) to cut down on the number of interrupts the CPU receives.
Typically it gets an interrupt for each packet that comes in, and at the
standard 1500-byte MTU that is a lot of interrupts at gigabit rates.
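(If you want to try that, jumbo frames are normally enabled per interface,
e.g. something like

   ifconfig eth0 mtu 9000

on every node, with eth0 just a placeholder for whichever interface you are
testing; the switch in between has to have jumbo frames enabled as well, or
things tend to get worse rather than better.)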
Hi Allan,
This suggests that your chipset is not able to handle the full PCI-E
speed on more than 3 ports. This usually depends on the way the PCI-E
links are wired through the ports and on the capacity of the chipset
itself. As an example, we were never able to reach full-speed
performance with
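(One way to see what the hardware actually negotiated, assuming the nodes run
Linux with a reasonably recent pciutils: something like

   lspci -vv | grep -i width

shows the LnkCap/LnkSta width for each PCI-E device, so you can check whether
a NIC trained at fewer lanes than it advertises.)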
Hi George,
The following test peaks at 8392 Mbps on a1:
mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi
and on a2
mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi
gives 8565 Mbps.
--(a)
on a1:
mpirun --prefix /opt/opnmpi124b --host a
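(Note that the host lists above keep both ranks on the same node, so those
numbers are essentially shared-memory bandwidth; the matching internode run
would presumably look something like

   mpirun --prefix /opt/opnmpi124b --host a1,a2 -mca btl tcp,self -np 2 ./NPmpi

which forces the traffic over the gigabit interfaces instead.)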
--
Message: 2
Date: Sun, 16 Dec 2007 18:49:30 -0500
From: Allan Menezes
Subject: [OMPI users] Gigabit ethernet (PCI Express) and openmpi
v1.2.4
To: us...@open-mpi.org
Message-ID: <4765b98a.30...@sympatico.ca>
Content-Typ
Hi Marco and Jeff,
My own knowledge of OpenMPI's internals is limited, but I thought I'd add
my less-than-two-cents...
> > I've found only one way to have the TCP connections bound only to
> > the eth1 interface, using both of the following MCA directives on the
> > command line:
> >
> > mpirun
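(The quoted command is cut off above; for anyone searching later, restricting
the TCP BTL to a single interface is typically done along these lines, with
eth1 and ./a.out standing in for the real interface and binary:

   mpirun -np 2 --mca btl tcp,sm,self --mca btl_tcp_if_include eth1 ./a.out

The out-of-band/administrative channel is governed by separate oob_tcp
parameters, which is presumably the second directive being referred to.)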
You should run a shared-memory test to see the maximum memory bandwidth
you can get.
Thanks,
george.
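(A minimal sketch of such a run, assuming the same NPmpi binary as in the
earlier posts: keep both ranks on one node and restrict the BTLs to shared
memory, e.g.

   mpirun -np 2 --mca btl sm,self ./NPmpi

The peak of that curve is roughly the ceiling any in-node transport can reach.)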
On Dec 17, 2007, at 7:14 AM, Gleb Natapov wrote:
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
Hi,
How many PCI-Express Gigabit ethernet cards does OpenMPI version 1.2.4
support with a corresponding linear increase in bandwidth measured with
netpipe NPmpi and openmpi mpirun?
On Dec 17, 2007, at 8:35 AM, Marco Sbrighi wrote:
I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron,
dual-core Linux cluster. Of course, with Infiniband 4x interconnect.
Each cluster node is equipped with 4 (or more) ethernet interfaces,
namely 2 gigabit ones plus 2 IPoIB. Th
On 12/17/07 8:19 AM, "Elena Zhebel" wrote:
> Hello Ralph,
>
> Thank you for your answer.
>
> I'm using OpenMPI 1.2.3, compiler glibc232, Linux Suse 10.0.
> My "master" executable runs only on the local host; it then spawns
> "slaves" (with MPI::Intracomm::Spawn).
> My question was: how to determine the hosts where these "slaves" will be
> spawned?
If you care, this is actually the result of a complex issue that was
just recently discussed on the OMPI devel list. You can see a full
explanation there if you're interested.
On Dec 17, 2007, at 10:46 AM, Brian Granger wrote:
This should be fixed in the subversion trunk of mpi4py. Can you do an
update to that version and retry?
This should be fixed in the subversion trunk of mpi4py. Can you do an
update to that version and retry? If it still doesn't work, post to
the mpi4py list and we will see what we can do.
Brian
On Dec 17, 2007 8:25 AM, de Almeida, Valmor F. wrote:
>
> Hello,
>
> I am getting these messages (below) when running mpi4py python codes.
Hello,
I am getting these messages (below) when running mpi4py python codes,
always one message per MPI process. The codes seem to run correctly. Any
ideas why this is happening and how to avoid it?
Thanks,
--
Valmor de Almeida
>mpirun -np 2 python helloworld.py
[xeon0:05998] mca: base: compo
Hello Ralph,
Thank you for your answer.
I'm using OpenMPI 1.2.3, compiler glibc232, Linux Suse 10.0.
My "master" executable runs only on the local host; it then spawns
"slaves" (with MPI::Intracomm::Spawn).
My question was: how to determine the hosts where these "slaves" will be
spawned?
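(Not a definitive answer, but for what it's worth: the MPI standard reserves
an info key named "host" that can be passed to Spawn to request a placement.
A rough sketch using the C++ bindings, where "node02" and "./slave" are
made-up names and support for the key in a given Open MPI release is worth
double-checking:

   #include <mpi.h>
   int main(int argc, char** argv)
   {
       MPI::Init(argc, argv);
       // Ask for three copies of ./slave to be started on host node02.
       MPI::Info info = MPI::Info::Create();
       info.Set("host", "node02");              // hypothetical target host
       MPI::Intercomm children =
           MPI::COMM_WORLD.Spawn("./slave", MPI::ARGV_NULL, 3, info, 0);
       info.Free();
       // ... the master talks to the spawned "slaves" over 'children' ...
       MPI::Finalize();
       return 0;
   }

The returned intercommunicator is what the master then uses to communicate
with the spawned processes.)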
On 12/12/07 5:46 AM, "Elena Zhebel" wrote:
>
> Hello,
>
> I'm working on an MPI application where I'm using OpenMPI instead of MPICH.
>
> In my "master" program I call the function MPI::Intracomm::Spawn which spawns
> "slave" processes. It is not clear to me how to spawn these "slaves" on the
> hosts I want.
Dear Open MPI developers,
I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron,
dual-core Linux cluster. Of course, with Infiniband 4x interconnect.
Each cluster node is equipped with 4 (or more) ethernet interfaces,
namely 2 gigabit ones plus 2 IPoIB. The two gig are named et
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
> Hi,
> How many PCI-Express Gigabit ethernet cards does OpenMPI version 1.2.4
> support with a corresponding linear increase in bandwidth measured with
> netpipe NPmpi and openmpi mpirun?
> With two PCI express cards I get a B/W of 1
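(For what it's worth, when several interfaces are listed the TCP BTL stripes
large messages across all of them, so a sketch of a multi-NIC run, with
eth1,eth2 standing in for the real device names, would be

   mpirun -np 2 --host a1,a2 --mca btl tcp,sm,self --mca btl_tcp_if_include eth1,eth2 ./NPmpi

How far that scales is exactly where the PCI-E/chipset limits discussed
earlier in the thread come in.)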