> only has those 2 devices. (All of the above assumes that all your eth0's are
> on one subnet, all your eth1's are on another subnet, etc.)
>
> Does that work for you?
>
>
>
> On Aug 25, 2009, at 7:14 PM, Jayanta Roy wrote:
>
>> Hi,
>>
>> I am using
Hi,
I am using Open MPI (version 1.2.2) for MPI data transfer using non-blocking
MPI calls like MPI_Isend, MPI_Irecv, etc. I am using "--mca
btl_tcp_if_include eth0,eth1" to use both eth links for data transfer
within 48 nodes. Now I have added eth2 and eth3 links on the 32 compute
nodes. My aim
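Presumably a run on the 32 nodes that have all four devices would simply
list a longer set of interfaces, along the lines of

mpirun --mca btl_tcp_if_include eth0,eth1,eth2,eth3 -n 4 -bynode -hostfile host a.out

where the exact interface list, process count, and hostfile are assumptions
rather than anything stated in the original post.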
Hi,
I was trying to install openmpi-1.2.2 under a 2.4.32 kernel.
./configure --prefix=/mnt/shared/jroy/openmpi-1.2.2/ CC=icc CXX=icpc
F77=ifort FC=ifort
make all install
It installed successfully, but during mpirun I got...
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile test_nodes
.
Dear Rainer and Adrian,
Thank you a lot for the help. It works. I was trying this for a long time
but didn't notice the mistakes. I can't understand how I overlooked that!
Regards,
Jayanta
On 5/15/07, Adrian Knoth wrote:
On Mon, May 14, 2007 at 11:59:18PM +0530, Jayanta Roy wrote:
Hi,
In my 4-node cluster I want to run two MPI_Reduce operations on two
communicators (one using Node1 and Node2, the other using Node3 and Node4).
Now, to create the communicators, I used ...
MPI_Comm MPI_COMM_G1, MPI_COMM_G2;
MPI_Group g0, g1, g2;
MPI_Comm_group(MPI_COMM_WORLD,&g0);
MPI_Group_incl(g0,g_size,&r_array[0
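A complete sketch of that pattern, assuming ranks 0-1 map to Node1/Node2 and
ranks 2-3 to Node3/Node4 (the rank arrays, group size, and reduction
arguments below are illustrative, not taken from the original post), could
look like:

/* Sketch: build two 2-rank communicators from a 4-rank MPI_COMM_WORLD
 * and run one MPI_Reduce inside each. Rank lists are assumed. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int ranks_g1[2] = {0, 1};   /* Node1, Node2 (assumed mapping) */
    int ranks_g2[2] = {2, 3};   /* Node3, Node4 (assumed mapping) */

    MPI_Comm MPI_COMM_G1, MPI_COMM_G2;
    MPI_Group g0, g1, g2;
    MPI_Comm_group(MPI_COMM_WORLD, &g0);
    MPI_Group_incl(g0, 2, ranks_g1, &g1);
    MPI_Group_incl(g0, 2, ranks_g2, &g2);

    /* Collective over MPI_COMM_WORLD; ranks outside a group get
     * MPI_COMM_NULL back, hence the guards below. */
    MPI_Comm_create(MPI_COMM_WORLD, g1, &MPI_COMM_G1);
    MPI_Comm_create(MPI_COMM_WORLD, g2, &MPI_COMM_G2);

    double local = rank, sum;
    if (MPI_COMM_G1 != MPI_COMM_NULL) {
        MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_G1);
        MPI_Comm_free(&MPI_COMM_G1);
    }
    if (MPI_COMM_G2 != MPI_COMM_NULL) {
        MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_G2);
        MPI_Comm_free(&MPI_COMM_G2);
    }

    MPI_Group_free(&g0);
    MPI_Group_free(&g1);
    MPI_Group_free(&g2);
    MPI_Finalize();
    return 0;
}

The two MPI_Reduce calls can run independently because each rank belongs to
exactly one of the two communicators.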
Hi,
To optimize our network throughput we set jumbo frames to 8000. The
transfers go smoothly, but after a few minutes we see a drastic drop in
network throughput; it looks like a deadlock in the network transfer
(the speed slows down by a factor of 100!). This situation does not happen if
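For reference, a jumbo-frame MTU of 8000 on a Linux node is normally set
per interface with something like "ifconfig eth0 mtu 8000" (the interface
name is an assumption), and the MTU has to match on both ends and on any
switch in between for the transfers to run cleanly.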
On Oct 23, 2006, at 4:56 AM, Jayanta Roy wrote:
Hi,
Some time back I posted doubts about fully using dual gigabit support.
See, I get a ~140MB/s full-duplex transfer rate in each of the following
runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_
w
routines in IPv4 processing and recompile the kernel, if you are familiar
with kernel building and your OS is Linux.
On 10/23/06, Jayanta Roy wrote:
Hi,
Some time back I posted doubts about fully using dual gigabit support.
See, I get a ~140MB/s full-duplex transfer rate in each of the following
runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile host a.out
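Presumably the next step is a single run that lists both links, as in the
earlier posts:

mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out

Whether that combined run actually delivers the summed throughput is the
question raised further down in the thread.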
Hi,
I was running mpirun on the Linux cluster we have.
mpirun -n 5 -bynode -hostfile test_nodes a.out
Occasionally after MPI initialization I get the following error...
rank: 1 of: 5
rank: 4 of: 5
rank: 3 of: 5
rank: 0 of: 5
rank: 2
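For context, the a.out here presumably does little more than print its rank,
along the lines of the following sketch (assumed; it is not the poster's
actual source):

/* Minimal sketch of a program printing "rank: N of: M"; assumed,
 * not the original a.out. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank: %d of: %d\n", rank, size);
    MPI_Finalize();
    return 0;
}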
you are going to be
limited; memory bandwidth could also be the bottleneck.
Thanks,
Galen
Jayanta Roy wrote:
Hi,
Between two nodes I have dual Gigabit Ethernet full-duplex links. I was
benchmarking using non-blocking MPI send and receive, but I am
getting only a speed that corresponds
the
ports, then why am I not getting full throughput from the dual Gigabit
Ethernet ports? Can anyone please help me with this?
Regards,
Jayanta
~~~~
Jayanta Roy
National Centre for Radio Astrophysics | Phone : +91-20-25697107
T
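A minimal sketch of the kind of non-blocking bandwidth test described in the
last message, with an assumed message size, repeat count, and rank pairing
(it is not the poster's actual benchmark), might look like:

/* Sketch: pairwise non-blocking exchange timed with MPI_Wtime.
 * Message size, repeat count, and rank pairing are assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 8 * 1024 * 1024;          /* assumed 8 MB messages */
    const int REPS = 50;                    /* assumed repeat count */
    char *sbuf = calloc(N, 1);
    char *rbuf = malloc(N);
    int partner = (rank + size / 2) % size; /* assumed pairing of ranks */

    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        MPI_Request req[2];
        MPI_Irecv(rbuf, N, MPI_BYTE, partner, 0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(sbuf, N, MPI_BYTE, partner, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("%.1f MB/s per direction\n",
               (double)N * REPS / (t1 - t0) / 1e6);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}

The pairing assumes an even number of ranks; each rank both sends and
receives, so the printed figure is the per-direction rate.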