On Tue, Jan 10, 2012 at 10:02 AM, Roberto Rey wrote:
> I'm running some tests on EC2 cluster instances with 10 Gigabit Ethernet
> hardware and I'm getting strange latency results with Netpipe and OpenMPI.
There are 3 types of instances that can use 10 GbE. Are you using
"cc1.4xlarge", "cc2.8xlarge", or "cg1.4xlarge"?
On Thu, Jan 12, 2012 at 16:10, Jeff Squyres wrote:
> It's very strange to me that Open MPI is getting *better* than raw TCP
> performance. I don't have an immediate explanation for that -- if you're
> using the TCP BTL, then OMPI should be using TCP sockets, just like netpipe
> and the others.
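A bare MPI ping-pong is a quick way to cross-check the NetPIPE and Open MPI
numbers. Below is a minimal sketch (the 1-byte message size, REPS count, and
program name are arbitrary choices, not from the thread) that times half of a
round trip between ranks 0 and 1:

/* Minimal MPI ping-pong latency check between ranks 0 and 1
 * (illustrative sketch, not from the thread). */
#include <mpi.h>
#include <stdio.h>

#define REPS 1000

int main(int argc, char **argv)
{
    int rank, nprocs, i;
    char byte = 0;
    double t0 = 0.0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) {           /* needs exactly two active ranks */
        MPI_Finalize();
        return 1;
    }

    /* One warm-up exchange (i == -1), then REPS timed round trips. */
    for (i = -1; i < REPS; i++) {
        if (i == 0)
            t0 = MPI_Wtime();   /* start timing after the warm-up */
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / REPS / 2.0 * 1.0e6);

    MPI_Finalize();
    return 0;
}

Running it once plainly and once as "mpirun -np 2 --mca btl tcp,self
./pingpong" forces the TCP BTL and makes it explicit which path is actually
being measured.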
On 01/12/2012 08:40 AM, Dave Love wrote:
> Surely this should be on the gridengine list -- and it's in recent
> archives -- but there's some ob-openmpi below. Can Notre Dame not get
> the support they've paid Univa for?
This is, in fact, in the recent gridengine archives. I brought up this
problem there.
Do you have a stack trace of where exactly things are seg faulting in
blacs_pinfo?
--td
On 1/13/2012 8:12 AM, Conn ORourke wrote:
> Dear Openmpi Users,
> I am reserving several processors with SGE upon which I want to run a
> number of openmpi jobs, all of which individually (and combined) use
> less than the reserved number of processors. The code I am using uses
> BLACS, and when blacs_pinfo is called I get a seg fault.
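A minimal reproducer makes that stack easy to capture. A sketch, assuming the
C interface Cblacs_pinfo from the linked BLACS/ScaLAPACK is available:

/* Isolate the crash: call blacs_pinfo first thing and nothing else.
 * Cblacs_pinfo is the C interface declared by the linked BLACS/ScaLAPACK. */
#include <stdio.h>

extern void Cblacs_pinfo(int *mypnum, int *nprocs);

int main(void)
{
    int me = -1, np = -1;
    Cblacs_pinfo(&me, &np);     /* the reported seg fault site */
    printf("process %d of %d\n", me, np);
    return 0;
}

Compiled with mpicc and launched through the same SGE reservation, a core file
(after "ulimit -c unlimited") opened in gdb would give the exact frame inside
blacs_pinfo.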
Dear Openmpi Users,
I am reserving several processors with SGE upon which I want to run a number of
openmpi jobs, all of which individually (and combined) use
less than the reserved number of processors. The code I am using uses
BLACS, and when blacs_pinfo is called I get a seg fault. If the co
Dear OpenMPI,
Using MPI_Allgather with the MPI_CHAR type, I have a doubt about the
null-terminating character. Imagine I want to gather the names of the
nodes my program is running on:
char hostname[MAX_LEN];
char *hostname_recv_buf = (char *)calloc(num_procs * MAX_LEN, sizeof(char));
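A self-contained sketch of that pattern follows (MAX_LEN and the printing
logic are illustrative, not the poster's code). The key point is that
MPI_Allgather overwrites all MAX_LEN bytes of each slot in the receive
buffer with the sender's data, so zeroing the receive buffer with calloc
does not guarantee termination; the NUL has to travel inside each sender's
MAX_LEN bytes.

/* Illustrative sketch (not the poster's program): gather one fixed-width,
 * NUL-terminated hostname per rank with MPI_Allgather. MAX_LEN is an
 * assumed constant, large enough for the longest name plus the NUL. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_LEN 64

int main(int argc, char **argv)
{
    int rank, num_procs, i;
    char hostname[MAX_LEN];
    char *hostname_recv_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    /* Zero the SEND buffer first: every rank's full MAX_LEN bytes are
     * copied into the gathered result, padding and all. */
    memset(hostname, 0, sizeof(hostname));
    gethostname(hostname, MAX_LEN - 1);    /* keep the last byte as NUL */

    hostname_recv_buf = (char *)calloc((size_t)num_procs * MAX_LEN, 1);

    MPI_Allgather(hostname, MAX_LEN, MPI_CHAR,
                  hostname_recv_buf, MAX_LEN, MPI_CHAR, MPI_COMM_WORLD);

    if (rank == 0)
        for (i = 0; i < num_procs; i++)
            printf("rank %d is on %s\n", i, hostname_recv_buf + i * MAX_LEN);

    free(hostname_recv_buf);
    MPI_Finalize();
    return 0;
}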