Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking

2012-02-02 Thread Jeff Squyres
When you run without a hostfile, you're likely only running on a single node via shared memory (unless you're running inside a SLURM job, which is unlikely given the context of your mails). When you're running in SLURM, I'm guessing that you're running across multiple nodes. Are you using TCP…
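For reference, a minimal sketch (not from the thread itself) of the Send/Recv pattern under discussion; launched with one process per node it forces the inter-node path instead of shared memory:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, i, token = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* Repeated Send/Recv pairs between ranks 0 and 1. If inter-node
         * TCP traffic is being dropped, the first pair may still complete
         * (small messages can go out eagerly) while later pairs hang,
         * matching the symptom described in this thread. */
        for (i = 0; i < 10; i++) {
            if (rank == 0) {
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("iteration %d completed\n", i);
            } else if (rank == 1) {
                MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

Run with at least two processes, e.g. mpirun -np 2 --host node1,node2 ./sendrecv (the host names are placeholders).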

Re: [OMPI users] Using physical numbering in a rankfile

2012-02-02 Thread Ralph Castain
Actually, that's not true - the 1.5 series technically still supports assignment to physical cpus. However, it is rarely tested and very unusual for someone to use, so I suspect it is broken. I very much doubt anyone will fix it. Also, be aware that physical cpu assignments are not supported…
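For context, a sketch of the two notations being contrasted (the 'p' prefix requests physical numbering; check the mpirun(1) man page for your exact version, since the syntax has shifted between releases):

    # logical numbering (socket:core, the L# indices from lstopo)
    rank 0=host1 slot=0:0
    rank 1=host1 slot=0:1

    # physical numbering (OS-reported P# ids, 'p' prefix; per the
    # above, rarely tested and possibly broken in the 1.5 series)
    rank 0=host1 slot=p0
    rank 1=host1 slot=p2

These are two alternative rankfiles, not one file.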

Re: [OMPI users] Using physical numbering in a rankfile

2012-02-02 Thread teng ma
I made a mistake in the previous reply. You can use either of two ways here, like:

    rank 0=host1 slot=0
    rank 1=host1 slot=2
    rank 2=host1 slot=4
    rank 3=host1 slot=6
    rank 4=host1 slot=1
    rank 5=host1 slot=3
    rank 6=host1 slot=5
    rank 7=host1 slot=7

or

    rank 0=host1 slot=0:0
    rank 1=host1 slot=0:1
    rank 2=host1 slot=…
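Assuming one of the listings above is saved as myrankfile (a placeholder name), it would typically be passed to mpirun like:

    mpirun -np 8 --rankfile myrankfile ./your_app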

Re: [OMPI users] Using physical numbering in a rankfile

2012-02-02 Thread teng ma
Just remove the 'p' in your rankfile, like:

    rank 0=host1 slot=0:0
    rank 1=host1 slot=0:2
    rank 2=host1 slot=0:4
    rank 3=host1 slot=0:6
    rank 4=host1 slot=1:1
    rank 5=host1 slot=1:3
    rank 6=host1 slot=1:5
    rank 7=host1 slot=1:7

Teng

2012/2/2 François Tessier
> Hello,
>
> I need to use a rankfile with openMPI…

[OMPI users] Using physical numbering in a rankfile

2012-02-02 Thread François Tessier
Hello, I need to use a rankfile with openMPI 1.5.4 to do some tests on a basic architecture. I'm using a node for which lstopo returns this:

    Machine (24GB)
      NUMANode L#0 (P#0 12GB)
        Socket L#0 + L3 L#0 (8192KB)
          L2 L#0 (256KB) + L1 L#0 (32KB) + Core L#0 + PU L#0 (P#0)…
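As an aside (not part of the original mail): once a rankfile is in place, the placement Open MPI actually applies can be checked with the --report-bindings option on versions that support it, e.g.:

    mpirun -np 8 --rankfile myrankfile --report-bindings ./your_app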

Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking

2012-02-02 Thread adrian sabou
Hi,

I have disabled iptables on all nodes using:

    iptables -F
    iptables -X
    iptables -t nat -F
    iptables -t nat -X
    iptables -t mangle -F
    iptables -t mangle -X
    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -P OUTPUT ACCEPT

My problem is still there. I have re-enabled iptables. The c…

Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking

2012-02-02 Thread Jeff Squyres
Have you disabled iptables (firewalling) on your nodes? Or, if you want to leave iptables enabled, set it such that all nodes in your cluster are allowed to open TCP connections from any port to any other port.

On Feb 2, 2012, at 4:49 AM, adrian sabou wrote:
> Hi,
>
> The only example that…
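As an illustrative sketch only (the subnet is an assumption; adjust it to your cluster's private network), a rule along these lines keeps iptables enabled while allowing arbitrary TCP between nodes:

    # accept all TCP from the cluster's private subnet (assumed 192.168.1.0/24);
    # -I puts the rule ahead of any existing REJECT/DROP rules
    iptables -I INPUT -p tcp -s 192.168.1.0/24 -j ACCEPT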

Re: [OMPI users] Error building Openmpi (configure: error: C compiler cannot create executables)

2012-02-02 Thread Jeff Squyres
Both icc and gcc seem to be broken on your system; they're not creating executables. You can look in config.log for more details about what is failing. But basically, configure is trying to compile a simple "hello world"-like C program, and it's failing. You might want to try compiling…
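A quick way to reproduce configure's check by hand, independent of Open MPI, is to compile a trivial program with each compiler:

    cat > conftest.c <<'EOF'
    #include <stdio.h>
    int main(void) { printf("hello\n"); return 0; }
    EOF
    icc conftest.c -o conftest && ./conftest    # repeat with gcc

If this fails the same way, the problem is in the compiler installation or environment (e.g. the Intel compiler's environment scripts not being sourced), not in Open MPI.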

[OMPI users] Error building Openmpi (configure: error: C compiler cannot create executables)

2012-02-02 Thread Syed Ahsan Ali
Dear All, I am stuck installing OpenMPI 1.4.2 on RHEL 5.2 with ifort and icc. I get the following error while configuring; please help.

    [precis@precis2 openmpi-1.4.2]$ ./build.sh
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes…

Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking

2012-02-02 Thread adrian sabou
Hi, The only example that works is hello_c.c. All of the others that use MPI_Send and MPI_Recv (connectivity_c.c and ring_c.c) block after the first MPI_Send / MPI_Recv: the first Send/Receive pair works for all processes, but subsequent Send/Receive pairs block. My SLURM version is 2.1.0…
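For completeness, a typical way to exercise one of these examples under a SLURM allocation with Open MPI's SLURM support (the node count is arbitrary here):

    salloc -N 2 mpirun ./ring_c

versus running it on a single node without SLURM for comparison:

    mpirun -np 2 ./ring_c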