Re: [OMPI users] Building Open MPI 1.1.1 on OS X with ifort and icc support

2006-10-07 Thread Jeff Squyres
Thanks for the report!

I've created ticket #483 about this:

http://svn.open-mpi.org/trac/ompi/ticket/483
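
For anyone who needs a workaround before the fix is released, below is a
minimal sketch of scripting the change Josh describes below (the paths and
configure flags are taken from his report; the perl one-liner itself is an
assumption, not the patch that went into the tree):

  # Hypothetical workaround sketch: add --tag=CC to the ObjC compile/link
  # rules in the xgrid component's Makefile.in, then configure and build.
  cd openmpi-1.1.1
  perl -pi -e 's/--mode=compile \$\(OBJC\)/--tag=CC --mode=compile \$(OBJC)/;
               s/--mode=link \$\(OBJCLD\)/--tag=CC --mode=link \$(OBJCLD)/' \
      orte/mca/pls/xgrid/Makefile.in
  ./configure CC=icc CXX=icpc
  make all install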


On 9/29/06 5:54 PM, "Josh Durham"  wrote:

> Below are the changes needed to build OMPI on OS X with ifort and
> icc.  Basically, the Xgrid component doesn't have a libtool tag
> defined for ObjC code.  Adding the tag makes it consistent with the rest
> of the build - all the other Makefiles have --tag=CC.  This was
> configured with './configure CC=icc CXX=icpc'.
> 
> In orte/mca/pls/xgrid/Makefile.in:
> 215c215
> < LTOBJCCOMPILE = $(LIBTOOL) --mode=compile $(OBJC) $(DEFS) \
> ---
>> LTOBJCCOMPILE = $(LIBTOOL) --tag=CC --mode=compile $(OBJC) $(DEFS) \
> 219c219
> < OBJCLINK = $(LIBTOOL) --mode=link $(OBJCLD) $(AM_OBJCFLAGS) \
> ---
>> OBJCLINK = $(LIBTOOL) --tag=CC --mode=link $(OBJCLD) $(AM_OBJCFLAGS) \
> 
> I'll leave it up to the developers to figure out how to get the
> automake stuff to generate this properly.  I have no clue.
> 
> - Josh
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems


Re: [OMPI users] A -pernode behavior using torque and openmpi

2006-10-07 Thread Jeff Squyres
Open MPI does not currently have an option to effect this kind of behavior.
It basically assumes that you are going to ask for the right number of slots
for your job.

I'll file a ticket for a future enhancement to add this behavior.


On 10/6/06 11:25 AM, "Maestas, Christopher Daniel" 
wrote:

> Hello,
> 
> I was wondering if Open MPI had a -pernode-like behavior similar to OSC
> mpiexec:
> mpiexec -pernode mpi_hello
> would launch N MPI processes on N nodes ... no more, no less.
> 
> Open MPI will already try to run N*2 processes if you don't specify -np:
> mpirun mpi_hello
> launches N*2 MPI processes on N nodes (when using Torque and 2ppn
> specified in your nodes file).  This is good.
> 
> I tried:
> $ mpirun -nooversubscribe mpi_hello
> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
> been used in file rmaps_rr.c at line 116
> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
> been used in file rmaps_rr.c at line 392
> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
> been used in file rmgr_urm.c at line 428
> [dn172:09406] mpirun: spawn failed with errno=-126
> 
> Here's our env:
> $ env | grep ^OM
> OMPI_MCA_btl_mvapi_ib_timeout=18
> OMPI_MCA_btl_mvapi_use_eager_rdma=0
> OMPI_MCA_rmaps_base_schedule_policy=node
> OMPI_MCA_btl_mvapi_ib_retry_count=15
> OMPI_MCA_oob_tcp_include=eth0
> OMPI_MCA_mpi_keep_hostnames=1
> 
> This helps keep our launch scripts generic enough to run 1ppn and
> 2ppn studies pretty easily.
> 
> ---
> echo "Running hello world"
> mpiexec -pernode mpi_hello
> echo "Running hello world 2ppn"
> mpiexec mpi_hello
> ---
> 
> Thanks,
> -cdm
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
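
As an aside on the OMPI_MCA_* settings above: any environment variable of the
form OMPI_MCA_<param>=<value> is equivalent to passing "--mca <param> <value>"
on the mpirun command line, so the same scheduling policy could be requested
per-job rather than site-wide, roughly like this (parameter values copied from
the report above):

  # Command-line equivalent of the env-var based MCA configuration
  mpirun --mca rmaps_base_schedule_policy node \
         --mca oob_tcp_include eth0 \
         --mca mpi_keep_hostnames 1 \
         mpi_hello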


-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems


Re: [OMPI users] Orte_error_log w/ ompi 1.1.1 and torque 2.1.2

2006-10-07 Thread Jeff Squyres
Follow-ups on this show that it was caused by accidentally running on a
one-node Torque allocation while using the "-nolocal" option to mpirun.  So
Open MPI is doing what it should do (refusing to run), but its error message
is less than helpful.

I'll file a feature enhancement to see if we can make the resulting error
message a bit more obvious.
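
Until then, a minimal sketch of a batch-script guard that sidesteps the
problem (assuming Torque sets PBS_NODEFILE as usual; "my_app" is just a
placeholder binary name):

  # Only pass -nolocal when the allocation actually contains more than one
  # node; on a single-node allocation there is nowhere else to run.
  NNODES=$(sort -u "$PBS_NODEFILE" | wc -l)
  if [ "$NNODES" -gt 1 ]; then
      mpirun -nolocal my_app
  else
      mpirun my_app
  fi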


On 10/6/06 12:33 PM, "Maestas, Christopher Daniel" 
wrote:

> Has anyone ever seen this?
> ---
> [dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
> base/rmaps_base_node.c at line 153
> [dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
> rmaps_rr.c at line 270
> [dn32:07156] [0,0,0] ORTE_ERROR_LOG: Temporarily out of resource in file
> rmgr_urm.c at line 428
> [dn32:07156] orterun: spawn failed with errno=-3
> ---
> 
> This happened on a 2ppn job using 2 nodes.
> I tried searching the site, but didn't find anything.
> 
> Thanks,
> -cdm
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems


Re: [OMPI users] A -pernode behavior using torque and openmpi

2006-10-07 Thread Ralph Castain
This feature is now available on the trunk - the syntax is "-pernode".

In the absence of a specified number of procs:

"bynode" will launch on all *slots*, with the processes mapped on a bynode
basis.

"byslot" will launch on all *slots*, with the processes mapped on a byslot
basis.

"pernode" will launch one proc per node across all nodes.



On 10/7/06 6:57 AM, "Jeff Squyres"  wrote:

> Open MPI does not currently have an option to effect this kind of behavior.
> It basically assumes that you are going to ask for the right number of slots
> for your job.
> 
> I'll file a ticket for a future enhancement to add this behavior.
> 
> 
> On 10/6/06 11:25 AM, "Maestas, Christopher Daniel" 
> wrote:
> 
>> Hello,
>> 
>> I was wondering if Open MPI had a -pernode-like behavior similar to OSC
>> mpiexec:
>> mpiexec -pernode mpi_hello
>> would launch N MPI processes on N nodes ... no more, no less.
>> 
>> Open MPI will already try to run N*2 processes if you don't specify -np:
>> mpirun mpi_hello
>> launches N*2 MPI processes on N nodes (when using Torque and 2ppn
>> specified in your nodes file).  This is good.
>> 
>> I tried:
>> $ mpirun -nooversubscribe mpi_hello
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmaps_rr.c at line 116
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmaps_rr.c at line 392
>> [dn172:09406] [0,0,0] ORTE_ERROR_LOG: All the slots on a given node have
>> been used in file rmgr_urm.c at line 428
>> [dn172:09406] mpirun: spawn failed with errno=-126
>> 
>> Here's our env:
>> $ env | grep ^OM
>> OMPI_MCA_btl_mvapi_ib_timeout=18
>> OMPI_MCA_btl_mvapi_use_eager_rdma=0
>> OMPI_MCA_rmaps_base_schedule_policy=node
>> OMPI_MCA_btl_mvapi_ib_retry_count=15
>> OMPI_MCA_oob_tcp_include=eth0
>> OMPI_MCA_mpi_keep_hostnames=1
>> 
>> This helps keep our launch scripts generic enough to run 1ppn and
>> 2ppn studies pretty easily.
>> 
>> ---
>> echo "Running hello world"
>> mpiexec -pernode mpi_hello
>> echo "Running hello world 2ppn"
>> mpiexec mpi_hello
>> ---
>> 
>> Thanks,
>> -cdm
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>