[OMPI users] Specifying slots with relative host indexing

2022-12-02 Thread Adams, Brian M via users
We (Dakota project at Sandia) have gotten a lot of mileage with “tiling” multiple MPI executions within a SLURM allocation using the relative host indexing options, mpirun -host +n2,+n3, for instance. (Thanks for the feature!) However, it’s been almost exclusively with openmpi-1.x. I’m attempti…
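A tiling along those lines might look like the following sketch (hypothetical binary and input names; under Open MPI's relative host indexing, +nX refers to the Xth node of the current allocation):

```shell
#!/bin/sh
# Sketch: tile two independent 2-process MPI runs onto disjoint nodes of
# one SLURM allocation using Open MPI relative host indexing (+nX = Xth
# allocated node). Binary and input names are placeholders.
mpirun -np 2 -host +n0,+n1 ./worker input_a &
mpirun -np 2 -host +n2,+n3 ./worker input_b &
wait  # return once both tiled runs have finished
```

This is a launcher-invocation sketch, not a runnable standalone script; it assumes an active SLURM allocation with at least four nodes.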

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
know how it goes, if you don't mind. It would be nice to know if we actually met your needs, or if a tweak might help make it easier. Thanks Ralph On Jul 30, 2009, at 1:36 PM, Adams, Brian M wrote: Thanks Ralph, I wasn't aware of the relative indexing or sequential mapper capabilit…

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
Ralph Castain Sent: Thursday, July 30, 2009 12:26 PM To: Open MPI Users Subject: Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation) On Jul 30, 2009, at 11:49 AM, Adams, Brian M wrote: Apologies if I'm being confusing; I'm proba…

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-30 Thread Adams, Brian M
…ing about oversubscription. > > Does that help? > Ralph > > PS. we dropped that "persistent" operation - caused way too > many problems with cleanup and other things. :-) > > On Jul 29, 2009, at 3:46 PM, Adams, Brian M wrote: > > > Hi Ralph (all), >…

Re: [OMPI users] Multiple mpiexec's within a job (schedule within a scheduled machinefile/job allocation)

2009-07-29 Thread Adams, Brian M
Hi Ralph (all), I'm resurrecting this 2006 thread for a status check. The new 1.3.x machinefile behavior is great (thanks!) -- I can use machinefiles to manage multiple simultaneous mpiruns within a single torque allocation (where the hosts are a subset of $PBS_NODEFILE). However, this requir…
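The machinefile bookkeeping being described can be sketched as follows (hypothetical host names; in a real job the node list would come from $PBS_NODEFILE, so a fake one is generated here to keep the sketch self-contained):

```shell
#!/bin/sh
# Sketch: split a Torque allocation's node list into two smaller
# machinefiles so two mpiruns can execute concurrently within one
# allocation. In a real job, read $PBS_NODEFILE instead of faking it.
printf 'node1\nnode1\nnode2\nnode2\nnode3\nnode3\nnode4\nnode4\n' > nodefile.txt

sort -u nodefile.txt > all_nodes.txt          # one line per unique host
head -n 2 all_nodes.txt > machines_a.txt      # hosts for concurrent run A
sed -n '3,4p' all_nodes.txt > machines_b.txt  # hosts for concurrent run B

# The concurrent launches would then look like (placeholder binaries):
# mpirun -np 2 --machinefile machines_a.txt ./worker_a &
# mpirun -np 2 --machinefile machines_b.txt ./worker_b &
# wait
```

The drawback the thread is circling is exactly this manual splitting: the user has to carve up the allocation by hand, which is what the relative indexing and sequential mapper features later simplified.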

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-24 Thread Adams, Brian M
> -Original Message- > From: users-boun...@open-mpi.org > [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain > Sent: Wednesday, October 22, 2008 8:02 AM > To: Open MPI Users > Subject: Re: [OMPI users] OpenMPI runtime-specific > environment variable? > > What I think Brian is tr…

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-21 Thread Adams, Brian M
> I'm not sure I understand the problem. The ale3d program from > LLNL operates exactly as you describe and it can be built > with mpich, lam, or openmpi. Hi Doug, I'm not sure what reply would be most helpful, so here's an attempt. It sounds like we're on the same page with regard to the desire…

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-21 Thread Adams, Brian M
> > On 21.10.2008 at 18:52, Ralph Castain wrote: > > > On Oct 21, 2008, at 10:37 AM, Adams, Brian M wrote: > > > >> Doug is right that we could use an additional command line flag to > >> indicate MPI runs, but at this point, we're trying to hide > t…

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-21 Thread Adams, Brian M
t; > On Oct 21, 2008, at 10:37 AM, Adams, Brian M wrote: > > > think it will help here. While MPICH implementations > typically left > > args like -p4pg -p4amslave on the command line, I don't see that > > coming from OpenMPI-launched jobs. > > Really?

Re: [OMPI users] OpenMPI runtime-specific environment variable?

2008-10-21 Thread Adams, Brian M
Thank you Doug, Ralph, and Mattijs for the helpful input. Some replies to Ralph's message and a question inlined here. -- Brian > -Original Message- > From: users-boun...@open-mpi.org > [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain > Sent: Monday, October 20, 2008 5:38 P…

[OMPI users] OpenMPI runtime-specific environment variable?

2008-10-20 Thread Adams, Brian M
I work on an application (DAKOTA) that has opted for single binaries with source code to detect serial vs. MPI execution at run-time. While I realize there are many other ways to handle this (wrapper scripts, command-line switches, different binaries for serial vs. MPI, etc.), I'm looking for a…
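One answer along these lines can be sketched in shell (hedged: OMPI_COMM_WORLD_SIZE is exported per rank by Open MPI from the 1.3 series onward; other MPI implementations use different variables, so this is an Open-MPI-specific heuristic, not a portable test):

```shell
#!/bin/sh
# Sketch: decide serial vs. MPI mode from the environment at startup.
detect_mode() {
    # OMPI_COMM_WORLD_SIZE is set by Open MPI's launcher for each rank
    # it starts; absent that variable, assume a plain serial invocation.
    if [ -n "${OMPI_COMM_WORLD_SIZE:-}" ]; then
        echo mpi
    else
        echo serial
    fi
}
echo "launch mode: $(detect_mode)"
```

The same check translates directly to a getenv() call at the top of main() in a compiled application.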

Re: [OMPI users] Torque and OpenMPI 1.2

2007-12-19 Thread Adams, Brian M
Ralph, Thanks for the clarification as I'm dealing with workarounds for this at Sandia as well... I might have missed this earlier in the dialog, but is this capability in the SVN trunk right now, or still on the TODO list? Brian Brian M. Adams, PhD (bria…

[OMPI users] OpenMPI with system call -- openib error on SNL tbird

2007-04-16 Thread Adams, Brian M
Hello, I am attempting to port Sandia's DAKOTA code from MVAPICH to the default OpenMPI/Intel environment on Sandia's thunderbird cluster. I can successfully build DAKOTA in the default tbird software environment, but I'm having runtime problems when DAKOTA attempts to make a system call. Typical…
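The truncated report doesn't show the error text, but a common mitigation of that era, sketched here with hypothetical job parameters, was to keep the run off the openib BTL, since fork()/system() from a process with registered InfiniBand memory was unreliable with Open MPI 1.2-era openib:

```shell
#!/bin/sh
# Hypothetical workaround sketch: run over TCP (self,tcp) instead of the
# openib BTL so that system()/fork() calls made by ranks avoid the
# registered-InfiniBand-memory problem seen with 1.2-era openib.
mpirun -np 16 -mca btl self,tcp ./dakota dakota.in
```

This trades InfiniBand bandwidth for stability, so it is a diagnostic step rather than a production fix.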