Re: [OMPI users] MPI-2.2: do you care?

2010-10-26 Thread Douglas Guptill
I would be glad to see more emphasis on stability in Open MPI (where stability means absence of bugs) than on new features. I am still using OpenMPI-1.2.8. Just my $0.02, Douglas.

Re: [OMPI users] spin-wait backoff

2010-09-02 Thread Douglas Guptill
…on to this, see http://www.open-mpi.org/community/lists/users/2010/07/13731.php HTH, Douglas.

Re: [OMPI users] Do MPI calls ever sleep?

2010-07-21 Thread Douglas Guptill
…1.0. I use these with OpenMPI-1.2.8. I have not tried "-mca yield_when_idle 1", which may not be in 1.2.8; not sure. Hope that helps, Douglas.

Re: [OMPI users] first cluster

2010-07-15 Thread Douglas Guptill
On Wed, Jul 14, 2010 at 04:27:11PM -0400, Jeff Squyres wrote: > On Jul 9, 2010, at 12:43 PM, Douglas Guptill wrote: > > > After some lurking and reading, I plan this: > > Debian (lenny) > > + fai - for compute-node operating system install > > …

[OMPI users] first cluster [was trouble using openmpi under slurm]

2010-07-09 Thread Douglas Guptill
On Thu, Jul 08, 2010 at 09:43:48AM -0400, Gus Correa wrote: > Douglas Guptill wrote: >> On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote: >> >>> No, afraid not. Things work pretty well, but there are places >>> where things just don't mesh …

Re: [OMPI users] trouble using openmpi under slurm

2010-07-07 Thread Douglas Guptill
…Should I be looking at Torque instead for a queue manager? Suggestions appreciated, Douglas.

Re: [OMPI users] How do I run OpenMPI safely on a Nehalem standalone machine?

2010-05-06 Thread Douglas Guptill
…with. I have been tempted to try to duplicate your problem. Would that be a helpful experiment? gcc, OpenMPI 1.4.1, IIRC? Regards, Douglas.

Re: [OMPI users] How do I run OpenMPI safely on a Nehalem standalone machine?

2010-05-05 Thread Douglas Guptill
and "data loss" for 1.3.x, I put aside thoughts of upgrading. -- Douglas Guptill voice: 902-461-9749 Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca Oceanography Department fax: 902-494-3877 Dalhousie University Halifax, NS, B3H 4J1, Canada

Re: [OMPI users] How do I run OpenMPI safely on a Nehalem standalone machine?

2010-05-04 Thread Douglas Guptill
Advanced", then "down arrow" to "CPU configuration", I found a setting called "Intel (R) HT Technology". The help dialogue says "When Disabled only one thread per core is enabled". Mine is "Enabled", and I see 8 cpus. The Core i7,

Re: [OMPI users] open-mpi behaviour on Fedora, Ubuntu, Debian and CentOS

2010-04-28 Thread Douglas Guptill
Hello Gus: Thank you for your excellent and well-considered thoughts on the subject. You educate us all. Douglas. On Wed, Apr 28, 2010 at 02:39:20PM -0400, Gus Correa wrote: > Hi Asad > > I think the speed vs. accuracy tradeoff will always be there. > Getting both at the same time is kind of a …

Re: [OMPI users] MPI_Comm_accept() busy waiting?

2010-03-09 Thread Douglas Guptill
…>> I'm using Ubuntu 9.10's default OpenMPI deb package. >> Its version is 1.3.2. >> >> Regards >> >> Ramon.

Re: [OMPI users] openmpi fails to terminate for errors/signals on some but not all processes

2010-02-10 Thread Douglas Guptill
Hello Lawrence: If I correctly remember your code which created this problem, perhaps you could solve it by using the iostat parameter:

    read(unit,*,iostat=ierror) some_variable
    if (ierror .ne. 0) then
        ! handle the error
    endif

Hope that helps, Douglas. On Mon, Feb 08, 2010 at 01:29:38PM …

Re: [OMPI users] How to start MPI_Spawn child processes early?

2010-01-27 Thread Douglas Guptill
It sounds to me a bit like asking to be born before your mother. Unless I misunderstand the question... Douglas. On Thu, Jan 28, 2010 at 09:24:29AM +1100, Jaison Paul wrote: > Hi, I am just reposting my earlier query once again. If anyone can > give some hint, that would be great. > > Thanks, …

Re: [OMPI users] Mimicking timeout for MPI_Wait

2009-12-07 Thread Douglas Guptill
…even one task is sharing its CPU with > other processes, like users doing compiles, the whole job slows down > too much. I have not found that to be the case. Regards, Douglas.

Re: [OMPI users] Mimicking timeout for MPI_Wait

2009-12-06 Thread Douglas Guptill
On Sun, Dec 06, 2009 at 02:29:01PM +0200, Katz, Jacob wrote: > Thanks. > Yes, I meant in the question that I was looking for something creative, both > fast-responding and not using 100% CPU all the time. > I guess I’m not the first one to face this question. Has anyone done > anything “better” …
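One way to mimic a timeout for MPI_Wait, in the spirit of this thread, is to poll MPI_Test against a deadline and sleep between polls, so the waiting rank responds quickly without pinning a CPU. A minimal sketch in C; the helper name and the 1 ms poll interval are illustrative, not from the thread:

    #include <mpi.h>
    #include <time.h>

    /* Wait on a request with a timeout, polling MPI_Test and sleeping
     * between polls so the CPU is not held at 100%.  Sketch only. */
    static void wait_with_timeout(MPI_Request *req, MPI_Status *status,
                                  double timeout_sec, int *completed)
    {
        double start = MPI_Wtime();
        struct timespec ts = { 0, 1000000L };   /* 1 ms between polls */

        *completed = 0;
        while (MPI_Wtime() - start < timeout_sec) {
            MPI_Test(req, completed, status);
            if (*completed)
                return;                         /* finished in time */
            nanosleep(&ts, NULL);               /* yield the CPU */
        }
        /* timed out: the request is still pending */
    }

The 1 ms sleep trades response latency against CPU use; shrinking it approaches a spin-wait, while growing it approaches the back-off scheme discussed elsewhere in this archive.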

Re: [OMPI users] Release date for 1.3.4?

2009-11-12 Thread Douglas Guptill
Hello Eugene: On Thu, Nov 12, 2009 at 07:20:08AM -0800, Eugene Loh wrote: > Jeff Squyres wrote: > >> I think Eugene will have to answer this one -- Eugene? >> >> On Nov 12, 2009, at 6:35 AM, John R. Cary wrote: >> >>> From http://svn.open-mpi.org/svn/ompi/branches/v1.3/NEWS I see: >>> >>> - Many u…

Re: [OMPI users] mpirun example program fail on multiple nodes - unable to launch specified application on client node

2009-11-05 Thread Douglas Guptill
On Thu, Nov 05, 2009 at 03:15:33PM -0600, Qing Pang wrote: > Thank you Jeff! That solves the problem. :-) You are a lifesaver! > So does that mean I always need to copy my application to all the > nodes? Or should I give the pathname of my executable in a different > way to avoid this? …

Re: [OMPI users] Mac OSX 10.6 (SL) + openMPI 1.3.3 + Intel Compilers 11.1.058 => Segmentation fault

2009-09-08 Thread Douglas Guptill
On Tue, Sep 08, 2009 at 08:32:47AM -0700, Warner Yuen wrote: > I also had the same problem with IFORT and ICC with OMPI-1.3.3 on Mac OS X > v10.6. However, I was able to use 10.6 Server successfully with IFORT > 11.1.058 and GCC. That is an interesting result, in light of question #14 of: http…

[OMPI users] 100% CPU doing nothing!?

2009-04-22 Thread Douglas Guptill
Hi Ross: On Tue, Apr 21, 2009 at 07:19:53PM -0700, Ross Boylan wrote: > I'm using Rmpi (a pretty thin wrapper around MPI for R) on Debian Lenny > (amd64). My setup has a central calculator and a bunch of slaves to > which work is distributed. > > The slaves wait like this: > mpi.send(as. …

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Douglas Guptill
On Thu, Apr 16, 2009 at 05:29:14PM +0200, Francesco Pietra wrote: > On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres wrote: ... > Given my inexperience as a system analyzer, I assume that I am messing > something up. Unfortunately, I was unable to discover where. > An editor is waiting comple…

Re: [OMPI users] Open MPI 2009 released

2009-04-02 Thread Douglas Guptill
On Wed, Apr 01, 2009 at 06:04:15PM -0400, George Bosilca wrote: > The Open MPI Team, representing a consortium of bailed-out banks, car > manufacturers, and insurance companies, is pleased to announce the > release of the "unbreakable" / bug-free version Open MPI 2009, > (expected to be available

Re: [OMPI users] threading bug?

2009-03-06 Thread Douglas Guptill
I once had a crash in libpthread, something like the one below. The very un-obvious cause was a stack overflow on subroutine entry - a large automatic array. HTH, Douglas. On Wed, Mar 04, 2009 at 03:04:20PM -0500, Jeff Squyres wrote: > On Feb 27, 2009, at 1:56 PM, Mahmoud Payami wrote: > > >I am u…
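The failure mode described here is easy to reproduce in any language with automatic (stack) storage. A hedged C illustration, with a deliberately oversized, purely illustrative array:

    /* A large automatic array can overflow the stack on function entry,
     * producing a crash that looks unrelated (often inside libpthread,
     * since thread stacks are smaller than the main stack). */
    static double sum_squares(int n)
    {
        double work[4 * 1024 * 1024];   /* 32 MB on the stack: likely fatal */
        double s = 0.0;
        int i;

        for (i = 0; i < n && i < 4 * 1024 * 1024; i++) {
            work[i] = (double)i * (double)i;
            s += work[i];
        }
        return s;
    }

Moving the array to the heap (malloc) or to static storage avoids the overflow.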

Re: [OMPI users] valgrind problems

2009-02-27 Thread Douglas Guptill
On Thu, Feb 26, 2009 at 08:27:15PM -0700, Justin wrote: > Also, the stable version of openmpi on Debian is 1.2.7rc2. Are there any > known issues with this version and valgrind? For a now-forgotten reason, I ditched the openmpi that comes on Debian etch, and installed 1.2.8 in /usr/local. HTH, Douglas.

Re: [OMPI users] Supporting OpenMPI compiled for multiple compilers

2009-02-10 Thread Douglas Guptill
Hello Prentice: On Tue, Feb 10, 2009 at 12:04:47PM -0500, Prentice Bisbal wrote: > I need to support multiple compilers: Portland, Intel and GCC, so I've > been compiling OpenMPI with each compiler, to avoid the Fortran symbol > naming problems. When compiling, I'd use the --prefix and --exec-prefix…

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Douglas Guptill
Hello Ralph: Please forgive me if this has already been covered... Have you considered prefixing each line of output from each process with something like "process_number" and a colon? That is what IBM's poe does. Separating the output is then easy:

    cat file | grep 0: > file.0
    cat file | grep 1: > file.1
    …
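A minimal sketch of the suggested prefixing, done inside the application; the helper name is hypothetical, not part of MPI or the original post:

    #include <mpi.h>
    #include <stdarg.h>
    #include <stdio.h>

    /* printf that prefixes each line with "rank: ", so the merged
     * output of all processes can later be split with grep. */
    static void rank_printf(const char *fmt, ...)
    {
        int rank;
        va_list ap;

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("%d: ", rank);           /* the "process_number:" prefix */
        va_start(ap, fmt);
        vprintf(fmt, ap);
        va_end(ap);
    }

With every line tagged this way, the grep commands above recover each process's stream from the merged output file.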

Re: [OMPI users] Problem compiling open mpi 1.3 with sunstudio12 express

2009-01-19 Thread Douglas Guptill
When I use the Intel compilers, I have to add their directories to my PATH and LD_LIBRARY_PATH before using "mpif90". I wonder if the same needs to be done in your case? Douglas. On Mon, Jan 19, 2009 at 05:49:53PM +0100, Olivier Marsden wrote: > Hello, > > I'm trying to compile ompi 1.3rc7 with the Sun Studio Express…

Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-18 Thread douglas.guptill
…second, and doubles after each sleep, up to a maximum of 100 milliseconds. Interestingly, when I left the sleep time at a constant 1 millisecond, the load went up significantly; it varied over the range 1.3 -> 1.7. I have attached my MPI_Send.c and MPI_Recv.c. Comments welcome and appreciated…
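The attached MPI_Send.c and MPI_Recv.c are not reproduced in the archive; below is a minimal C sketch of the receive-side back-off being described, assuming the sleep starts at 1 millisecond and doubles to a 100 millisecond cap. Names and constants are illustrative:

    #include <mpi.h>
    #include <time.h>

    /* Blocking receive that polls instead of spin-waiting: sleep
     * between MPI_Iprobe calls, doubling the sleep after each poll
     * up to a 100 ms cap, as described in the post above. */
    static int recv_with_backoff(void *buf, int count, MPI_Datatype type,
                                 int source, int tag, MPI_Comm comm,
                                 MPI_Status *status)
    {
        long delay_ns = 1000000L;           /* start at 1 ms  */
        const long max_ns = 100000000L;     /* cap at 100 ms  */
        int flag = 0;

        for (;;) {
            MPI_Iprobe(source, tag, comm, &flag, status);
            if (flag)
                break;                      /* message has arrived */
            struct timespec ts = { 0, delay_ns };
            nanosleep(&ts, NULL);           /* give up the CPU */
            delay_ns *= 2;                  /* exponential back-off */
            if (delay_ns > max_ns)
                delay_ns = max_ns;
        }
        return MPI_Recv(buf, count, type, source, tag, comm, status);
    }

A constant 1 ms sleep keeps latency low but, as noted above, raises the load noticeably; the doubling cap trades a little latency for a nearly idle CPU.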

Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-12 Thread douglas.guptill
…With the blocking feature you describe, I could double the number of number-cruncher jobs running at one time, thus doubling throughput. Regards, Douglas.

Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-12 Thread douglas.guptill
d". "mpi_send", according to my understanding of the MPI standard, may not exit until a matching "mpi_recv" has been initiated, or completed. At least that is the conclusion I came to. However my complaint - sorry, I wish I could think of a better word - remains. It appe

Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-08 Thread Douglas Guptill
On Mon, Dec 08, 2008 at 08:56:59PM +1100, Terry Frankcombe wrote: > As Eugene said: Why are you desperate for an idle CPU? So I can run another job. :-) Douglas.

Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-08 Thread douglas.guptill
Hello Eugene: On Sun, Dec 07, 2008 at 11:15:21PM -0800, Eugene Loh wrote: > Douglas Guptill wrote: > > >Hi: > > > >I am using openmpi-1.2.8 to run a 2-processor job on an Intel > >Quad-core CPU. Opsys is Debian etch. I am reasonably sure that, most > >of…

[OMPI users] trouble using --mca mpi_yield_when_idle 1

2008-12-06 Thread Douglas Guptill
…nmpi-intel-noopt, and still get, for each run, two CPUs at 100%. My goal is to get the system to a minimum-usage state, where only one CPU is being used if one process is waiting for results from the other. Can anyone suggest whether this is possible, and if so, how? Thanks, Douglas.