be glad to see more emphasis on
stability in OpenMPI (where stability means absence of bugs) than on
new features. I am still using OpenMPI-1.2.8.
Just my $0.02,
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Dalhousie University
Halifax, NS, B3H 4J1, Canada
on to this, see
http://www.open-mpi.org/community/lists/users/2010/07/13731.php
HTH,
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Dalhousie University
Halifax, NS, B3H 4J1, Canada
1.0
I use these with OpenMPI-1.2.8.
I have not tried "-mca mpi_yield_when_idle 1"; that parameter may not
be in 1.2.8. Not sure.
Hope that helps
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
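
For the curious: what "yield when idle" amounts to, conceptually, is
that the library's progress loop calls sched_yield() between probes
instead of spinning flat out. A minimal hand-rolled sketch of the same
idea in C (illustrative only; the real mechanism lives inside Open MPI,
and the file name here is made up):

    /* yield_wait.c - poll for a message, yielding the CPU between
       probes instead of spinning at 100%. */
    #include <mpi.h>
    #include <sched.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, flag = 0, msg = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            while (!flag) {
                MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);
                if (!flag)
                    sched_yield();  /* give the CPU away while idle */
            }
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("1: received %d\n", msg);
        }
        MPI_Finalize();
        return 0;
    }

Note that sched_yield() only helps when something else is runnable; the
waiting process still wakes constantly, so sleeping between probes
idles the CPU more effectively.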
On Wed, Jul 14, 2010 at 04:27:11PM -0400, Jeff Squyres wrote:
> On Jul 9, 2010, at 12:43 PM, Douglas Guptill wrote:
>
> > After some lurking and reading, I plan this:
> > Debian (lenny)
> > + fai - for compute-node operating system install
> >
On Thu, Jul 08, 2010 at 09:43:48AM -0400, Gus Correa wrote:
> Douglas Guptill wrote:
>> On Wed, Jul 07, 2010 at 12:37:54PM -0600, Ralph Castain wrote:
>>
>>> No, afraid not. Things work pretty well, but there are places
>>> where things just don't mesh
Should I be looking at Torque instead for a queue manager?
Suggestions appreciated,
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Dalhousie University
with.
I have been tempted to try and duplicate your problem. Would that be a
helpful experiment? gcc, OpenMPI 1.4.1, IIRC?
Regards,
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
and "data loss" for 1.3.x, I put aside thoughts of
upgrading.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Dalhousie University
Halifax, NS, B3H 4J1, Canada
Advanced", then "down arrow" to "CPU configuration", I found a
setting called "Intel (R) HT Technology". The help dialogue says
"When Disabled only one thread per core is enabled".
Mine is "Enabled", and I see 8 cpus. The Core i7,
Hello Gus:
Thank you for your excellent and well-considered thoughts on the
subject. You educate us all.
Douglas.
On Wed, Apr 28, 2010 at 02:39:20PM -0400, Gus Correa wrote:
> Hi Asad
>
> I think the speed vs. accuracy tradeoff will always be there.
> Getting both at the same time is kind of a
>> I'm using Ubuntu 9.10's default OpenMPI deb package.
>> Its version is 1.3.2.
>>
>> Regards
>>
>> Ramon.
Hello Lawrence:
If I correctly remember your code which created this problem, perhaps
you could solve it by using the iostat specifier:
read(unit,*,iostat=ierror) some_variable
if (ierror.ne.0) then
c handle the error (ierror < 0 is end-of-file, > 0 is a read error)
endif
Hope that helps,
Douglas.
On Mon, Feb 08, 2010 at 01:29:38PM
It sounds to me a bit like asking to be born before your mother.
Unless I misunderstand the question...
Douglas.
On Thu, Jan 28, 2010 at 09:24:29AM +1100, Jaison Paul wrote:
> Hi, I am just reposting my earlier query once again. If anyone can
> give some hint, that would be great.
>
> Thanks,
even one task is sharing its CPU with
> other processes, like users doing compiles, the whole job slows down
> too much.
I have not found that to be the case.
Regards,
Douglas.
--
Douglas Guptill voice: 902-461-9749
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
On Sun, Dec 06, 2009 at 02:29:01PM +0200, Katz, Jacob wrote:
> Thanks.
> Yes, I meant in the question that I was looking for something creative, both
> fast responding and not using 100% CPU all the time.
> I guess I’m not the first one to face this question. Has anyone done
> anything “better”
Hello Eugene:
On Thu, Nov 12, 2009 at 07:20:08AM -0800, Eugene Loh wrote:
> Jeff Squyres wrote:
>
>> I think Eugene will have to answer this one -- Eugene?
>>
>> On Nov 12, 2009, at 6:35 AM, John R. Cary wrote:
>>
>>> From http://svn.open-mpi.org/svn/ompi/branches/v1.3/NEWS I see:
>>>
>>> - Many u
On Thu, Nov 05, 2009 at 03:15:33PM -0600, Qing Pang wrote:
> Thank you Jeff! That solves the problem. :-) You are the lifesaver!
> So does that mean I always need to copy my application to all the
> nodes? Or should I give the pathname of my executable in a different
> way to avoid this?
On Tue, Sep 08, 2009 at 08:32:47AM -0700, Warner Yuen wrote:
> I also had the same problem with IFORT and ICC with OMPI-1.3.3 on Mac OS X
> v10.6. However, I was able to use 10.6 Server with IFORT
> 11.1.058 and GCC.
That is an interesting result, in light of question #14 of:
http
Hi Ross:
On Tue, Apr 21, 2009 at 07:19:53PM -0700, Ross Boylan wrote:
> I'm using Rmpi (a pretty thin wrapper around MPI for R) on Debian Lenny
> (amd64). My setup has a central calculator and a bunch of slaves to
> which work is distributed.
>
> The slaves wait like this:
> mpi.send(as.
On Thu, Apr 16, 2009 at 05:29:14PM +0200, Francesco Pietra wrote:
> On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres wrote:
...
> Given my inexperience as a system analyzer, I assume that I am messing
> something up. Unfortunately, I was unable to discover where I am messing up.
> An editor is waiting comple
On Wed, Apr 01, 2009 at 06:04:15PM -0400, George Bosilca wrote:
> The Open MPI Team, representing a consortium of bailed-out banks, car
> manufacturers, and insurance companies, is pleased to announce the
> release of the "unbreakable" / bug-free version Open MPI 2009,
> (expected to be available
I once had a crash in libpthread, something like the one below. The
very un-obvious cause was a stack overflow on subroutine entry: a large
automatic array.
HTH,
Douglas.
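
To make the failure mode concrete, a minimal C sketch (illustrative
only; a Fortran subroutine with a large automatic array fails the same
way, and the file name here is made up):

    /* stackcrash.c - a large automatic (stack) array. With a typical
       8 MB stack limit (see "ulimit -s"), the ~64 MB frame overflows
       the stack on entry to big(), before its first statement runs,
       which is why the backtrace can point somewhere unhelpful such
       as libpthread. */
    #include <stdio.h>

    static void big(void)
    {
        double work[8 * 1024 * 1024];   /* 64 MB automatic array */
        work[0] = 1.0;
        printf("%f\n", work[0]);
    }

    int main(void)
    {
        big();
        return 0;
    }

Raising the stack limit, or moving the array to the heap (allocatable,
in Fortran), makes this kind of crash go away.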
On Wed, Mar 04, 2009 at 03:04:20PM -0500, Jeff Squyres wrote:
> On Feb 27, 2009, at 1:56 PM, Mahmoud Payami wrote:
>
> >I am u
On Thu, Feb 26, 2009 at 08:27:15PM -0700, Justin wrote:
> Also the stable version of openmpi on Debian is 1.2.7rc2. Are there any
> known issues with this version and valgrind?
For a now-forgotten reason, I ditched the openmpi that comes on Debian
etch, and installed 1.2.8 in /usr/local.
HTH,
Douglas.
Hello Prentice:
On Tue, Feb 10, 2009 at 12:04:47PM -0500, Prentice Bisbal wrote:
> I need to support multiple compilers: Portland, Intel and GCC, so I've
> been compiling OpenMPI with each compiler, to avoid the Fortran symbol
> naming problems. When compiling, I'd use the --prefix and --exec-prefix
Hello Ralph:
Please forgive if this has already been covered...
Have you considered prefixing each line of output from each process
with something like "process_number" and a colon?
That is what IBM's poe does. Separating the output is then easy:
cat file | grep 0: > file.0
cat file | grep 1: > file.1
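
A sketch of the prefixing idea in C, assuming each rank writes complete
lines (the messages and file name are made up for illustration):

    /* tag_output.c - start every line with "<rank>: " so the merged
       stdout can be split per process afterwards with grep. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("%d: starting work\n", rank);
        printf("%d: finished\n", rank);
        MPI_Finalize();
        return 0;
    }

Run it as, say, "mpirun -np 4 ./tag_output > file" and the grep
commands above recover each rank's stream; for larger runs, anchor the
pattern (grep '^10:') so rank 0 lines don't match rank 10's.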
When I use the Intel compilers, I have to add to my PATH and
LD_LIBRARY_PATH before using "mpif90". I wonder if this needs to be
done in your case?
Douglas.
On Mon, Jan 19, 2009 at 05:49:53PM +0100, Olivier Marsden wrote:
> Hello,
>
> I'm trying to compile ompi 1.3rc7 with the sun studio express
second, and doubles after each sleep
up to a maximum of 100 milliseconds. Interestingly, when I left the
sleep time at a constant 1 millisecond, the run load went up
significantly; it varied over the range 1.3 -> 1.7.
I have attached my MPI_Send.c and MPI_Recv.c. Comments welcome and
appreciated.
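
For reference, the loop structure described above looks roughly like
this in C (a sketch, assuming a 1 ms starting sleep and the 100 ms cap
mentioned; not the attached MPI_Recv.c itself):

    /* recv_backoff.c - poll for a message, sleeping between probes,
       doubling the sleep after each miss up to a 100 ms cap. */
    #include <mpi.h>
    #include <stdio.h>
    #include <time.h>

    static void sleep_ms(long ms)
    {
        struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
    }

    int main(int argc, char **argv)
    {
        int rank, flag = 0, msg = 7;
        long wait_ms = 1;                    /* start at 1 ms */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            while (!flag) {
                MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);
                if (!flag) {
                    sleep_ms(wait_ms);
                    wait_ms *= 2;            /* back off ...      */
                    if (wait_ms > 100)
                        wait_ms = 100;       /* ... up to 100 ms  */
                }
            }
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("1: got %d\n", msg);
        }
        MPI_Finalize();
        return 0;
    }

The tradeoff is latency for idle CPU: the worst case adds up to 100 ms
before a message is noticed.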
With the blocking feature you describe, I could double the number of
number-cruncher jobs running at one time, thus doubling throughput.
Regards,
Douglas.
--
Douglas Guptill
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Dalhousie University
Halifax, NS, B3H 4J1, Canada
d". "mpi_send", according to my understanding of the MPI
standard, may not exit until a matching "mpi_recv" has been initiated,
or completed. At least that is the conclusion I came to.
However my complaint - sorry, I wish I could think of a better word -
remains. It appe
On Mon, Dec 08, 2008 at 08:56:59PM +1100, Terry Frankcombe wrote:
> As Eugene said: Why are you desperate for an idle CPU?
So I can run another job. :-)
Douglas.
--
Douglas Guptill
Research Assistant, LSC 4640 email: douglas.gupt...@dal.ca
Oceanography Department fax: 902-494-3877
Hello Eugene:
On Sun, Dec 07, 2008 at 11:15:21PM -0800, Eugene Loh wrote:
> Douglas Guptill wrote:
>
> >Hi:
> >
> >I am using openmpi-1.2.8 to run a 2 processor job on an Intel
> >Quad-core cpu. Opsys is Debian etch. I am reasonably sure that, most
> >of
nmpi-intel-noopt
And I still get, for each run, two cpus at 100%.
My goal is to get the system to a minimum-usage state, where only one
cpu is busy when one process is waiting for results from the
other.
Can anyone suggest if this is possible, and if so, how?
Thanks,
Douglas.
--
Douglas Guptill