9.1.045) and gcc 4.1.0 20060304 (aka Red Hat 4.1.0-3). I
have also tried earlier versions of OpenMPI and found the same bug
(1.1.2 and 1.2.2).
Using -verbose didn't provide any additional output. I'm happy to help
track down whatever is causing this.
Many thanks,
Barry Rountree
On Sun, Jan 13, 2008 at 09:54:47AM -0500, Barry Rountree wrote:
> Hello,
>
> The following command
>
> mpirun -np 2 -hostfile ~/hostfile uptime
>
> will occasionally hang after completing. The expected output appears on
> the screen, but mpirun needs a SIGKILL to terminate.
d ompi directory.
>
> Any assistance on this matter would be appreciated,
>
> Mark E. Kosmowski
I'd posted a message earlier about intermittent hangs -- perhaps it's
the same issue. If you run a hundred instances or so of "
> We thought we had fixed
> all of those on the 1.2 branch, but perhaps there's some other weird
> race condition happening that doesn't happen on our test machines...
I'm happy to help. I've got a paper submission deadline on Tuesday, so
it might not be until midweek.
Thanks for the reply,
d at the end of time (look for
> function names like iof_flush or similar). We thought we had fixed
> all of those on the 1.2 branch, but perhaps there's some other weird
> race condition happening that doesn't happen on our test machines...
>
>
>
> On Jan 1
On Thu, Jan 24, 2008 at 03:01:40AM -0500, Barry Rountree wrote:
> On Fri, Jan 18, 2008 at 08:33:10PM -0500, Jeff Squyres wrote:
> > Barry --
> >
> > Could you check what apps are still running when it hangs? I.e., I
> > assume that all the uptimes are dead;
ave threading support actually working.
Well, I'm happy the problem is obvious and the workaround is easy. I'll
compile that version tonight and try it out when I get some time on the
cluster tomorrow.
Thanks for the help!
Barry
>
> On Jan 24, 2008 3:25 AM, Barry Rountree wrote:
On Thu, Jan 24, 2008 at 10:09:51PM -0500, Barry Rountree wrote:
> On Thu, Jan 24, 2008 at 04:03:49PM -0500, Tim Mattox wrote:
> > Hello Barry,
> > I am guessing you are trying to use a threaded build of Open MPI...
> >
> > Unfortunately, the threading support in Open MPI
or energy savings. Are you
volunteering to test a patch? (I've got four other papers I need to
get finished up, so it'll be a few weeks before I start coding.)
Barry Rountree
Ph.D. Candidate, Computer Science
University of Georgia
>
>
>
>
> Jeff Squyres wrote:
>
milliseconds, and then use some other method that
> sleeps not for a fixed time, but until new messages arrive.
Well, it sounds like you can get to this before I can. Post your patch
here and I'll test it on the NAS suite, UMT2K, Paradis, and a few
synthetic benchmarks I've written.
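To make that concrete, here's roughly what I have in mind -- a sketch of my own, not Open MPI's actual progress loop, and the backoff constants are arbitrary: probe for the message, and if nothing has arrived yet, sleep briefly with a capped exponential backoff instead of spinning.

/* Sketch of poll-with-backoff instead of busy-waiting. Illustrative
 * only -- not Open MPI internals. Assumes POSIX nanosleep(). */
#include <mpi.h>
#include <time.h>

/* Wait for a message without spinning at 100% CPU: probe, and if
 * nothing has arrived, sleep briefly before probing again. */
static void wait_for_message(int source, int tag, MPI_Comm comm,
                             MPI_Status *status)
{
    int flag = 0;
    long delay_ns = 1000;               /* start at 1 microsecond */
    const long max_delay_ns = 1000000;  /* cap the sleep at 1 ms   */

    while (1) {
        MPI_Iprobe(source, tag, comm, &flag, status);
        if (flag)
            return;                     /* message has arrived */

        struct timespec ts = { 0, delay_ns };
        nanosleep(&ts, NULL);           /* yield the CPU */

        if (delay_ns < max_delay_ns)    /* exponential backoff */
            delay_ns *= 2;
    }
}

The cap bounds the worst-case wakeup latency at a millisecond, while the backoff keeps an idle rank's CPU (and energy) use low -- which is exactly the tradeoff this thread is about.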
On Thu, Apr 24, 2008 at 11:17:30AM -0400, George Bosilca wrote:
> Well, blocking or not blocking, this is the question!!! Unfortunately, it's
> more complex than this thread seems to indicate. It's not that we didn't
> want to implement it in Open MPI, it's that at one point we had to make a
> choice
On Wed, May 07, 2008 at 12:33:59PM -0400, Alberto Giannetti wrote:
> I need to log application-level messages on disk to trace my program
> activity. For better performance, one solution is to dedicate one
> processor to the actual I/O logging, while the other working
> processors would trace
On Wed, May 07, 2008 at 01:51:03PM -0400, Alberto Giannetti wrote:
>
> On May 7, 2008, at 1:32 PM, Barry Rountree wrote:
>
> > On Wed, May 07, 2008 at 12:33:59PM -0400, Alberto Giannetti wrote:
> >> I need to log application-level messages on disk to trace my program
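Something along these lines is what I'd try first -- a minimal sketch where rank 0 does nothing but receive log lines and append them to a file, and every other rank sends its messages there. The tag, buffer size, file name, and "DONE" sentinel are all arbitrary choices of mine:

/* Minimal sketch of a dedicated logger rank. Illustrative only;
 * the tag, buffer size, and file name are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define LOG_TAG  42
#define LOG_MAX  256

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Logger: write each incoming line; expect one "DONE"
         * sentinel from every worker before shutting down. */
        FILE *fp = fopen("app.log", "w");
        int workers = size - 1;
        char buf[LOG_MAX];
        MPI_Status st;
        while (workers > 0) {
            MPI_Recv(buf, LOG_MAX, MPI_CHAR, MPI_ANY_SOURCE,
                     LOG_TAG, MPI_COMM_WORLD, &st);
            if (strcmp(buf, "DONE") == 0)
                workers--;
            else
                fprintf(fp, "[rank %d] %s\n", st.MPI_SOURCE, buf);
        }
        fclose(fp);
    } else {
        /* Worker: send a couple of log lines, then the sentinel. */
        char msg[LOG_MAX];
        snprintf(msg, LOG_MAX, "starting work");
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, LOG_TAG, MPI_COMM_WORLD);
        snprintf(msg, LOG_MAX, "finished work");
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, LOG_TAG, MPI_COMM_WORLD);
        MPI_Send("DONE", 5, MPI_CHAR, 0, LOG_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Since these messages are small, MPI_Send should return as soon as the data is buffered, so the workers never block on disk I/O. Whether that beats having each rank write its own file depends on your message rate, so it's worth measuring both.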
On Wed, May 07, 2008 at 03:47:24PM -0400, Sang Chul Choi wrote:
> Hi,
>
> I tried to run a hello world MPI example, but it just hangs at
> MPI_Init. My machine runs Ubuntu Linux, and I installed an Open MPI
> package. Compiling was okay, but running the code hangs at
> MPI_Init.
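For reference, here's the sort of minimal program being described (any equivalent hello world will do). If even this hangs in MPI_Init, the problem is in the Open MPI installation or the network configuration, not the code:

/* Minimal MPI hello world. If this hangs in MPI_Init, suspect the
 * installation or network setup rather than the program. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Compile with "mpicc hello.c -o hello" and run with "mpirun -np 2 ./hello". A common culprit when MPI_Init hangs is a firewall or an unexpected network interface, so that's a good first thing to check.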
would accomplish
this, but that didn't work on this system.
What's the correct way of doing this?
Thanks much,
Barry Rountree
University of Georgia
OpenMPI 1.2.8, Linux+gcc, self+tcp.
Relevant bit of the hostfile looks like:
opt00 slots=1 max_slots=1
opt01 slots=1 max_slots=1
Original message
>Date: Mon, 8 Dec 2008 11:47:19 -0500
>From: George Bosilca
>Subject: Re: [OMPI users] How to force eager behavior during Isend?
>To: Open MPI Users
>
>Barry,
>
>These values are used deep inside the Open MPI library, in order to
>define how we handle the messages.
On Monday 08 December 2008 02:44:42 pm George Bosilca wrote:
> Barry,
>
> If you set the eager size large enough, the isend will not return
> until the data is pushed into the network layer.
That's exactly what I want it to do -- good. I've set the eagerness to 2MB,
but for messages 64k and up,
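In case it's useful to anyone else on the list, here's the kind of test I'm using to check whether a send actually went eagerly. It's a sketch of my own (the message size, tag, and sleep intervals are arbitrary): rank 0 posts an MPI_Isend and tests it for completion while rank 1 deliberately delays posting its MPI_Recv. If the request completes during that window, the message was buffered eagerly rather than waiting for the rendezvous.

/* Two-rank eager-send probe (illustrative sketch). Rank 0 Isends
 * while rank 1 delays its Recv; if the Isend completes before the
 * Recv is posted, the message went eagerly. Run with -np 2. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define N (64 * 1024)   /* message size to test, in bytes */

int main(int argc, char **argv)
{
    int rank;
    static char buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Request req;
        int done = 0, i;
        memset(buf, 'x', N);
        MPI_Isend(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
        /* Poll for ~1 second; rank 1 is still sleeping, so no
         * matching Recv has been posted yet. */
        for (i = 0; i < 100 && !done; i++) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            usleep(10000);
        }
        printf("%d-byte Isend %s before the Recv was posted\n", N,
               done ? "completed (eager)" : "did not complete (rendezvous)");
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        sleep(2);       /* ensure rank 0 finishes polling first */
        MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

Rerunning this at different message sizes while varying the eager limit (btl_tcp_eager_limit in my case, since I'm on the tcp BTL) shows where the switch to rendezvous actually happens.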