On Thu, 11 Mar 2010 12:44:01 -0500, "Cole, Derek E"
wrote:
> I am replying to this via the daily-digest message I got. Sorry it
> wasn't sooner... I didn't realize I was getting replies until I got
> the digest. Does anyone know how to change it so I get the emails as
> you all send them?
We rely heavily on OpenMPI's ability to 'fall though' to the next best
option on our cluster. Example we have some IB (verbs nodes) and most
have TCP.
Recently we added some qlogic IB that uses PSM to get good
performance. We built OpenMPI to include PSM in addition to verbs,
and TCP. W
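For a mixed-fabric cluster like this, the fall-through can also be steered explicitly at launch time with MCA parameters. A minimal sketch (component names are the Open MPI 1.4-era ones; `./app` and the rank counts are placeholders):

```shell
# TCP only (e.g. on the ethernet-only nodes); "self" is needed for loopback
mpirun --mca btl tcp,self -np 16 ./app

# QLogic nodes: use the PSM MTL via the "cm" PML
mpirun --mca pml cm --mca mtl psm -np 16 ./app

# Exclude the verbs BTL but let every other transport be considered
mpirun --mca btl ^openib -np 16 ./app
```

Without any of these, Open MPI probes the available components on each node and picks the best one that can reach each peer, which is the fall-through behavior described above.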
I am replying to this via the daily-digest message I got. Sorry it wasn't
sooner... I didn't realize I was getting replies until I got the digest. Does
anyone know how to change it so I get the emails as you all send them?
>>Unless your computation is so "embarrassingly parallel" that each proc
Perfect.. that is exactly what I wanted to know.. that it was an issue with
the program, rather than an issue with openmpi..
Thanks, Jeff.
Matt
_
Matthew MacManes
PhD Candidate
University of California- Berkeley
Museum of Vertebrate Zoology
Phone: 510-495-5833
Lab W
Debugging this is probably not going to be within the scope of Open MPI -- it
looks like your app is seg faulting inside some routine called DoCharset. If
you're getting corefiles, you might try loading them up in the debugger and see
what is going wrong, etc. I.e., standard debugging rules apply.
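A minimal corefile workflow on Linux, assuming the app was built with symbols (`-g`); names and paths here are illustrative:

```shell
ulimit -c unlimited        # allow corefiles in this shell before reproducing
mpirun -np 4 ./app         # crash again; each rank dumps core in its cwd
gdb ./app core             # the file may be named core.<pid> on some systems
# then inside gdb, "bt" prints the backtrace, e.g. down into DoCharset
```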
I "unlimited" my stack space and got a different error, which maybe is a clue.
I'm not sure how to vary the rank, like you suggested, so if you have a tip
that would be great.
Here is the new error:
[macmanes:05298] *** Process received signal ***
[macmanes:05298] Signal: Segmentation fault (11)
[mac
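On varying the rank: one common trick (described in the Open MPI debugging FAQ) is to give each rank its own debugger window, or at least to tag output by rank so you can see which process hits the signal:

```shell
# one xterm + gdb per rank (needs X forwarding to your display)
mpirun -np 4 xterm -e gdb ./app

# or tag every output line with its rank to spot the failing process
mpirun -np 4 --tag-output ./app
```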
2010/3/11 Ralph Castain
> Yeah, it was a bug in the parser - fix scheduled for 1.4.2 release.
>
> Thanks!
> Ralph
>
OK, thanks Ralph for the test and the quick analysis.
Regards,
Olivier
>
> On Mar 11, 2010, at 4:32 AM, Olivier Riff wrote:
>
> Hello Ralph,
>
> Thanks for your quick reply.
>
On Thursday 11 March 2010, Matthew MacManes wrote:
> Can anybody tell me if this is an error associated with openmpi, versus an
> issue with the program I am running (MRBAYES,
> https://sourceforge.net/projects/mrbayes/)
>
> We are trying to run a large simulated dataset using 1,000,000 bases
> divided up into 1000 genes, 5 taxa..
Yeah, it was a bug in the parser - fix scheduled for 1.4.2 release.
Thanks!
Ralph
On Mar 11, 2010, at 4:32 AM, Olivier Riff wrote:
> Hello Ralph,
>
> Thanks for your quick reply.
> Sorry I did not mention the version: it is v1.4 (which indeed is not the
> very last one).
> I would appreciate it if you could make a short test.
Can anybody tell me if this is an error associated with openmpi, versus an
issue with the program I am running (MRBAYES,
https://sourceforge.net/projects/mrbayes/)
We are trying to run a large simulated dataset using 1,000,000 bases divided
up into 1000 genes, 5 taxa.. An error is occurring, but w
On Mar 10, 2010, at 9:25 PM, abc def wrote:
> --
> Fatal error in MPI_Comm_spawn: Other MPI error, error stack:
> MPI_Comm_spawn(130).: MPI_Comm_spawn(cmd="./mpitest-2.ex",
> argv=0x75e920, maxprocs=1, info=0x9c00, root=0, MPI_COMM_SELF,
> intercomm=0x7fff
Hi,
On 11.03.2010 at 03:03, Brian Smith wrote:
> This may seem like an odd query (or not; perhaps it has been brought up
> before). My work recently involves HPC usability, i.e. making things
> easier for new users by abstracting away the scheduler. I've been
> working with DRMAA for interfacing wi
Hello Ralph,
Thanks for your quick reply.
Sorry I did not mention the version: it is v1.4 (which indeed is not
the very last one).
I would appreciate it if you could make a short test.
Thanks and Regards,
Olivier
2010/3/10 Ralph Castain
> Probably a bug - I don't recall if/when anyone actuall
On Wed, 10 Mar 2010 22:25:43 -0500, Gus Correa wrote:
> Ocean dynamics equations, at least in the codes I've seen,
> normally use "pencil" decomposition, and are probably harder to
> handle using 3D "chunk" decomposition (due to the asymmetry imposed by
> gravity).
There is also a lot to be said
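To make the trade-off concrete, here is a back-of-the-envelope comparison of per-rank halo (exchange) area for the two layouts on an n^3 grid; this is generic surface-to-volume arithmetic, not taken from any particular ocean code:

```python
# Per-rank halo face area for an n^3 grid on p ranks.
# "pencil": split x and y only, keeping each full vertical column on one
# rank (which sidesteps the gravity-induced asymmetry mentioned above).
# "chunk": split all three dimensions into cubes.
# For simplicity, assume p is a perfect square / perfect cube respectively.

def pencil_halo(n, p):
    side = n / p ** 0.5        # horizontal edge of each rank's column
    return 4 * side * n        # four vertical faces, each side x n

def chunk_halo(n, p):
    side = n / p ** (1 / 3)    # edge of each rank's cube
    return 6 * side * side     # six faces, each side x side

if __name__ == "__main__":
    n, p = 1024, 64
    print(round(pencil_halo(n, p)))   # 524288
    print(round(chunk_halo(n, p)))    # 393216
```

The chunk layout moves less data per rank at scale, but every vertical face it creates cuts across the stratified direction; the pencil layout trades extra halo volume for keeping each water column on a single rank.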
Gus Correa wrote:
> I am not an MPI expert, just a user/programmer,
> so I may be dead wrong in what I wrote.
> Please, correct me if I am wrong.
It all sounds not only right to me but also well put.
Thank you very much for the examples.
I tried the following program, based on the guidance here and additional
information I found through google:
---
PROGRAM mpitest1
IMPLICIT none
CHARACTER*6 :: dir
CHARACTER*1 :: crank
! MPI parameters
INCLUDE 'mpif.h'
INTEGER :