Hi Derek
Cole, Derek E wrote:
Thanks for the ideas.
I did finally end up getting this working by sending back to
the master process. It's quite ugly, and added a good bit of
MPI to the code, but it works for now,
and I will revisit this later.
Is the MPI code uglier than the OO-stuff you men
Thank you, Ralph.
I have read the wiki and the man pages, but I am still not sure I
understand what is going on in my example. I cannot filter the slots
allocated by SGE. I also think that there is a deviation from the
behavior described on the wiki (specifically, example 5 from the top in
sectio
Jeff,
In my case, it was the firewall. It was restricting communication to
ssh only between the compute nodes. I appreciate the help.
Rob
Jeff Squyres (jsquyres) wrote:
Those are normal ssh messages, I think - an ssh session may try
multiple auth methods before one succeeds.
You're abs
Thanks for the ideas. I did finally end up getting this working by sending back
to the master process. It's quite ugly, and it added a good bit of MPI to the
code, but it works for now, and I will revisit this later. I am not sure what
the file system is; I think it is XFS, but I don't know much ab
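For anyone curious what that "send back to the master" pattern can look like, here is a minimal sketch (not Derek's actual code; the variable names are illustrative) that collects one result per rank on rank 0 with MPI_Gather:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = 1.0 * rank;        /* stand-in for the real work */
        double *all = NULL;
        if (rank == 0)
            all = malloc(size * sizeof(double));

        /* Collect every rank's result on rank 0 so only the master
           touches the file system. */
        MPI_Gather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0,
                   MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("rank %d -> %g\n", i, all[i]);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }

If each rank produces a different amount of data, MPI_Gatherv (or plain point-to-point sends) replaces the single MPI_Gather call.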
Had I read your original note more closely, I would have spotted the
issue. How a hostfile is used changed between OMPI 1.2 and the 1.3 (and above)
releases, per user requests. It was actually the SGE side of the community that
led the change :-)
You can get a full description of how
>> However, there are cases when being able to specify the hostfile is
>> important (hybrid jobs, users with MPICH jobs, etc.).
> [I don't understand what MPICH has to do with it.]
This was just an example of how the different behavior of OMPI 1.4 may
cause problems. The MPICH library is not the
Serge writes:
> However, there are cases when being able to specify the hostfile is
> important (hybrid jobs, users with MPICH jobs, etc.).
[I don't understand what MPICH has to do with it.]
> For example,
> with Grid Engine I can request four 4-core nodes, that is total of 16
> slots. But I al
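A hypothetical sketch of that scenario (the parallel environment name "mpi" and the script layout are illustrative, not from the original mail):

    # request 16 slots (four 4-core nodes) from Grid Engine;
    # "mpi" is a site-specific parallel environment name
    qsub -pe mpi 16 job.sh

    # inside job.sh: start fewer ranks than the allocation holds,
    # using a hostfile to filter which allocated nodes are used
    mpirun -np 4 -hostfile hosts ./program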
Thanks for your answer, and sorry for the misunderstanding.
My question is about the performance of this implementation with respect to
its multithreading capabilities.
Of course MPI+OpenMP is one choice, but (from what I understand) you should
also be able to get good performance using MPI multithreading
On 4/7/2010 1:20 AM, Piero Lanucara wrote:
Dear OpenMPI team
how much performance should we expect using the MPI multithread
capability (MPI_Init_thread with MPI_THREAD_MULTIPLE)?
It seems that there is no performance gain in some simple tests, such
as multiple MPI channels activated, overlapping comm and computa
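For reference, requesting full thread support looks like the minimal sketch below; the library may grant less than what was asked for, so the returned "provided" level has to be checked:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* Ask for the highest thread level; MPI may grant less. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("granted thread level %d, not MPI_THREAD_MULTIPLE\n",
                   provided);
        MPI_Finalize();
        return 0;
    }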
> If you run your cmd with the hostfile option and add
> --display-allocation, what does it say?
Thank you, Ralph.
This is the command I used inside my submission script:
mpirun --display-allocation -np 4 -hostfile hosts ./program
And this is the output I got.
Data for node: Name: node03
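For context, the hosts file named on that mpirun line is just a plain-text node list, one name per line with an optional slot count; a hypothetical example for 4-core nodes:

    node03 slots=4
    node04 slots=4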
Dear OpenMPI team
how much performance should we expect using the MPI multithread capability
(MPI_Init_thread with MPI_THREAD_MULTIPLE)?
It seems that there is no performance gain in some simple tests, such as
multiple MPI channels activated, overlapping comm and computation, and so on
Thanks in advance
Piero
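As an aside on the "overlapping comm and computation" test mentioned here, the usual pattern is nonblocking point-to-point, as in the minimal ring-exchange sketch below; whether any real overlap happens depends on the MPI library's progress engine:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double sendbuf = rank, recvbuf = 0.0, work = 0.0;
        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;
        MPI_Request reqs[2];

        /* Start the exchange, then compute while it is in flight. */
        MPI_Irecv(&recvbuf, 1, MPI_DOUBLE, left, 0, MPI_COMM_WORLD,
                  &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD,
                  &reqs[1]);

        for (int i = 0; i < 1000000; i++)   /* independent computation */
            work += i * 1e-9;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("rank %d received %g (work %g)\n", rank, recvbuf, work);
        MPI_Finalize();
        return 0;
    }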
Indeed, it seems that it addresses what I want!
I read the discussions on the MPI Forum list, which are very interesting.
I began to develop a termination code before seeing that the use of
MPI_Abort() should be sufficient.
But I didn't post anything, since my case is particular: I have iterative
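A minimal sketch of that simpler route (the convergence flag is hypothetical): MPI_Abort() tears down every process on the communicator, so no hand-rolled termination protocol is needed:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, converged = 0;   /* stand-in for the real test */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ... iterative solve would go here ... */

        if (rank == 0 && !converged) {
            fprintf(stderr, "no convergence, aborting all ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);   /* kills every process */
        }
        MPI_Finalize();
        return 0;
    }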