Hi Paul
I think you should clarify whether you mean you want your application to
send all its data back to a particular rank, which then does all IO (in
which case the answer is any MPI implementation can do this... it's a
matter of how you code the app), or if you want the application to know
not
We are eliminating the use of rsh at our company and I'm trying to test out
Open MPI with the NASA Overflow program using ssh.
I've been testing other MPIs (MPICH1 and LAM/MPI) and if I tried to use rsh
the programs would just die when running under PBS. I submitted my Overflow
job using --mca
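In case it is useful: with a recent Open MPI the rsh/ssh launcher can be told
to use ssh explicitly, e.g.

  mpirun --mca plm_rsh_agent ssh -np 4 ./a.out

(the parameter was called pls_rsh_agent in the older 1.2 series, so please
check ompi_info for your version; the executable name above is just a
placeholder). Also, if your Open MPI was built with Torque/TM support, mpirun
started from inside a PBS job normally launches through TM and does not use
rsh or ssh at all.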
In some of the testing Eloi did earlier, he disabled eager RDMA and
still saw the issue.
--td
Shamis, Pavel wrote:
Terry,
Ishai Rabinovitz is the HPC team manager (I added him to CC).
Eloi,
Back to the issue. I have seen a very similar issue a long time ago on some
hardware platforms that support relaxed ordering memory operations.
Pasha,
Thanks for your help.
I'm not aware of any such memory configuration on our customer's new
cluster (each compute node runs the Red Hat 5.x operating system on
Intel X5570 processors).
Anyway, I've already tried to deactivate eager_rdma, but this didn't
solve the hdr->tag=0 issue
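For reference, and assuming I have the parameter name right, eager RDMA in
the openib BTL is normally switched off with

  mpirun --mca btl_openib_use_eager_rdma 0 ...

which, as noted above, did not make the hdr->tag=0 error go away.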
Hi Paul
> Is it possible to configure/run OpenMPI in such a way that only _one_
> process (e.g. master) performs real disk I/O, and the other processes send
> the data to the master, which works as an agent?
It is possible to run OpenMPI this way, but it is not a matter of configuration,
but of implementation.
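Something along these lines, for example: a minimal sketch in which every
worker sends its block to rank 0 and rank 0 is the only process that touches
the disk (buffer sizes and the file name are invented for the example, and
error checking is omitted):

/* one_writer.c: minimal sketch, names and sizes invented for the example.
 * Every worker sends its buffer to rank 0, and rank 0 is the only process
 * that performs real disk I/O. */
#include <stdio.h>
#include <mpi.h>

#define N 4                                     /* doubles per rank, made up */

int main(int argc, char **argv)
{
    int rank, nprocs, i, src;
    double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 0; i < N; i++)
        buf[i] = rank * 10.0 + i;               /* fake per-rank data */

    if (rank == 0) {
        FILE *fp = fopen("result.dat", "wb");   /* hypothetical file name */
        fwrite(buf, sizeof(double), N, fp);     /* rank 0's own block     */
        for (src = 1; src < nprocs; src++) {
            MPI_Recv(buf, N, MPI_DOUBLE, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            fwrite(buf, sizeof(double), N, fp); /* block from rank 'src'  */
        }
        fclose(fp);
    } else {
        MPI_Send(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Rank 0 simply receives one block per worker and appends it to the file, so
only one process ever opens the file.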
Terry,
Ishai Rabinovitz is the HPC team manager (I added him to CC).
Eloi,
Back to the issue. I have seen a very similar issue a long time ago on some
hardware platforms that support relaxed ordering memory operations. If I
remember correctly, it was some IBM platform.
Do you know if relaxed memory ordering is
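If this refers to PCIe relaxed ordering on the HCA (only a guess at what is
being asked), one way to check is to look at the Device Control capability in
lspci, run as root:

  lspci -vvv | grep -i rlxdord

where RlxdOrd+ on the DevCtl line means relaxed ordering is enabled for that
device.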
Pasha, do you by any chance know who at Mellanox might be responsible
for OMPI work?
--td
Eloi Gaudry wrote:
Hi Nysal, Terry,
Thanks for your input on this issue.
I'll follow your advice. Do you know of any Mellanox developer I could
discuss this with, preferably someone who has spent some time inside the
openib btl?
Hi Nysal, Terry,
Thanks for your input on this issue.
I'll follow your advice. Do you know of any Mellanox developer I could
discuss this with, preferably someone who has spent some time inside the
openib btl?
Regards,
Eloi
On 29/09/2010 06:01, Nysal Jan wrote:
Hi Eloi,
We discussed this issue during the weekly developer meeting
Dear OpenMPI developer,
We have a question about the possibility of using MPI IO (and possibly
regular I/O) on clusters which do *not* have a common filesystem
(network filesystem) on all nodes.
A common filesystem is generally NOT a hard precondition for using OpenMPI:
http://www.open-mpi.org/faq/
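If the data can be funnelled to one rank as in the replies above, even the
MPI-IO calls can be confined to that rank by opening the file on
MPI_COMM_SELF, so only the node that actually holds the disk needs to see the
filesystem. A minimal sketch (file name and buffer sizes invented for the
example, error checking omitted):

/* self_io.c: minimal sketch, names and sizes invented for the example.
 * The data is gathered onto rank 0, and rank 0 alone opens the file on
 * MPI_COMM_SELF, so no other node needs access to the target filesystem. */
#include <stdlib.h>
#include <mpi.h>

#define N 8                                 /* doubles per rank, made up */

int main(int argc, char **argv)
{
    int rank, nprocs, i;
    double buf[N];
    double *all = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (i = 0; i < N; i++)
        buf[i] = rank + 0.001 * i;          /* fake per-rank data */

    if (rank == 0)
        all = (double *) malloc((size_t) nprocs * N * sizeof(double));

    /* every process ships its block to the master ... */
    MPI_Gather(buf, N, MPI_DOUBLE, all, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* ... and only the master performs the actual disk I/O, using MPI-IO
     * restricted to its own communicator. */
    if (rank == 0) {
        MPI_File fh;
        MPI_File_open(MPI_COMM_SELF, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write(fh, all, nprocs * N, MPI_DOUBLE, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        free(all);
    }

    MPI_Finalize();
    return 0;
}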
Hi Eloi,
We discussed this issue during the weekly developer meeting & there were no
further suggestions, apart from checking the driver and firmware levels. The
consensus was that it would be better if you could take this up directly
with your IB vendor.
Regards
--Nysal
On Mon, Sep 27, 2010 at 8