Francesco,
We use modules (http://modules.sourceforge.net/) to manage 14 different
OpenMPI versions on the same cluster, along with their associated
applications. This is a nice way to establish dependancies between apps
and libs and keep things organized.
Good luck.
--andy
$ module avail
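The per-version layout Andy describes can be sketched without modules at all, since a modulefile essentially just edits PATH. A minimal illustration (the paths and the stub mpirun scripts are hypothetical, for demonstration only):

```shell
# Each Open MPI build lives under its own install prefix; "loading" a
# version just means putting that prefix's bin/ first on PATH (and its
# lib/ on LD_LIBRARY_PATH). Stub mpirun scripts stand in for real builds.
root=$(mktemp -d)
mkdir -p "$root/openmpi-1.2.3/bin" "$root/openmpi-1.2.4/bin"
printf '#!/bin/sh\necho 1.2.3\n' > "$root/openmpi-1.2.3/bin/mpirun"
printf '#!/bin/sh\necho 1.2.4\n' > "$root/openmpi-1.2.4/bin/mpirun"
chmod +x "$root/openmpi-1.2.3/bin/mpirun" "$root/openmpi-1.2.4/bin/mpirun"
# Equivalent of "module load openmpi/1.2.4":
PATH="$root/openmpi-1.2.4/bin:$PATH"
mpirun    # prints 1.2.4
```

A real modulefile would do the same `prepend-path PATH ...` on load and undo it on unload, which is what keeps 14 versions from colliding.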
Is it possible to have two different compilations of openmpi on the same
machine (dual-opterons, Debian Linux etch)?
On that parallel computer sander.MPI (Amber9) and openmpi 1.2.3 have both been
compiled with Intel Fortran 9.1.036.
Now, I wish to install DOCK6 on this machine and I am advised th
Jeff Squyres wrote:
What name/address do we want? d...@open-mpi.org? (or suggest an
alternative)
Sounds right to me. Only alternative might be docs_t...@open-mpi.org
On Sep 13, 2007, at 4:17 PM, richard.fried...@sun.com wrote:
As more people start chiming in wanting to help with OpenMPI
documentation (a good thing!), maybe we should think about starting a
forum or separate email list just for this discussion.
At least initially, to get the ball rolling.
D
As more people start chiming in wanting to help with OpenMPI
documentation (a good thing!), maybe we should think about starting a
forum or separate email list just for this discussion.
At least initially, to get the ball rolling.
Do we have the capability of creating a new mail list at open-mpi.
Jeff,
Count us in at the UofA. My initial impressions of Open MPI are very
good and I would be open to contributing to this effort as time allows.
Thanks!
Jeff F. Pummill
Senior Linux Cluster Administrator
University of Arkansas
Fayetteville, Arkansas 72701
(479) 575 - 4590
http://hpc.uark.ed
So there are at least a few people who are interested in this effort
(keep chiming in if you are interested so that we can get a tally of
who would like to be involved).
What kind of resources / organization would be useful for this
group? Indiana University graciously hosts all of Open MP
Jeff,
I would also be interested. I am getting questions from my customers
about the location of documentation.
Thanks,
Pat
Jeff Squyres
On Thu, Sep 13, 2007 at 11:15:47AM -0500, Tim Campbell wrote:
> workstations. When mpirun tries to start the processes on certain
> nodes I get the following error output.
>
> [sr70][0,1,2][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect] connect() failed with errno=111
>
Thanks.
I think I figured out the problem. I found that in my
.ssh/known_hosts there were several "bad" keys associated with some of the
machines in the gridengine pool. My hypothesis is that when mpirun
was establishing the connection topology of the processes there was
some process pa
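For anyone hitting the same stale-key issue: `ssh-keygen -R` removes every key for a given host from a known_hosts file. A sketch against a throwaway file (the hostname `sr70` and the fake key are illustrative, not from the thread):

```shell
# Remove stale keys for host "sr70" from a known_hosts file.
# -R deletes every entry for that host; -f points at the file to edit
# (it defaults to ~/.ssh/known_hosts). A backup is left in <file>.old.
kh=$(mktemp)
printf 'sr70 ssh-rsa AAAAB3NzaC1yc2EFAKEKEYFORDEMO\n' > "$kh"
ssh-keygen -R sr70 -f "$kh"
grep -c '^sr70 ' "$kh" || true    # no entries remain
```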
Hi Tim,
You could try setting -mca pls_gridengine_verbose 1 to show whether SGE
is able to start the ORTE daemons on the remote nodes successfully.
It seems you are having the same problem another user asked about
previously. Perhaps you may want to follow that thread and check your
ifconfig settings.
Greetings,
I am using OpenMPI v1.2.3 via SGE on a network of amd64
workstations. When mpirun tries to start the processes on certain
nodes I get the following error output.
[sr70][0,1,2][btl_tcp_endpoint.c:572:mca_btl_tcp_endpoint_complete_connect] connect() failed with errno=111
[sr71
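For context, errno 111 on Linux is ECONNREFUSED: the remote node actively refused the TCP connection, usually because nothing was listening on the expected port (for example, the Open MPI daemon never started on that node) or a firewall sent a reset. A minimal sketch reproducing it, assuming nothing listens on loopback port 1 (an illustrative choice of closed port):

```shell
# Look up the symbolic name for errno 111, then provoke a refused
# connect against a loopback port with no listener.
python3 -c 'import errno; print(errno.errorcode[111])'                            # ECONNREFUSED
python3 -c 'import socket; print(socket.socket().connect_ex(("127.0.0.1", 1)))'   # 111
```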
Hi
I would like to contribute something as well.
I have about half a year of experience with OpenMPI,
and I used LAM MPI for somewhat more than half a year before that.
Jody
I would be very happy to help setup a documentation community --
goodness knows we need more/better documentation for Open MPI!
Who else would be interested?
On Sep 13, 2007, at 5:13 AM, Amit Kumar Saha wrote:
Hi Richard,
On 9/12/07, Richard Friedman wrote:
Amit:
Well, as far as I kno
Hi Richard,
On 9/12/07, Richard Friedman wrote:
>
> Amit:
> Well, as far as I know a documentation community within OpenMPI has not yet
> been formed, but maybe it is time to send out a general call to the OpenMPI
> members to see about creating one.
> I'm new to the OpenMPI community myself,