Re: [OMPI users] INSTALL bug in 64-bit build of OpenMPI Release build on Windows - has workaround

2010-02-05 Thread Marcus G. Daniels
Shiqing, Damien,

> If you already have an x86 solution, and you want to have
> another for x64, you have to start over from the CMake-GUI, select the
> 64-bit generator, i.e. "Visual Studio 9 2008 Win64", so as to generate
> a new solution in a different directory.

That was the source of my
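[For readers following along, a minimal sketch of the same step done from the command line instead of the CMake-GUI. The build64 directory name is a hypothetical choice; the generator string is the one CMake uses for 64-bit Visual Studio 2008 builds.]

    # Generate a fresh 64-bit solution in its own, separate build directory
    # (build64 is an assumed name; any empty directory will do).
    mkdir build64
    cd build64
    cmake -G "Visual Studio 9 2008 Win64" ..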

Re: [OMPI users] Anybody built a working 1.4.1 on Solaris 8 (Sparc)?

2010-02-05 Thread Iain Bason
On Feb 4, 2010, at 4:52 PM, David Mathog wrote:

> Has anybody built 1.4.1 on Solaris 8 (Sparc)? It isn't going very well
> here. If you succeeded at this, please tell me how you did it. Here is
> my tale of woe. First attempt with gcc (3.4.6 from SunFreeware) and
> ./configure --with-sge
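[As a point of reference, a hedged sketch of the kind of build attempt being described; the install prefix is an assumption, and gmake stands in for GNU make as typically installed on Solaris.]

    # First attempt as described above: gcc from SunFreeware plus SGE support.
    CC=gcc CXX=g++ ./configure --prefix=/opt/openmpi-1.4.1 --with-sge
    gmake
    gmake install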

Re: [OMPI users] Anybody built a working 1.4.1 on Solaris 8 (Sparc)?

2010-02-05 Thread Terry Dontje
We haven't tried Solaris 8 in quite some time. However, for your first issue, did you include the --enable-heterogeneous option on your configure command? Since you are mixing IA-32 and SPARC nodes, you'll want to include this so the endian issue doesn't bite you. --td
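[Concretely, a sketch of the reconfigure Terry is suggesting; the prefix is an assumption, while --with-sge and --enable-heterogeneous are the options discussed in this thread.]

    # Rebuild with heterogeneous support so mixed-endian (IA-32 + SPARC)
    # nodes can interoperate; prefix is again a hypothetical path.
    ./configure --prefix=/opt/openmpi-1.4.1 --with-sge --enable-heterogeneous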

[OMPI users] Infiniband Question

2010-02-05 Thread Mike Hanby
Howdy,

When running a Gromacs job using OpenMPI 1.4.1 on InfiniBand-enabled nodes, I'm seeing the following process listing:

\_ -bash /opt/gridengine/default/spool/compute-0-3/job_scripts/97037
 \_ mpirun -np 4 mdrun_mpi -v -np 4 -s production-Npt-323K_4CPU -o production-Npt-323K_4CPU -c pro

Re: [OMPI users] Infiniband Question

2010-02-05 Thread Jeff Squyres
Yep -- it's normal. Those IP addresses are used for bootstrapping/startup, not for MPI traffic. In particular, that "HNP URI" stuff is used by Open MPI's underlying run-time environment. It's not used by the MPI layer at all.

On Feb 5, 2010, at 2:32 PM, Mike Hanby wrote:

> Howdy,
>
> When
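[If you want to be certain the MPI layer itself is on InfiniBand, a hedged sketch using the standard Open MPI 1.4 MCA parameters; the job arguments are taken from the listing above.]

    # Restrict MPI point-to-point traffic to the InfiniBand (openib),
    # shared-memory (sm), and loopback (self) transports; the job will
    # abort rather than silently fall back to TCP if IB is unavailable.
    mpirun --mca btl openib,sm,self -np 4 mdrun_mpi -v ...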

Re: [OMPI users] hostfiles

2010-02-05 Thread Jeff Squyres
On Feb 4, 2010, at 7:55 PM, Ralph Castain wrote:

> Take a look at orte/mca/rmaps/seq - you can select it with -mca rmaps seq
>
> I believe it is documented

...if it isn't, can it be added to the man page? It might be a common mpirun / hostfile question...?

-- Jeff Squyres jsquy...@cisco.com
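[For context, a hedged sketch of the seq mapper in use; the hostfile name and contents are assumptions, but the behavior shown (one rank per hostfile line, placed in listed order) is what the seq component provides.]

    # myhosts contains one line per rank, in the exact placement order:
    #   node1
    #   node1
    #   node2
    #   node3
    mpirun -mca rmaps seq -hostfile myhosts -np 4 ./a.out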

Re: [OMPI users] DMTCP: Checkpoint-Restart solution for OpenMPI

2010-02-05 Thread Jeff Squyres
On Jan 31, 2010, at 10:39 PM, Kapil Arya wrote:

> DMTCP also supports a dmtcpaware interface (application-initiated
> checkpoints), and numerous other features. At this time, DMTCP
> supports only the use of Ethernet (TCP/IP) and shared memory for
> transport. We are looking at supporting the Inf
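[To make the dmtcpaware idea concrete, a minimal sketch of an application-initiated checkpoint. The header and function names follow the dmtcpaware.h of DMTCP 1.x as I recall it; treat the exact names and return conventions as assumptions, not a checked-against-release API.]

    #include <stdio.h>
    #include "dmtcpaware.h"   /* assumed header name from DMTCP 1.x */

    int main(void)
    {
        /* Only call into DMTCP if we were launched under its checkpoint
         * wrapper; otherwise run normally. */
        if (dmtcpIsEnabled()) {
            /* Request a checkpoint at a point the application knows is safe. */
            dmtcpCheckpoint();
            printf("resumed (or restarted) after checkpoint\n");
        }
        return 0;
    }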

Re: [OMPI users] Anybody built a working 1.4.1 on Solaris 8 (Sparc)?

2010-02-05 Thread David Mathog
Terry Dontje wrote:

> We haven't tried Solaris 8 in quite some time. However, for your first
> issue, did you include the --enable-heterogeneous option on your
> configure command?
>
> Since you are mixing IA-32 and SPARC nodes, you'll want to include this
> so the endian issue doesn't bite you.

Re: [OMPI users] DMTCP: Checkpoint-Restart solution for OpenMPI

2010-02-05 Thread Jeff Squyres
On Feb 5, 2010, at 6:40 PM, Gene Cooperman wrote:

> You're correct that we take a virtualized approach by intercepting network
> calls, etc. However, we purposely never intercept any frequently
> called system calls. So, for example, we never intercept a call
> to read() or to write() in TCP/IP,
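[For readers unfamiliar with the approach being described, a generic illustrative sketch, not DMTCP's actual code: intercepting a rarely-called socket call such as connect() via LD_PRELOAD while leaving the hot read()/write() path untouched.]

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <sys/socket.h>

    /* This wrapper runs instead of libc's connect() when the shared object
     * is loaded with LD_PRELOAD; read()/write() keep full speed because we
     * never define wrappers for them. */
    int connect(int fd, const struct sockaddr *addr, socklen_t len)
    {
        static int (*real_connect)(int, const struct sockaddr *, socklen_t);
        if (!real_connect)
            real_connect = dlsym(RTLD_NEXT, "connect");

        /* ...a real tool would record or translate the endpoint here so the
         * connection can be re-established after a restart... */

        return real_connect(fd, addr, len);
    }

[Build and run, e.g.: gcc -shared -fPIC -o intercept.so intercept.c -ldl, then LD_PRELOAD=./intercept.so ./app]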