On 5/16/2013 2:16 PM, Geraldine Hochman-Klarenberg wrote:
> Maybe I should add that my Intel C++ and Fortran compilers are
> different versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be
> an issue? Also, when I check for the location of ifort, it seems to be
> in /usr/bin - which is different
FWIW: my Mac is running 10.8.3 and works fine - though the Xcode requirement is quite true.
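For what it is worth, a quick way to check which compilers the shell actually picks up after sourcing the Intel environment - a minimal sketch, assuming the compilers are the usual icc/icpc/ifort and live under /opt/intel (neither is confirmed in this thread):

source /opt/intel/bin/compilervars.sh intel64   # as in the original post
which icc icpc ifort     # all three should resolve under /opt/intel, not /usr/bin
icc --version            # reported as 12.0.2 above
ifort --version          # reported as 13.0.2 above; the major-version mismatch is worth noting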
On May 16, 2013, at 4:14 PM, Gus Correa wrote:
> Hi Geraldine
>
> I haven't had much luck with OpenMPI 1.6.4 on Mac OS X.
> OMPI 1.6.4 built with gcc (no Fortran), but it would have
> memory problems at runtime.
Hi Geraldine
I haven't had much luck with OpenMPI 1.6.4 on Mac OS X.
OMPI 1.6.4 built with gcc (no Fortran), but it would have
memory problems at runtime.
However, my Mac is much older than yours (OS X 10.6.8) and 32-bit,
so it is not a good comparison.
In any case, take my suggestions with a grain of salt.
Dear Jeff:
Many thanks for your kind response. Since, before changing to
version 1.6.4, I had installed (via "apt-get install") a 1.5 version
of Open MPI, I was also suspicious that ompi_info was referring
to remnants of this old MPI version, though I did my best to
remove it. Nonetheless, when c
Maybe I should add that my Intel C++ and Fortran compilers are different
versions. C++ is 12.0.2 and Fortran is 13.0.2. Could that be an issue? Also,
when I check for the location of ifort, it seems to be in /usr/bin - which is
different than the C compiler (even though I have folders
/opt/intel
On May 16, 2013, at 12:30 PM, Ralph Castain wrote:
>> "... when the time comes to patch or otherwise upgrade LAM, ..."
>
> lol - fixed. thx!
That is absolutely hilarious -- it's been there for *years*...
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cis
Hi Qamar
I don't have a cluster as large as yours,
but I know Torque requires special settings for large
clusters:
http://www.clusterresources.com/torquedocs21/a.flargeclusters.shtml
My tm_.h (Torque 2.4.11) says:
#define TM_ESYSTEM 17000
#define TM_ENOEVENT 17001
#define TM_ENOTCONNECTED 17002
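As a rough sketch of how to map the numeric code back to a symbolic name, one can grep the TM header directly (the header path below is only an assumption - it depends on where Torque was installed):

grep -n '#define TM_E' /usr/include/torque/tm_.h
# error code 17000 corresponds to TM_ESYSTEM, per the defines above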
On May 16, 2013, at 9:18 AM, Gus Correa wrote:
> With the minor caveat that a sentence in the link below
> still points to ol' LAM. :)
>
> "... when the time comes to patch or otherwise upgrade LAM, ..."
lol - fixed. thx!
>
> If you have an NFS shared file system,
> if the architecture and OS
Check the Torque error constants - I'm not sure what that value means, but
Torque is reporting the error. All we do is print out the value they return if
it is an error.
On May 16, 2013, at 9:09 AM, Qamar Nazir wrote:
> Dear Support,
>
> We are having an issue with our OMPI runs. When we run
With the minor caveat that a sentence in the link below
still points to ol' LAM. :)
"... when the time comes to patch or otherwise upgrade LAM, ..."
If you have an NFS shared file system,
if the architecture and OS are the same across the nodes,
if your cluster isn't too big (for NFS latency to
Dear Support,
We are having an issue with our OMPI runs. When we run jobs on <=550
machines (550 x 16 cores), they work without any problem. As soon as
we run them on 600 or more machines we get the "plm:tm: failed to spawn
daemon, error code = 17000" error.
We are using:
OpenMPI ver: 1.
I am having trouble configuring OpenMPI-1.6.4 with the Intel C/C++ composer
(12.0.2). My OS is OSX 10.7.5.
I am not a computer whizz so I hope I can explain what I did properly:
1) In bash, I did: source /opt/intel/bin/compilervars.sh intel64
and then echo $PATH showed:
/opt/intel/composerxe-20
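In case it helps, here is a minimal configure sketch for building Open MPI 1.6.4 with the Intel compilers - the install prefix is just an example, and icc/icpc/ifort are assumed to be the intended compilers:

source /opt/intel/bin/compilervars.sh intel64
./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=$HOME/openmpi-1.6.4-intel
make -j4 all
make install
# afterwards, put $HOME/openmpi-1.6.4-intel/bin first in PATH before running ompi_info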
See http://www.open-mpi.org/faq/?category=building#where-to-install
On May 16, 2013, at 11:30 AM, Ralph Castain wrote:
> No, as long as OMPI is installed in the same location on each machine.
>
> On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi wrote:
>
>> Hi
>>
>> Do we need distributed file sys
No, as long as OMPI is installed in the same location on each machine.
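For example, a minimal sketch of a two-node run without any shared filesystem (the hostnames, paths, and install prefix below are made up for illustration) - the application binary just has to sit at the same path on every node:

scp ./my_app node02:$PWD/                 # copy the binary to the same path on node02
cat > hosts <<EOF
node01 slots=16
node02 slots=16
EOF
mpirun --prefix /opt/openmpi-1.6.4 --hostfile hosts -np 32 $PWD/my_app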
On May 16, 2013, at 8:24 AM, Reza Bakhshayeshi wrote:
> Hi
>
> Do we need a distributed file system (like NFS) when running an MPI program on
> multiple machines?
>
> thanks,
> Reza
Hi
Do we need a distributed file system (like NFS) when running an MPI program on
multiple machines?
thanks,
Reza
Following is the result of running ompi_info (via mpirun) on the three nodes.
The version is the same on all three nodes.
Package: Open MPI root@bwhead.clnet Distribution Open MPI root@bwslv01
Distribution Open MPI root@bwslv02 Distribution
Open MPI: 1.6.4 1.6.4 1.6.4
Open MPI SVN revision: r28081 r28081 r28081
Open MPI rel
Looking at your config.log, it looks like OMPI correctly determined that the
Fortran INTEGER size is 8. I see statements like this:
#define OMPI_SIZEOF_FORTRAN_INTEGER 8
Are you sure that you're running the ompi_info that you just installed? Can
you double-check to see that there's not another ompi_info earlier in your PATH?
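A quick sketch for checking that - the grep pattern is only an approximation of ompi_info's output label, which may differ slightly between versions:

type -a ompi_info                      # list every ompi_info found on the PATH
which mpicc mpif90                     # the wrappers should come from the new install, too
ompi_info | grep -i 'integer size'     # should report 8, matching OMPI_SIZEOF_FORTRAN_INTEGER above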