MK,
Hmm... what if you put CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc
on the build line?
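That suggestion can be sketched as a configure invocation (the icc path is the one from this thread; the --prefix value is a placeholder):

```shell
# Pass the full compiler path to configure so every sub-build,
# including libtool, records the same CC.
./configure CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc \
            --prefix=/opt/openmpi
make all
make install   # run from the SAME shell, so icc stays in the environment
```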
Joe
From: users-boun...@open-mpi.org on behalf of Michael Kuklik
Sent: Wed 5/27/2009 5:05 PM
To: us...@open-mpi.org
Subject: Re: [OMPI users] problem with inst
Joe
'which icc' returns the path to icc
/usr/local/intel/Compiler/11.0/083/bin/intel64/icc
and I used the environment-variable script provided by Intel,
so my shell env is OK, and I think libtool should inherit my shell environment.
Just in case, I'm sending you the env printout:
MKLROOT=/usr/local/intel/Compil
Hello,
I have the following error when I run a job:
It seems that there is no lamd running on the host cbuach.
This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for MPI programs to run
(the MPI program tried to invoke the "MPI_Init" function).
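This error comes from LAM/MPI, whose daemons must be booted before any MPI program can run; a minimal LAM session looks roughly like this (the hostfile name and process count are placeholders):

```shell
# Start the LAM run-time environment (one lamd per host in the boot schema)
lamboot -v hostfile

# Run the program under the now-running lamd daemons
mpirun -np 4 ./my_mpi_program

# Shut the daemons down when finished
lamhalt
```

If you intended to use Open MPI rather than LAM/MPI, this message usually means the program was compiled or launched against a LAM installation instead.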
I just replied to a separate email about the same issue: are you sure
that icc is in the path of the shell where you invoked "make install"?
It may be that you built OMPI in a shell that had icc set up properly
in your path, but then invoked "make install" from a shell that did
not have icc in its path.
I think Joe's question is spot on target: according to your output,
you built OMPI just fine with icc, but then the "make install" may
have been issued from a different shell where icc was not in your path.
On May 26, 2009, at 10:51 PM, Joe Griffin wrote:
MK,
Is "icc" in your path?
What i
George Bosilca wrote:
This is a problem of numerical stability, and there is no solution
for such a problem in MPI. Usually, preconditioning the input
matrix improves the numerical stability.
At the level of this particular e-mail thread, the issue seems to me to
be different. Results are
I've seen this behaviour with MUMPS on shared-memory machines as well
using MPI. I use the iterative refinement capability to sharpen the
last few digits of the solution ( 2 or 3 iterations is usually enough).
If you're not using that, give it a try, it will probably reduce the
noise you're g
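The iterative refinement mentioned above can be sketched in a few lines: solve once, then repeatedly solve for the residual and add the correction back. This is a toy illustration, not MUMPS's actual routine; the 2x2 system and the Cramer's-rule solve are stand-ins for a real factorization.

```python
def solve2(A, b):
    # Direct 2x2 solve via Cramer's rule (stand-in for a sparse factorization)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def matvec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def refine(A, b, x, iters=3):
    # Iterative refinement: compute the residual r = b - A x,
    # solve A d = r, and apply the correction d to x.
    for _ in range(iters):
        Ax = matvec(A, x)
        r = [b[i] - Ax[i] for i in range(2)]
        d = solve2(A, r)
        x = [x[i] + d[i] for i in range(2)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = solve2(A, b)      # initial solve
x = refine(A, b, x)   # 2-3 refinement steps sharpen the last digits
```

In a real MUMPS run the refinement is enabled through the solver's own control parameters rather than hand-coded like this.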
This is a problem of numerical stability, and there is no solution for
such a problem in MPI. Usually, preconditioning the input matrix
improves the numerical stability.
If you read the MPI standard, there is a __short__ section about what
guarantees the MPI collective communications provide.
vasilis wrote:
Rank 0 accumulates all the res_cpu values into a single array, res. It
starts with its own res_cpu and then adds all other processes. When
np=2, that means the order is prescribed. When np>2, the order is no
longer prescribed, and some floating-point rounding variations can start
to occur.
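The effect described above is easy to reproduce outside MPI: IEEE double addition is not associative, so summing the same per-rank contributions in a different order can change the result. A toy illustration (not Open MPI code):

```python
# Per-"rank" contributions; 1.0 is below half an ulp of 1e16
contribs = [1.0e16, 1.0, -1.0e16]

in_order  = (contribs[0] + contribs[1]) + contribs[2]  # 1e16 + 1 rounds to 1e16
reordered = (contribs[0] + contribs[2]) + contribs[1]  # cancellation happens first

print(in_order, reordered)  # 0.0 vs 1.0
```

Real accumulations differ only in the low-order digits rather than this dramatically, but the mechanism — a rounding step that depends on the order in which ranks' contributions arrive — is the same one producing the ~10^(-10) differences reported in this thread.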
Open MPI considers hosts differently than network links.
So you should only list the actual hostname in the hostfile, with
slots equal to the number of processors (4 in your case, I think?).
Once the MPI processes are launched, they each look around on the host
that they're running on and find
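A minimal Open MPI hostfile along those lines (the hostname is the one mentioned earlier in the thread; the core count follows the "4 slots" guess above):

```
# One line per physical host; slots = number of processors on that host.
# Do NOT list the same host once per network link.
cbuach slots=4
```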
vasilis wrote:
Thank you Eugene for your suggestion. I used different tags for each variable,
and now I do not get this error.
The problem now is that I am getting a different solution when I use more than
2 CPUs. I checked the matrices, and I found that they differ by a very small
amount, of the order of 10^(-10).