Thank you, Mr. Jeff Squyres.
I have conducted a simple MPI_Bcast experiment on our cluster.
The results are shown in the file attached to this e-mail.
The hostfile is:
-
hostname1 slots=4
hostname2 slots=4
hostname3 slots=4
hostname16 slots=4
-
As we can s
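(Editorial aside: for readers who want to reproduce this kind of test, below is a minimal sketch of an MPI_Bcast timing program in C. The message size, repetition count, and output format are assumptions for illustration, not details taken from the original post.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int count = 1 << 20;   /* 1 Mi ints (~4 MiB); assumed message size */
    const int reps  = 100;       /* assumed repetition count */
    double t0, t1;
    int *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    buf = malloc(count * sizeof(int));
    if (rank == 0)
        for (int i = 0; i < count; ++i)
            buf[i] = i;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int r = 0; r < reps; ++r)
        MPI_Bcast(buf, count, MPI_INT, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d ranks: average MPI_Bcast time = %g s\n", size, (t1 - t0) / reps);

    free(buf);
    MPI_Finalize();
    return 0;
}

With the hostfile above, something like "mpirun --hostfile myhostfile -np 64 ./bcast_test" (assuming 16 hosts with 4 slots each) would run it on every slot.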
vasilis wrote:
On Wednesday 27 of May 2009 8:35:49 pm Eugene Loh wrote:
At the level of this particular e-mail thread, the issue seems to me to
be different. Results are added together in some arbitrary order and
there are variations on the order of 10^-10. This is not an issue of
nu
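(Editorial aside: the 10^-10 run-to-run differences described here are consistent with summing floating-point numbers in a different order on each run, since floating-point addition is not associative. A tiny single-process C illustration with made-up data; nothing here is taken from the thread itself:)

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1000000;
    double *x = malloc(n * sizeof(double));
    double fwd = 0.0, rev = 0.0;

    /* Made-up data spanning several orders of magnitude. */
    for (int i = 0; i < n; ++i)
        x[i] = 1.0 / (1.0 + i);

    /* Identical numbers, two different summation orders. */
    for (int i = 0; i < n; ++i)
        fwd += x[i];
    for (int i = n - 1; i >= 0; --i)
        rev += x[i];

    /* The two orderings typically disagree in the last few digits,
       which is exactly the kind of difference a parallel reduction
       can show from one run to the next. */
    printf("forward    = %.17g\n", fwd);
    printf("reverse    = %.17g\n", rev);
    printf("difference = %g\n", fwd - rev);

    free(x);
    return 0;
}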
Thank you very much for your response, Gus; unfortunately I was on a trip for
some time, which is why I didn't reply immediately. I will try your suggestion.
2009/5/26 Gus Correa
> Hi Fivoskouk
>
> I don't use Ubuntu.
> However, we install OpenMPI from source with no problem on CentOS, Fedora,
> etc. A
Mike,
That may help. It depends on your initialization scripts. Some scripts
could check items and skip sections (like skipping based on stty).
Anyway, I am glad you have it working.
Regards,
Joe
On May 27, 2009, at 5:45 PM, Dimar Gonzalez wrote:
I have the following error when I run a job:
It seems that there is no lamd running on the host cbuach.
[snip]
I added the environment variables to my .bashrc:
PATH=/usr/local/openmpi-1.3.2/bin:$PATH
export PATH
LD_LIBRARY_PATH=/usr/local/open
On May 28, 2009, at 1:04 AM, Michael Kuklik wrote:
I don't know why I didn't think about it. It works with the whole
path.
I put the intel env script in the user's .bash_login file.
Do you think I should put the intel env script in the global shell
config file like /etc/profile in order for libtool to see icc?
On Wednesday 27 of May 2009 7:47:06 pm Damien Hocking wrote:
> I've seen this behaviour with MUMPS on shared-memory machines as well
> using MPI. I use the iterative refinement capability to sharpen the
> last few digits of the solution (2 or 3 iterations is usually enough).
> If you're not using
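(Editorial aside: for readers who have not used iterative refinement, the idea is to keep the existing factorization, solve again for the residual, and add the correction. A generic sketch, not MUMPS-specific; matvec() and solve() are hypothetical placeholders for whatever matrix-vector product and factorized solver you already have:)

#include <stdlib.h>

/* Hypothetical placeholders supplied elsewhere. */
void matvec(int n, const double *x, double *y);   /* y = A*x */
void solve(int n, const double *rhs, double *x);  /* x = A^{-1}*rhs, reusing the factorization */

void refine(int n, const double *b, double *x, int iters)
{
    double *r  = malloc(n * sizeof(double));
    double *dx = malloc(n * sizeof(double));

    for (int k = 0; k < iters; ++k) {     /* 2 or 3 passes are usually enough */
        matvec(n, x, r);                  /* r = A*x       */
        for (int i = 0; i < n; ++i)
            r[i] = b[i] - r[i];           /* r = b - A*x   */
        solve(n, r, dx);                  /* dx = A^{-1}*r */
        for (int i = 0; i < n; ++i)
            x[i] += dx[i];                /* x = x + dx    */
    }

    free(r);
    free(dx);
}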
On Wednesday 27 of May 2009 8:35:49 pm Eugene Loh wrote:
> George Bosilca wrote:
> > This is a problem of numerical stability, and there is no solution
> > for such a problem in MPI. Usually, preconditioning the input
> > matrix improves the numerical stability.
>
> At the level of this particula
On Wednesday 27 of May 2009 7:47:06 pm Damien Hocking wrote:
> I've seen this behaviour with MUMPS on shared-memory machines as well
> using MPI. I use the iterative refinement capability to sharpen the
> last few digits of the solution (2 or 3 iterations is usually enough).
> If you're not using
> This is a problem of numerical stability, and there is no solution for
> such a problem in MPI. Usually, preconditioning the input matrix
> improves the numerical stability.
It could be a numerical stability issue, but this would imply that I have an
ill-conditioned matrix. This is not my case.
> If
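(Editorial aside, not a suggestion made in the thread: if bitwise-reproducible sums matter more than speed, one common workaround is to gather each rank's partial result and add them in a fixed order on one rank, instead of letting MPI_Reduce combine them in whatever order it chooses. A minimal sketch for a single double:)

#include <mpi.h>
#include <stdlib.h>

/* Deterministic alternative to MPI_Allreduce(..., MPI_SUM, ...) for one double:
   gather every rank's contribution and add them in rank order on rank 0. */
double fixed_order_sum(double local, MPI_Comm comm)
{
    int rank, size;
    double total = 0.0;
    double *parts = NULL;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (rank == 0)
        parts = malloc(size * sizeof(double));

    MPI_Gather(&local, 1, MPI_DOUBLE, parts, 1, MPI_DOUBLE, 0, comm);

    if (rank == 0) {
        for (int i = 0; i < size; ++i)    /* always ranks 0, 1, 2, ... */
            total += parts[i];
        free(parts);
    }

    /* Make the result available on every rank, as MPI_Allreduce would. */
    MPI_Bcast(&total, 1, MPI_DOUBLE, 0, comm);
    return total;
}

This trades some performance for run-to-run reproducibility; the differences it removes come only from summation order, not from an error in either run.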
Joe,
I don't know why I didn't think about it. It works with the whole path.
I put the intel env script in the user's .bash_login file.
Do you think I should put the intel env script in the global shell config file
like /etc/profile in order for libtool to see icc?
Thanks for the help,
mike