Diego.
Assuming you are not properly coding the solver,
write a problem so you know the exact solution.
That is, know A (a very simple non-singular SPD matrix) and x_, where x_ != 0. Make x
a linear function or a constant so it is super easy to spot where the
bad x's show up.
I assume A has the bou
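A minimal sketch of this manufactured-solution check (the thread's solver is a Fortran CG; the Python CG below, the matrix A, and the constant x_exact are illustrative stand-ins, not Diego's code):

```python
# Manufactured-solution check: pick A and x_exact, build b = A*x_exact,
# then verify the solver recovers x_exact. All values here are illustrative.

def matvec(A, x):
    # Dense matrix-vector product.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-12, max_iter=100):
    # Textbook conjugate gradient for a symmetric positive definite A.
    n = len(b)
    x = [0.0] * n
    r = list(b)          # residual for the initial guess x = 0
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# A very simple non-singular SPD matrix and a constant exact solution.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
x_exact = [1.0, 1.0, 1.0]
b = matvec(A, x_exact)   # b = [5.0, 5.0, 3.0]

x = cg(A, b)
err = max(abs(xi - xe) for xi, xe in zip(x, x_exact))
print(err)               # should be tiny; any "bad x" stands out immediately
```

With a constant x_exact, any entry of x that deviates is immediately visible, which is the point of the suggestion above.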
Diego,
your problem might be numerically unstable; that's why results might
differ between one run and another.
floating-point numbers have their own restrictions (rounding errors,
absorption, ...)
are you running single or double precision?
if you are running single precision, you might gi
> On Oct 28, 2015, at 6:58 PM, Diego Avesani wrote:
>
> dear Damien,
> I wrote the solver by myself. I have not understood your answer.
Floating point addition is not associative. Doing a long sum in different
orders, as might happen when different numbers of nodes do local sums that are
then
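A quick illustration of that non-associativity in plain Python (the specific values are chosen to trigger absorption):

```python
# Floating-point addition is not associative: the same three numbers
# summed in two different orders give different results.
a, b, c = 1e16, 1.0, -1e16

left = (a + b) + c   # b is absorbed into 1e16 first -> 0.0
right = (a + c) + b  # the big terms cancel first    -> 1.0

print(left, right)   # 0.0 1.0

# The same effect appears in long sums: front-to-back and back-to-front
# over mixed magnitudes need not agree.
vals = [1e16, 1.0, -1e16, 1.0]
print(sum(vals), sum(reversed(vals)))  # 1.0 0.0
```

This is exactly what happens when each MPI rank computes a local partial sum and the partials are combined in an order that depends on the process count.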
Dear Diego,
I will suggest you read the following two. It will give you some good
understanding as to what is happening:
https://en.wikipedia.org/wiki/Butterfly_effect
http://www.amazon.com/The-End-Error-Computing-Computational/dp/1482239868
--Bibrak
On Wed, Oct 28, 2015 at 6:58 PM, Diego Ave
dear Damien,
I wrote the solver by myself. I have not understood your answer.
Diego
On 28 October 2015 at 23:09, Damien wrote:
> Diego,
>
> There aren't many linear solvers that are bit-consistent, where the answer
> is the same no matter how many cores or processes you use. Intel's version
>
Diego,
There aren't many linear solvers that are bit-consistent, where the
answer is the same no matter how many cores or processes you use.
Intel's version of Pardiso is bit-consistent and I think MUMPS 5.0 might
be, but that's all. You should assume your answer will not be exactly
the same
dear Andreas, dear all,
The code is quite long. It is a conjugate gradient algorithm to solve a
complex system.
I have noticed that when a do loop is small, let's say
do i=1,3
enddo
the results are identical. If the loop is big, let's say do i=1,20, the
results are different and the difference
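A serial Python sketch of why the process count matters, no MPI needed (the data and the two-way split are illustrative assumptions):

```python
# Simulate what an MPI reduction does: each "rank" sums its local slice,
# then the partial sums are combined. With floating point, the partitioned
# result can differ from the serial one.
data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]

serial = 0.0
for v in data:            # one CPU: every tiny term is absorbed into 1.0
    serial += v

# Pretend two ranks each own a slice (like local dot products in CG).
chunks = [data[:3], data[3:]]
partials = [sum(c) for c in chunks]
parallel = sum(partials)  # tiny terms accumulate before meeting 1.0

print(serial, parallel)   # 1.0 1.0000000000000002
```

The longer the loop, the more summands there are and the more the accumulation order can shift the last bits, which matches the observation that short loops agree and long ones do not.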
On 22:03 Wed 28 Oct , Diego Avesani wrote:
> When I use a single CPU I get one result; when I use 4 CPUs I get
> another one. I do not think that there is a bug.
Sounds like a bug to me, most likely in your code.
> Do you think that these small differences are normal?
It depends on what small m
Dear all,
I have problem with my code.
When I use a single CPU I get one result; when I use 4 CPUs I get another
one. I do not think that there is a bug.
Do you think that these small differences are normal?
Is there any way to get the same results? Is it some alignment problem?
Really really thanks
Di
If you want to remain in the traditional methods (complexity n^3), what you
need is a GEMM (general matrix multiply), and it is provided in
C, for dense matrices, by ScaLAPACK. The implementation provided on your
blog is indeed a rough cut, there are better solutions (matrices divided in
Hi,
what is the best way to multiply two matrices with java-openmpi?
Is the way in this link the right way to do that? That is, split the first
matrix row-wise and multiply each part by the second matrix (each row on a
processor), then collect the results.
Link:
https://anjanavk.wordpress.com/2011
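The row-wise split described in the question can be sketched without MPI (in Python rather than Java, and serially: the loop over row blocks stands in for the scatter/gather; all names are illustrative):

```python
# Row-wise parallel matrix multiply, sketched serially: split A into row
# blocks (one per "processor"), multiply each block by the full B, then
# concatenate the partial results -- the scheme described in the question.

def matmul(A, B):
    # Plain dense product of a row block A with the full matrix B.
    cols = list(zip(*B))  # columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in cols]
            for row in A]

def row_split_matmul(A, B, nprocs):
    # Deal A's rows out in contiguous blocks, one block per "processor".
    n = len(A)
    size = (n + nprocs - 1) // nprocs
    blocks = [A[i:i + size] for i in range(0, n, size)]
    # Each "processor" computes its block; gathering is concatenation,
    # which works because each output row depends on one input row of A.
    result = []
    for block in blocks:
        result.extend(matmul(block, B))
    return result

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]                     # identity: product should equal A

print(row_split_matmul(A, B, nprocs=2))  # [[1, 2], [3, 4], [5, 6], [7, 8]]
```

Note that every "processor" needs a full copy of B in this scheme, which is why block-decomposed approaches (as in ScaLAPACK's PBLAS) scale better for large matrices.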