Weixiong,
When solving my problem with PETScWrappers using MPI, I noticed something
interesting. I ran my test problem on my Mac, which has 4 cores and 8 threads.
With up to 4 processes, that is, "mpirun -np 4 xxx", the results are always
correct. With 6 processes, "mpirun -np 6 xxx", the results are wrong (the
correct results are supposed to be symmetric about the diagonal), regardless of
the solver/preconditioner (including MUMPS). Does this happen whenever -np
exceeds the number of physical cores? Please see the attachments np4 and np6
for the results I am referring to.
How many cores you have, and where they are located, make no difference as
far as the computations are concerned. But different numbers of processors may
lead to different meshes, for example if your refinement criteria depend on
how the mesh is partitioned. Are the two meshes you compute on (with 4 and 6
processors) the same? If they are not, then you shouldn't expect the solution
to be the same.
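
If you want to check whether the meshes agree, a minimal sketch (loosely
modeled on the tutorial programs, e.g. step-40; the hyper_cube geometry,
refinement level, and variable names below are placeholders for whatever your
own program does) would be to print the global and per-process cell counts for
each -np:

// A minimal sketch, loosely following step-40. The geometry and
// refinement here are placeholders for your own setup.
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <iostream>

int main(int argc, char *argv[])
{
  using namespace dealii;

  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  MPI_Comm mpi_communicator = MPI_COMM_WORLD;

  parallel::distributed::Triangulation<2> triangulation(mpi_communicator);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(5);

  // Prints only on process 0.
  ConditionalOStream pcout(
    std::cout, Utilities::MPI::this_mpi_process(mpi_communicator) == 0);

  // The global number of active cells is independent of the partitioning;
  // if it differs between runs, the meshes themselves differ.
  pcout << "Global active cells: " << triangulation.n_global_active_cells()
        << std::endl;

  // The locally owned count depends on the partitioning and is expected
  // to change with -np.
  std::cout << "Process "
            << Utilities::MPI::this_mpi_process(mpi_communicator)
            << " owns " << triangulation.n_locally_owned_active_cells()
            << " active cells" << std::endl;

  return 0;
}

If the global active cell count differs between -np 4 and -np 6, the two runs
are on different meshes and you shouldn't expect the same solutions; only the
per-process counts should change with the partitioning.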
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/