Hi Daniel,
the problem here is independent of any data set. So you can also see it in
ParaView with the 'Solid Color' option. It seems to be an artificial curvature
within the element.
To the best of my knowledge it is not required to update any geometry-related data.
What do you think?
Regards,
David
What Martin is trying to say (correct me if I am wrong) is that you cannot
rely on any ordering of the calls to the cell/inner-face/boundary-face
functions, since they may be called (that is, scheduled) in parallel when
threading is used. That means that the code snippet you have posted (if
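To illustrate the pattern with plain C++ (this is deliberately not deal.II's actual mesh-loop interface; all names below are illustrative): the workers write only into their own per-cell result, and combining those results into a global quantity happens in one serialized step, so nothing depends on the order in which cells happen to be processed.

#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for per-cell copy data: each worker writes only here.
struct CellResult
{
  double local_contribution = 0.0;
};

int main()
{
  const unsigned int n_cells = 64;
  std::vector<CellResult> results(n_cells);

  // "Workers": may run concurrently and in any order. They never touch
  // shared state, only their own CellResult.
  std::vector<std::thread> workers;
  for (unsigned int c = 0; c < n_cells; ++c)
    workers.emplace_back([c, &results]() {
      results[c].local_contribution = 1.0 / (c + 1.0); // pretend to do work
    });
  for (std::thread &t : workers)
    t.join();

  // "Copier": the only place where results are combined; it runs serially,
  // so the outcome does not depend on how the workers were scheduled.
  double global_sum = 0.0;
  for (const CellResult &r : results)
    global_sum += r.local_contribution;

  std::printf("sum = %f\n", global_sum);
  return 0;
}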
I see additional problems with separate communicators per object:
1. To me it is unclear how operations that involve more than one
object should communicate. For example, a mat-vec involves two vectors (src,
dst) and a matrix, and thus three communicators. Of course you can make
an arbitrary choice here, b
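To make point 1 above concrete, here is a small plain-MPI sketch (all names are made up): each object gets its own duplicated communicator, and the mat-vec has to pick one of the three for its collective operations.

#include <mpi.h>

// Hypothetical mat-vec: it sees three communicators (matrix, src, dst), and
// its collective operations have to run on one of them. Using the matrix's
// communicator is exactly the kind of arbitrary choice I mean.
void matvec(const MPI_Comm matrix_comm,
            const MPI_Comm src_comm,
            const MPI_Comm dst_comm)
{
  (void)src_comm;
  (void)dst_comm;
  double local = 1.0, global = 0.0;
  MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, matrix_comm);
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  // One private communicator per object, as in the scheme discussed above.
  MPI_Comm matrix_comm, src_comm, dst_comm;
  MPI_Comm_dup(MPI_COMM_WORLD, &matrix_comm);
  MPI_Comm_dup(MPI_COMM_WORLD, &src_comm);
  MPI_Comm_dup(MPI_COMM_WORLD, &dst_comm);

  matvec(matrix_comm, src_comm, dst_comm);

  MPI_Comm_free(&matrix_comm);
  MPI_Comm_free(&src_comm);
  MPI_Comm_free(&dst_comm);
  MPI_Finalize();
  return 0;
}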
On 2/19/21 1:31 PM, Wells, David wrote:
That's frightening; now I'm not sure how PETSc avoids this problem, and I
am somewhat afraid to even look.
It would not greatly surprise me if MPI implementations now recycle
communicators that have been freed.
I continue to believe that having ea
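Here is a small sketch of the hazard (plain MPI; whether a handle value is actually reused is entirely implementation-dependent, this just makes the possibility visible): a stale copy of a freed communicator handle can become indistinguishable from a newly created communicator.

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  MPI_Comm first;
  MPI_Comm_dup(MPI_COMM_WORLD, &first);
  const MPI_Comm stale_copy = first; // e.g. a handle cached inside some object

  MPI_Comm_free(&first);             // the implementation may now recycle it

  MPI_Comm second;
  MPI_Comm_dup(MPI_COMM_WORLD, &second);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0)
    std::printf("stale handle %s the new communicator's handle\n",
                (stale_copy == second) ? "equals" : "differs from");
  // Even inspecting a stale handle like this is dubious, which only
  // underlines the point: code still holding the old handle cannot tell
  // the two communicators apart.

  MPI_Comm_free(&second);
  MPI_Finalize();
  return 0;
}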
Hi Wolfgang,
Thanks for writing this up. I think that, in my circumstance, I will be able to
get away with duplicating communicators since the objects should exist for the
entire program run (i.e., I might make a few dozen instances at most) - e.g.,
each object can duplicate a communicator and
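Concretely, something like the following sketch is what I have in mind (the class name is purely illustrative, not code from any of our libraries): each object duplicates the communicator it is handed on construction and frees it on destruction, which should be harmless with only a few dozen such objects over the whole run.

#include <mpi.h>

class CommunicatorOwningObject
{
public:
  explicit CommunicatorOwningObject(const MPI_Comm base)
  {
    // Private copy: collectives issued by this object can never be
    // mismatched with collectives issued by other objects on 'base'.
    MPI_Comm_dup(base, &owned_comm);
  }

  ~CommunicatorOwningObject()
  {
    MPI_Comm_free(&owned_comm);
  }

  CommunicatorOwningObject(const CommunicatorOwningObject &) = delete;
  CommunicatorOwningObject &operator=(const CommunicatorOwningObject &) = delete;

  MPI_Comm communicator() const { return owned_comm; }

private:
  MPI_Comm owned_comm;
};

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  {
    // Scoped so the destructor (and MPI_Comm_free) runs before MPI_Finalize.
    CommunicatorOwningObject object(MPI_COMM_WORLD);
    int rank;
    MPI_Comm_rank(object.communicator(), &rank);
  }
  MPI_Finalize();
  return 0;
}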
Giselle,
Instead of setting the libraries yourself, can you try:
-DBLAS_LIBRARY_NAMES:STRING='mkl_core;mkl_sequential'
-DLAPACK_LIBRARY_NAMES:STRING=mkl_intel_lp64
Don't set BLAS/LAPACK_FOUND/LIBRARIES/LINKER_FLAGS. Let CMake find the
libraries and the flags it needs to use.
Best,
Bruno
On
Oh really? I'm going to check it out right now! Debugging parallel
programs is still the bane of my existence, so I would definitely like to hear
your take on it!
Cheers,
Zachary
On Friday, February 19, 2021 at 1:29:43 PM UTC-6 Wolfgang Bangerth wrote:
> On 2/19/21 12:23 PM, Zachary Streeter wrot
On 2/19/21 12:23 PM, Zachary Streeter wrote:
I have created a PETSc::MPI::SparseMatrix that has a global matrix
distributed across all the processes. I then pass it to SLEPc to solve for
its spectrum. I get the correct spectrum so I am pretty sure everything is
correct. My question is regardin
Hi everyone,
I have created a PETSc::MPI::SparseMatrix that has a global matrix distributed
across all the processes. I then pass it to SLEPc to solve for its spectrum.
I get the correct spectrum so I am pretty sure everything is correct. My
question is regarding strange behavior with more c
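For completeness, here is a minimal, self-contained sketch of the kind of setup I mean, loosely modelled on deal.II's step-36; the diagonal test matrix, the problem size, the solver choice and the tolerances are placeholder choices of mine, not my actual code.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/slepc_solver.h>
#include <deal.II/lac/solver_control.h>

#include <iostream>
#include <vector>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const MPI_Comm     comm    = MPI_COMM_WORLD;
  const unsigned int n_procs = Utilities::MPI::n_mpi_processes(comm);
  const unsigned int rank    = Utilities::MPI::this_mpi_process(comm);

  // Split N rows contiguously across the processes.
  const unsigned int N     = 100;
  const unsigned int begin = rank * N / n_procs;
  const unsigned int end   = (rank + 1) * N / n_procs;
  IndexSet           locally_owned(N);
  locally_owned.add_range(begin, end);

  // A diagonal test matrix with known eigenvalues 1, 2, ..., N.
  DynamicSparsityPattern dsp(N, N);
  for (unsigned int i = begin; i < end; ++i)
    dsp.add(i, i);

  PETScWrappers::MPI::SparseMatrix matrix;
  matrix.reinit(locally_owned, locally_owned, dsp, comm);
  for (unsigned int i = begin; i < end; ++i)
    matrix.set(i, i, static_cast<double>(i + 1));
  matrix.compress(VectorOperation::insert);

  // Ask SLEPc for a few of the smallest eigenvalues.
  const unsigned int n_eigenpairs = 5;
  std::vector<PetscScalar>                eigenvalues(n_eigenpairs);
  std::vector<PETScWrappers::MPI::Vector> eigenvectors(n_eigenpairs);
  for (PETScWrappers::MPI::Vector &v : eigenvectors)
    v.reinit(locally_owned, comm);

  SolverControl                    solver_control(1000, 1e-9);
  SLEPcWrappers::SolverKrylovSchur eigensolver(solver_control, comm);
  eigensolver.set_which_eigenpairs(EPS_SMALLEST_REAL);
  eigensolver.set_problem_type(EPS_HEP);
  eigensolver.solve(matrix, eigenvalues, eigenvectors, n_eigenpairs);

  if (rank == 0)
    for (const PetscScalar &lambda : eigenvalues)
      std::cout << lambda << std::endl;

  return 0;
}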
Dear Konrad,
Great! Thanks for your suggestions; I will get to the bottom of it.
Regards,
Budhyant
On Friday, February 19, 2021 at 3:56:56 AM UTC-6 Konrad Simon wrote:
> Dear Budhyant,
>
> Hard to say what the paths should be on your system. Dealing with all
> these large dependencies can b
Dear Budhyant,
Hard to say what the paths should be on your system. Dealing with all
these large dependencies can be quite cumbersome. I would say the way you
install deal.II depends on what you want to do with it. Let me tell you how
I would approach this.
1. If you just want to develop a
Dear All:
I would appreciate it if you could please help me with the installation on
Ubuntu 20.
I have installed p4est, PETSc, Trilinos, and METIS in "~/src/"; they are
working individually.
CMake succeeds in recognizing PETSc and METIS; however, it fails to
recognize p4est and Trilinos