No adaptivity.
I suspect it would not be about the weak form directly.
So, every time I give a set of initial cells, I need to make sure the total
initial cell number satisfies N_cell_tot % n_process == 0. If this condition
is fulfilled, the solution will always be correct.
Say, I want to use 5 processors, then I have to give N_
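The divisibility condition described above can be sanity-checked before launching a run. This is only a sketch of the poster's own hypothesis (it is not a documented deal.II requirement), and the variable names and values are illustrative:

```shell
# Check that the coarse-mesh cell count divides evenly among MPI ranks.
# N_CELL_TOT and N_PROC are illustrative values (20 cells, 5 ranks).
N_CELL_TOT=20
N_PROC=5
if [ $((N_CELL_TOT % N_PROC)) -eq 0 ]; then
  echo "even split: $((N_CELL_TOT / N_PROC)) cells per rank"
else
  echo "uneven split: adjust the coarse mesh or the rank count"
fi
```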
Victor,
On Thursday, July 6, 2017 at 3:22:04 PM UTC-4, Victor Eijkhout wrote:
> > add the flag that you need in SET(Trilinos_CXX_COMPILER_FLAGS)
>
> build@build-BLDCHROOT:12.10.1> find . -name \*.cmake -exec grep
> COMPILER_FLAGS {} \; -print
>
Denis:
I cannot remove PyTrilinos because I actually need it in Trilinos.
Bruno:
> add the flag that you need in SET(Trilinos_CXX_COMPILER_FLAGS)
build@build-BLDCHROOT:12.10.1> find . -name \*.cmake -exec grep
COMPILER_FLAGS {} \; -print
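The find/-exec pattern above can be tried in a throwaway directory; here is a self-contained demo (the scratch directory and the .cmake file contents are made up for illustration):

```shell
# Search all *.cmake files for COMPILER_FLAGS, printing the matching line
# and then the file name, exactly as in the find command above.
dir=$(mktemp -d)
cd "$dir"
echo 'SET(Trilinos_CXX_COMPILER_FLAGS "-fPIC")' > opts.cmake
find . -name '*.cmake' -exec grep COMPILER_FLAGS {} \; -print
# prints the matching SET(...) line, then ./opts.cmake
```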
On 07/06/2017 09:02 AM, Weixiong Zheng wrote:
Both are generated with a subdivided hyper rectangle with the same reps and diagonals.
And no adaptive refinement?
In that case, the solution *should* be the same, assuming your linear solver
tolerance is small enough. If it isn't, there is probably a b
Phil,
the names for BLAS and LAPACK are changed here
https://github.com/dealii/candi/blob/master/deal.II-toolchain/packages/trilinos.package#L76
Maybe you can add a cecho in the function to see if confopts is set
correctly.
Best,
Bruno
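Bruno's cecho suggestion could look like the sketch below. The color-then-message signature of cecho, the CONFOPTS variable name, and the Trilinos option shown are all assumptions about candi's internals, so a local stand-in definition is included to make the snippet self-contained:

```shell
# Stand-in for candi's cecho helper (assumed signature: color, then message).
GOOD='\033[1;32m'; STD='\033[0m'
cecho() { local col="$1"; shift; echo -e "${col}$*${STD}"; }

# Hypothetical debug line to drop into the trilinos.package function,
# printing the assembled configure options before configure runs:
CONFOPTS="-DTPL_ENABLE_BLAS=ON"   # illustrative value only
cecho "${GOOD}" "confopts = ${CONFOPTS}"
```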
2017-07-06 10:56 GMT-04:00 Phil H :
> Hi Bruno,
>
> No, I c
Both are generated with a subdivided hyper rectangle with the same reps and diagonals.
--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see
https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google
Hi Bruno,
No, I changed the MKL to ON, which is why this is strange.
Cheers,
Phil
On Thursday, July 6, 2017 at 10:14:53 AM UTC-4, Bruno Turcksin wrote:
>
> Phil,
>
> did you forget to change MKL=OFF to MKL=ON, in candi.cfg (
> https://github.com/dealii/candi/blob/master/candi.cfg#L68)? This
Phil,
did you forget to change MKL=OFF to MKL=ON, in candi.cfg
(https://github.com/dealii/candi/blob/master/candi.cfg#L68)? This should
change the name from blas to whatever is the name for mkl.
Best,
Bruno
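For reference, the change Bruno describes is a one-line edit in candi.cfg at the linked line; only the MKL setting itself comes from the link, the comment is added here:

```shell
# candi.cfg: switch MKL from OFF to ON so candi uses the MKL library
# names instead of plain blas/lapack
MKL=ON
```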
On Thursday, July 6, 2017 at 9:28:22 AM UTC-4, Phil H wrote:
>
> Hello I'm using candi
Weixiong,
When solving my problem with PETScWrappers using MPI, I saw something
interesting. I ran my test problem on my Mac with 4 cores / 8 threads. When
using at most 4 processes, that is, "mpirun -np 4 xxx", the results are always
correct. When trying 6 processes, "mpirun -np 6 xxx", the results are weird (right r
Hello, I'm using candi to install deal.II on a brand new cluster.
The new cluster is geared up to use the Intel compilers, but I can try and
use gcc. The modules I load to begin with are:
1) nixpkgs/.16.09 (H,S)
2) StdEnv/2016.4 (S)
3) gcccore/.5.4.0 (H)
4) gcc/5.4
5) openmpi/2.1.1 (m)
Hi deal.ii community,
I am writing today about a code I have been developing in the shared
triangulation framework and its transition to the
parallel::distributed::Triangulation framework.
The shared version works just fine, and so does the parallel version if run
with a single process. I understand
Dear Simon,
1. Track deal.II's remote master branch
git remote add dealii git@github.com:dealii/dealii.git
2. Fetch the up-to-date master from the deal.II remote
git fetch dealii
3. Merge or rebase as necessary:
git merge dealii/master
or
git rebase dealii/master
In all likelihood "rebasing" is the b
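The three steps above can be run end-to-end; in this sketch a throwaway local repository stands in for git@github.com:dealii/dealii.git so it works offline, and all repository and user names are illustrative:

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the real deal.II repository on GitHub.
git init -q -b master "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream base"

# Local working repository with one commit of its own.
git init -q -b master "$tmp/work"
cd "$tmp/work"
echo "work" > notes.txt && git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "local work"

git remote add dealii "$tmp/upstream"  # step 1 (real URL: git@github.com:dealii/dealii.git)
git fetch -q dealii                    # step 2
git -c user.name=demo -c user.email=demo@example.com \
    rebase -q dealii/master            # step 3 (or: git merge dealii/master)

git merge-base --is-ancestor dealii/master HEAD && echo "rebased onto dealii/master"
```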
Hello all
Thanks a lot for your replies. I managed to submit a pull request (the
video lecture was quite helpful). I hope I did it right, otherwise I'm
happy for feedback.
I will try to implement the error computation using Daniel's suggestions.
Now a follow-up question regarding development wi