Alexander,
I have written a small feature enhancement that allows MGTransferPrebuilt to
be used with parallel PETSc matrices. If possible I would like to contribute
this change back to the main development branch. I guess that I should submit
a pull request but I made changes that I would lik
On 03/19/2018 12:47 PM, Ben Shields wrote:
It is setting up the matrices which is the bottleneck, specifically
setting up the off-diagonal blocks of the matrix. For context, with a
moderately sized system of 200,000 dofs, setting up the main NxN
sparsity pattern using DoFTools::make_sparsity_pa
Hello everyone!
This is deal.II newsletter #24.
It automatically reports recently merged features and discussions about the
deal.II finite element library.
## Below you find a list of recently proposed or merged features:
#6064: Avoid deriving from std::iterator (proposed by masterleinad)
htt
It is setting up the matrices which is the bottleneck, specifically setting
up the off-diagonal blocks of the matrix. For context, with a moderately
sized system of 200,000 dofs, setting up the main NxN sparsity pattern
using DoFTools::make_sparsity_pattern and reinitializing the matrix using
t
That works, thanks!
On Monday, March 19, 2018 at 14:41:05 UTC+1, Wolfgang Bangerth wrote:
>
> On 03/19/2018 07:36 AM, 'Maxi Miller' via deal.II User Group wrote:
> > That gives me
> > error: no matching function for call to
> > ‘max(double&, dealii::TrilinosWrappers::internal::VectorReference)’
On 03/19/2018 07:36 AM, 'Maxi Miller' via deal.II User Group wrote:
That gives me
error: no matching function for call to
‘max(double&, dealii::TrilinosWrappers::internal::VectorReference)’
    max_TE_value = std::max(max_TE_value, old_solution(i));
during compilation.
Then just u
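The reply above is cut off, but the error itself is a common one: `old_solution(i)` on a Trilinos vector returns a proxy object (`TrilinosWrappers::internal::VectorReference`), not a `double`, so `std::max` cannot deduce a single template argument. A self-contained illustration of the two usual fixes, using a hypothetical `Proxy` class as a stand-in for the real proxy type:

```cpp
#include <algorithm>

// Stand-in for dealii::TrilinosWrappers::internal::VectorReference:
// a proxy that converts to double but is not a double itself, so
// std::max(double&, Proxy) fails template argument deduction.
struct Proxy
{
  double d;
  operator double() const { return d; }
};

// Fix 1: cast the proxy explicitly ...
inline double max_with_cast(double current, const Proxy &entry)
{
  return std::max(current, static_cast<double>(entry));
}

// Fix 2: ... or name std::max's template argument so the
// proxy is converted implicitly.
inline double max_with_template(double current, const Proxy &entry)
{
  return std::max<double>(current, entry);
}
```

Either form makes both arguments `double`, which resolves the compilation error shown in the thread.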
That gives me
error: no matching function for call to ‘max(double&, dealii::TrilinosWrappers::internal::VectorReference)’
max_TE_value = std::max(max_TE_value, old_solution(i));
during compilation.
On Monday, March 19, 2018 at 14:22:55 UTC+1, Wolfgang Bangerth wrote:
>
> On 03/19/
On 03/19/2018 07:13 AM, 'Maxi Miller' via deal.II User Group wrote:
Assumed I do the latter, do you have any example codes for that? I get the
component mask, but do not know how to combine the mask with my vector.
The mask is a set of n_dofs true/false values. You'd then do something like
fo
Assumed I do the latter, do you have any example codes for that? I get the
component mask, but do not know how to combine the mask with my vector.
Thanks!
On Saturday, March 17, 2018 at 03:46:38 UTC+1, Wolfgang Bangerth wrote:
>
> On 03/16/2018 02:38 AM, 'Maxi Miller' via deal.II User Group wrote:
>
Hello everyone,
I have written a small feature enhancement that allows MGTransferPrebuilt
to be used with parallel PETSc matrices. If possible I would like to
contribute this change back to the main development branch. I guess that I
should submit a pull request but I made changes that I would