Dear Timo,
Sorry, I did not mean to write to your personal e-mail; I just pressed the
wrong reply button.
Thank you very much for your reply. I will check how I set the
mpi_communicator now and hopefully find the problem.
I changed step-40 to use MUMPS myself a couple of days ago.
On 09/25/2017 01:25 PM, 'Maxi Miller' via deal.II User Group wrote:
I do not understand why that happens. Is it a mathematical problem, or
rather a problem in my code? AFAIK the gradients should not depend on the
offset, so I do not know where to look for the problem here.
Did y
I rewrote example 15 for an MPI environment using Trilinos, and solve it with:
IndexSet solution_relevant_partitioning(dof_handler.n_dofs());
DoFTools::extract_locally_relevant_dofs(dof_handler,
                                        solution_relevant_partitioning);
LinearAlgebraTrilinos::MPI::Vector completely_distribute
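To make this concrete, the full setup pattern I have in mind looks roughly like this. It is only a sketch: the function name, solution_owned_partitioning and locally_relevant_solution are placeholder names for illustration, not necessarily the ones in my code.

#include <deal.II/base/index_set.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/generic_linear_algebra.h>

using namespace dealii;

// Set up one fully distributed (write-access) and one ghosted (read-only)
// solution vector, following the step-40 pattern.
template <int dim>
void setup_solution_vectors(
  const DoFHandler<dim>              &dof_handler,
  MPI_Comm                            mpi_communicator,
  LinearAlgebraTrilinos::MPI::Vector &completely_distributed_solution,
  LinearAlgebraTrilinos::MPI::Vector &locally_relevant_solution)
{
  const IndexSet solution_owned_partitioning =
    dof_handler.locally_owned_dofs();
  IndexSet solution_relevant_partitioning(dof_handler.n_dofs());
  DoFTools::extract_locally_relevant_dofs(dof_handler,
                                          solution_relevant_partitioning);

  // No ghost entries: this is the vector handed to the solver.
  completely_distributed_solution.reinit(solution_owned_partitioning,
                                         mpi_communicator);

  // Ghosted vector: read access to neighbors' values, e.g. for evaluating
  // the previous Newton iterate and its gradients on locally owned cells.
  locally_relevant_solution.reinit(solution_owned_partitioning,
                                   solution_relevant_partitioning,
                                   mpi_communicator);
}

After the solve, assigning the fully distributed vector to the ghosted one (locally_relevant_solution = completely_distributed_solution;) updates the ghost entries before the next residual/gradient evaluation.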
Anna,
To compute values of the electric field at the receivers I follow the
strategy of the ASPECT code, as you suggested.
To do this I sum the current_point_values across processors and divide by
the number of processors that contain point p, as follows:
// Reduce all collected values into local
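In code this reduction looks roughly as follows. It is only a sketch: reduce_point_values and locally_found are illustrative names, where locally_found[p] is 1 on a rank whose locally owned cells contain receiver p and 0 otherwise.

#include <deal.II/base/mpi.h>

#include <vector>

using namespace dealii;

// Average the values that several ranks may have computed for the same
// receiver point: sum across processors, then divide by the number of
// processors that actually contain the point.
void reduce_point_values(std::vector<double>             &current_point_values,
                         const std::vector<unsigned int> &locally_found,
                         MPI_Comm                          mpi_communicator)
{
  for (unsigned int p = 0; p < current_point_values.size(); ++p)
    {
      // Reduce all collected values onto every rank.
      current_point_values[p] =
        Utilities::MPI::sum(current_point_values[p], mpi_communicator);
      const unsigned int n_owning_ranks =
        Utilities::MPI::sum(locally_found[p], mpi_communicator);
      if (n_owning_ranks > 0)
        current_point_values[p] /= n_owning_ranks;
    }
}

The per-point reductions could of course be combined into single sums over the whole vectors; the averaging logic stays the same.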
and from your email that I got off-list (please try to use the mailing list):
> [0]PETSC ERROR: #1 PetscCommDuplicate() line 137 in
> /home/anna/petsc-3.6.4/src/sys/objects/tagm.c
> An error occurred in line <724> of file
> in function
> void dealii::PETScWrappers::SparseDirectMUMPS::solve(const
Anna,
> The main reason I do this is that I do not understand how to reuse this
> decomposition in deal.ii.
> I am relatively new to deal.ii and C++, and I have never used MUMPS before.
Well, this has nothing to do with MUMPS or deal.II. It sounds like you
are struggling because you are not familiar with C++ yet.
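For what it is worth, the C++ pattern people usually mean by "reusing" the solver is simply to keep one solver object alive (for example as a class member) and call solve() on it for every right-hand side, instead of constructing a new solver inside the function. The following is only a sketch, not your code: the class and variable names are made up, exact header names and constructor signatures vary a bit between deal.II versions, and whether MUMPS then actually reuses its factorization internally is a separate question from this C++ pattern.

#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/solver_control.h>

using namespace dealii;

// Keep the direct solver as a member so the same object (and whatever state
// it holds) survives across repeated calls to solve_system().
class MySolver
{
public:
  explicit MySolver(MPI_Comm mpi_communicator)
    : solver_control()
    , mumps(solver_control, mpi_communicator)
  {}

  void solve_system(const PETScWrappers::MPI::SparseMatrix &system_matrix,
                    PETScWrappers::MPI::Vector             &solution,
                    const PETScWrappers::MPI::Vector       &system_rhs)
  {
    // Same matrix, new right-hand side: just call solve() again on the
    // long-lived solver object instead of building a fresh solver each time.
    mumps.solve(system_matrix, solution, system_rhs);
  }

private:
  SolverControl                    solver_control;
  PETScWrappers::SparseDirectMUMPS mumps;
};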
Hi Chih-Che,
FYI, deal.II in Spack was extended to optionally build with CUDA, and the
tests pass; see
https://github.com/LLNL/spack/pull/5402#issuecomment-331821313
Regards,
Denis.
On Friday, September 22, 2017 at 1:02:39 AM UTC+2, Chih-Che Chueh wrote:
>> I'm going to add that inst