It turned out to be a bug in the library, which I reported here: https://github.com/dealii/dealii/issues/9405 — that is why I got wrong results.
On Sunday, 19 January 2020 13:50:58 UTC+1, Maxi Miller wrote:
> Hei,
> I attached a working MWE. Switching between the approach suggested above and scale() is done by changing the variable use_scale_function. The output contains both the expected solution and the obtained solution.
> My approach for replacing scale() with a local loop is
>
>     data.cell_loop(&LaplaceOperator::local_apply_cell,
>                    this,
>                    dst,
>                    src,
>                    [&](const unsigned int start_range, const unsigned int end_range) {
>                      for (unsigned int i = start_range; i < end_range; ++i)
>                        dst.local_element(i) = 0;
>                    },
>                    [&](const unsigned int start_range, const unsigned int end_range) {
>                      for (unsigned int i = start_range; i < end_range; ++i)
>                        dst.local_element(i) = dst.local_element(i) * inv_mass_matrix.local_element(i);
>                    });
>
> I expect my result to follow the solution function exp(-2 * pi^2 * t) * sin(pi * x) * sin(pi * y). While that works for the scale() approach, the result of the integrated approach does not change from time step to time step. Currently I am running with a single MPI process only, after encountering issues when running with several processes.
> Thanks!
>
> On Saturday, 18 January 2020 18:42:04 UTC+1, Daniel Arndt wrote:
>> Maxi,
>>
>> As usual, it is much easier to help if you provide a complete minimal example and say how the result differs from what you expect.
>> Does it only scale certain vector entries? Are the results correct when running with one MPI process?
>> How does your approach differ from https://github.com/dealii/dealii/blob/b84270a1d4099292be5b3d43c2ea65f3ee005919/tests/matrix_free/pre_and_post_loops_01.cc#L100-L121 ?
>>
>> Best,
>> Daniel
>>
>> On Sat, 18 Jan 2020 at 12:05, 'Maxi Miller' via deal.II User Group <dea...@googlegroups.com> wrote:
>>> I tried implementing it as
>>>
>>>     data.cell_loop(&LaplaceOperator::local_apply_cell,
>>>                    this,
>>>                    dst,
>>>                    src,
>>>                    //true,
>>>                    [&](const unsigned int start_range, const unsigned int end_range) {
>>>                      for (unsigned int i = start_range; i < end_range; ++i)
>>>                        dst.local_element(i) = 0;
>>>                    },
>>>                    [&](const unsigned int start_range, const unsigned int end_range) {
>>>                      for (unsigned int i = start_range; i < end_range; ++i)
>>>                        dst.local_element(i) = dst.local_element(i) * inv_mass_matrix.local_element(i);
>>>                    });
>>>
>>> but the result was not correct. Thus, I assume I have to do something else?
>>> Thanks!
>>>
>>> On Saturday, 18 January 2020 17:12:34 UTC+1, peterrum wrote:
>>>> Yes, like here:
>>>> https://github.com/dealii/dealii/blob/b84270a1d4099292be5b3d43c2ea65f3ee005919/tests/matrix_free/pre_and_post_loops_01.cc#L100-L121
>>>>
>>>> On Saturday, 18 January 2020 12:57:24 UTC+1, Maxi Miller wrote:
>>>>> In step-48 the inverse mass matrix is applied by moving the inverse data into a vector and applying the function scale(), i.e. as in the following code:
>>>>>
>>>>>     data.cell_loop(&LaplaceOperator::local_apply_cell,
>>>>>                    this,
>>>>>                    dst,
>>>>>                    src,
>>>>>                    true);
>>>>>     computing_times[0] += timer.wall_time();
>>>>>     timer.restart();
>>>>>
>>>>>     dst.scale(inv_mass_matrix);
>>>>>
>>>>> Now I would like to do the same, but use a cell_loop instead of scale().
>>>>> Thus, I created an additional function called "local_apply_inverse_mass_matrix" as
>>>>>
>>>>>     template <int dim, int degree, int n_points_1d>
>>>>>     void LaplaceOperator<dim, degree, n_points_1d>::local_apply_inverse_mass_matrix(
>>>>>       const MatrixFree<dim, Number> &                   data,
>>>>>       LinearAlgebra::distributed::Vector<Number> &      dst,
>>>>>       const LinearAlgebra::distributed::Vector<Number> &src,
>>>>>       const std::pair<unsigned int, unsigned int> &     cell_range) const
>>>>>     {
>>>>>       (void)data;
>>>>>       (void)cell_range;
>>>>>       dst = src;
>>>>>       dst.scale(inv_mass_matrix);
>>>>>     }
>>>>>
>>>>> When running that code, though, using
>>>>>
>>>>>     LinearAlgebra::distributed::Vector<Number> tmp(src);
>>>>>
>>>>>     data.initialize_dof_vector(tmp);
>>>>>     data.initialize_dof_vector(dst);
>>>>>     data.cell_loop(&LaplaceOperator::local_apply_cell,
>>>>>                    this,
>>>>>                    tmp,
>>>>>                    src,
>>>>>                    true);
>>>>>     computing_times[0] += timer.wall_time();
>>>>>     timer.restart();
>>>>>
>>>>>     data.cell_loop(&LaplaceOperator::local_apply_inverse_mass_matrix,
>>>>>                    this,
>>>>>                    dst,
>>>>>                    tmp,
>>>>>                    true);
>>>>>     computing_times[1] += timer.wall_time();
>>>>>     computing_times[3] += 1.;
>>>>>
>>>>> I get the error
>>>>>
>>>>>     An error occurred in line <3338> of file </opt/dealii/include/deal.II/matrix_free/matrix_free.h> in function
>>>>>         void dealii::internal::VectorDataExchange<dim, Number, VectorizedArrayType>::compress_start(unsigned int, VectorType&) [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; typename std::enable_if<(dealii::internal::has_compress_start<VectorType>::value && dealii::internal::has_exchange_on_subset<VectorType>::value), VectorType>::type* <anonymous> = 0; int dim = 2; Number = double; VectorizedArrayType = dealii::VectorizedArray<double, 4>]
>>>>>     The violated condition was:
>>>>>         vec.has_ghost_elements() == false
>>>>>     Additional information:
>>>>>         You are trying to use functionality in deal.II that is currently not implemented. In many cases, this indicates that there simply didn't appear much of a need for it, or that the author of the original code did not have the time to implement a particular case. If you hit this exception, it is therefore worth the time to look into the code to find out whether you may be able to implement the missing functionality. If you do, please consider providing a patch to the deal.II development sources (see the deal.II website on how to contribute).
>>>>>
>>>>>     Stacktrace:
>>>>>     -----------
>>>>>     #0  ./MF_FES_RK4-Test: void dealii::internal::VectorDataExchange<2, double, dealii::VectorizedArray<double, 4> >::compress_start<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, (dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>*)0>(unsigned int, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&)
>>>>>     #1  ./MF_FES_RK4-Test: void dealii::internal::compress_start<2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double, dealii::VectorizedArray<double, 4>, (dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>*)0>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&, dealii::internal::VectorDataExchange<2, double, dealii::VectorizedArray<double, 4> >&, unsigned int)
>>>>>     #2  ./MF_FES_RK4-Test: dealii::internal::MFWorker<dealii::MatrixFree<2, double, dealii::VectorizedArray<double, 4> >, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, Step40::LaplaceOperator<2, 2, 4>, true>::vector_compress_start()
>>>>>     #3  /opt/dealii/lib/libdeal_II.g.so.9.2.0-pre: dealii::internal::MatrixFreeFunctions::MPICommunication::execute()
>>>>>     #4-#8  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2: (no symbol)
>>>>>     #9  /lib64/libpthread.so.0:
>>>>>     #10 /lib64/libc.so.6: clone
>>>>>
>>>>> Why that, and how can I fix it?
>>>>> Thanks!
>>>
>>> --
>>> The deal.II project is located at http://www.dealii.org/
>>> For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
>>> ---
>>> You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an email to dea...@googlegroups.com.
>>> To view this discussion on the web visit https://groups.google.com/d/msgid/dealii/a3c92a70-323e-48a5-9490-58e0b72b8860%40googlegroups.com.