Re: [deal.II] Help with step-7 Neumann boundary conditions

2022-02-14 Thread Wolfgang Bangerth
On 2/11/22 09:54, Ali Seddiq wrote: Thanks very much for your advice. I definitely should and will follow that. But may I still narrow my question down for clarification (with more complexity to come), and, as a quick question, ask whether adding the boundary term through cell_rhs += ... (as above, and very
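For context, the usual step-7 pattern for adding a Neumann contribution to the local right-hand side looks roughly like the fragment below; the variable names (fe_face_values, dofs_per_cell, cell_rhs, the boundary id 1) follow step-7 and are only assumptions about the code being discussed, not the poster's actual code:

    // Sketch of the step-7 Neumann boundary term inside the cell loop.
    for (const auto &face : cell->face_iterators())
      if (face->at_boundary() && (face->boundary_id() == 1))
        {
          fe_face_values.reinit(cell, face);
          for (unsigned int q = 0; q < n_face_q_points; ++q)
            {
              const double neumann_value = 1.;  // stands in for g(x_q)
              for (unsigned int i = 0; i < dofs_per_cell; ++i)
                cell_rhs(i) += neumann_value *
                               fe_face_values.shape_value(i, q) *
                               fe_face_values.JxW(q);
            }
        }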

Re: [deal.II] Error in Parallel version of Step-26

2022-02-14 Thread Wolfgang Bangerth
Syed, the error message is not particularly good, but I think what is happening is that you are passing in a fully distributed vector to a function that expects a vector that has ghost elements for locally relevant but not locally owned elements. Best W. On 2/11/22 07:40, syed ansari wrot
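As a rough illustration of the distinction (assuming the PETSc wrappers as in step-40; the vector and index-set names are placeholders, not taken from the poster's code):

    // Fully distributed vector: stores only locally owned elements, writable.
    PETScWrappers::MPI::Vector distributed_solution(locally_owned_dofs,
                                                    mpi_communicator);

    // Ghosted vector: additionally stores the locally relevant (ghost)
    // elements and is read-only; functions that read values at locally
    // relevant but not locally owned DoFs expect this kind of vector.
    PETScWrappers::MPI::Vector ghosted_solution(locally_owned_dofs,
                                                locally_relevant_dofs,
                                                mpi_communicator);

    // Assigning from the distributed vector imports the ghost values.
    ghosted_solution = distributed_solution;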

Re: [deal.II] Impose values inside a material/ not on the boundary

2022-02-14 Thread Wolfgang Bangerth
On 2/11/22 12:12, 'Markus Mehnert' via deal.II User Group wrote: Dear Wolfgang, Thank you for your quick response. The dofs per face are defined as const unsigned int dofs_per_face = fe_cell.dofs_per_face; where fe_cell is the FESystem that consis
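A minimal illustration of the quantity being discussed; the particular element below (a three-component Q1 system) is only an example and not necessarily the one used in the original code:

    #include <deal.II/fe/fe_q.h>
    #include <deal.II/fe/fe_system.h>

    using namespace dealii;

    // A vector-valued element with three Q1 components, e.g. displacements.
    const FESystem<3> fe_cell(FE_Q<3>(1), 3);

    // Number of degrees of freedom located on one face of a cell.
    const unsigned int dofs_per_face = fe_cell.dofs_per_face;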

Re: [deal.II] MPI & component_wise

2022-02-14 Thread Wolfgang Bangerth
On 2/14/22 15:35, Joss G. wrote: Thank you for your answer. I am trying to substitute my PETScWrappers::MPI::Vector for parallel::distributed::Vector (locally_relevant_solution and system_rhs) but my lac library does not contain the file deal.II/lac/parallel_vector.h. I even tried to

Re: [deal.II] MPI & component_wise

2022-02-14 Thread Joss G.
Thank you for your answer. I am trying to substitute my PETScWrappers::MPI::Vector for parallel::distributed::Vector (locally_relevant_solution and system_rhs) but my lac library does not contain the file deal.II/lac/parallel_vector.h. I even tried to get the latest version but it is not the
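For reference, the class that used to be called parallel::distributed::Vector (declared in deal.II/lac/parallel_vector.h) is named LinearAlgebra::distributed::Vector in recent deal.II releases and lives in deal.II/lac/la_parallel_vector.h. A minimal sketch, assuming the index sets and communicator already exist as in step-40:

    #include <deal.II/lac/la_parallel_vector.h>

    // Successor of parallel::distributed::Vector in current releases.
    LinearAlgebra::distributed::Vector<double> locally_relevant_solution;
    locally_relevant_solution.reinit(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);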

Re: [deal.II] MPI & component_wise

2022-02-14 Thread Wolfgang Bangerth
On 2/14/22 08:50, Joss G. wrote: I am having an error when running on more than 1 core (with MPI) in an implementation similar to step-40 when using a component-wise ordering: DoFRenumbering::component_wise(dof_handler) in the setup_system() function. Is it possible to do what I am trying to do

[deal.II] MPI & component_wise

2022-02-14 Thread Joss G.
Dear all, I am having an error when running on more than 1 core (with MPI) in an implementation similar to step-40 when using a component-wise ordering: DoFRenumbering::component_wise(dof_handler) in the setup_system() function. Is it possible to do what I am trying to do? Thank you. Error:
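A sketch of what such a setup_system() might look like in a step-40-style program; the names follow step-40 and are assumptions about the poster's code. The key point is to take the index sets only after the renumbering:

    dof_handler.distribute_dofs(fe);
    DoFRenumbering::component_wise(dof_handler);

    // Extract the index sets *after* the renumbering, since it changes
    // which DoF indices each process owns.
    const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    locally_relevant_solution.reinit(locally_owned_dofs,
                                     locally_relevant_dofs,
                                     mpi_communicator);
    system_rhs.reinit(locally_owned_dofs, mpi_communicator);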

[deal.II] Re: Smooth particles hydrodynamic implementation in DealII

2022-02-14 Thread blais...@gmail.com
Dear Hassan, Everything is described here: https://arxiv.org/abs/2106.09576 The algorithm is pretty simple. For every cell, we find the list of possible neighbours. We build this list by looking at every cell that shares a vertex with the cell we are presently working on. For all particles,
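A sketch of that vertex-based neighbour search (not the authors' code, only an illustration written against the GridTools vertex-to-cell map):

    #include <deal.II/grid/grid_tools.h>
    #include <deal.II/grid/tria.h>

    #include <set>
    #include <vector>

    using namespace dealii;

    // For each active cell, collect every cell that shares at least one
    // vertex with it. The cell itself ends up in its own list, which is
    // usually what one wants when searching for nearby particles.
    template <int dim>
    std::vector<std::set<typename Triangulation<dim>::active_cell_iterator>>
    vertex_neighbor_lists(const Triangulation<dim> &triangulation)
    {
      const auto vertex_to_cells = GridTools::vertex_to_cell_map(triangulation);

      std::vector<std::set<typename Triangulation<dim>::active_cell_iterator>>
        neighbors(triangulation.n_active_cells());

      for (const auto &cell : triangulation.active_cell_iterators())
        for (const unsigned int v : cell->vertex_indices())
          for (const auto &other : vertex_to_cells[cell->vertex_index(v)])
            neighbors[cell->active_cell_index()].insert(other);

      return neighbors;
    }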

[deal.II] Re: Smooth particles hydrodynamic implementation in DealII

2022-02-14 Thread Hassan N
Dear Bruno, That is what I am looking for. It would surely be good to have them in deal.II. Otherwise, could you please briefly describe the algorithm that you used to do that? Thanks, Hassan On Sunday, February 13, 2022 at 6:21:42 PM UTC+3:30 blais...@gmail.com wrote: > Dear Hassan, > There is no