Re: [deal.II] constraining dofs across distributed mesh

2019-07-10 Thread Daniel Arndt
[...] It works in the serial case. However, it doesn't work when I use multiple MPI processes. For my actual mesh, it breaks when n_processes >= 5. Basically I have to make sure that dof 41 is included in the relevant dofs of all the processes. How do I do that? I think this type of [...]
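A minimal sketch of adding one dof to every process's relevant set, assuming a DoFHandler named dof_handler built on a parallel::distributed::Triangulation (the hard-coded index 41 follows the example above; the names are assumptions, not code from the thread):

    #include <deal.II/base/index_set.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/affine_constraints.h>

    using namespace dealii;

    // Make dof 41 locally relevant everywhere so that every process can
    // store constraints that refer to it.
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
    locally_relevant_dofs.add_index(41);

    // Build the constraints object on the enlarged set.
    AffineConstraints<double> constraints(locally_relevant_dofs);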

Re: [deal.II] constraining dofs across distributed mesh

2019-07-10 Thread Reza Rastak
The picture I mentioned can be viewed at this link: https://ibb.co/1nrRvfr (Reza). On Wednesday, July 10, 2019 at 8:09:51 PM UTC-7, Reza Rastak wrote: Thank you Daniel, I think the way I described the question was a little vague, so I'd like to clarify it. Consider the simplified mesh (shown below) [...]

Re: [deal.II] constraining dofs across distributed mesh

2019-07-10 Thread Reza Rastak
Thank you Daniel, I think the way I described the question was a little vague, so I'd like to clarify it. Consider the simplified mesh (shown below) with 30 cells, 42 vertices, and 42 dofs. My desired constraint specifies that all the dofs corresponding to the vertices on the right edge must have [...]
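The constraint Reza describes, expressed as an AffineConstraints sketch (edge_dofs and first_dof are hypothetical placeholders for the dof indices collected on the right edge; this code is not from the thread):

    // Tie every edge dof to the first one: u_dof = 1.0 * u_first_dof.
    for (const types::global_dof_index dof : edge_dofs)
      if (dof != first_dof && !constraints.is_constrained(dof))
        {
          constraints.add_line(dof);
          constraints.add_entry(dof, first_dof, 1.0);
        }
    constraints.close();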

Re: [deal.II] constraining dofs across distributed mesh

2019-07-10 Thread Daniel Arndt
Reza, What exactly do you mean by "link all the dofs of all the nodes along an edge of the body together"? From the error message above, I assume that you want to identify certain degrees of freedom via a ConstraintMatrix (or AffineConstraints) object. That is basically what DoFTools::make_periodicity_constraints [...]
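For reference, a hedged example of calling that function; the boundary ids (0 and 1) and the direction (0, i.e. x) are made-up values:

    // Identify the dofs on boundary 0 with the matching dofs on boundary 1.
    AffineConstraints<double> constraints;
    DoFTools::make_periodicity_constraints(dof_handler,
                                           /*b_id1=*/0,
                                           /*b_id2=*/1,
                                           /*direction=*/0,
                                           constraints);
    constraints.close();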

Re: [deal.II] Equivalent option for local_range() for Trilinos vectors

2019-07-10 Thread Daniel Arndt
Vivek, just remove the line "if (i->first >= range.first && i->first < range.second)". VectorTools::interpolate_boundary_values should only set values for locally active degrees of freedom anyway. As long as the values are consistent between different processes, this should work just fine. Best, Daniel
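What remains after dropping that check might look like the following sketch, assuming boundary_values is the std::map filled by VectorTools::interpolate_boundary_values and solution is a TrilinosWrappers::MPI::Vector (the names are assumptions):

    // Write the interpolated boundary values without a manual ownership
    // check; compress() merges the (consistent) values across processes.
    for (const auto &bv : boundary_values)
      solution(bv.first) = bv.second;
    solution.compress(VectorOperation::insert);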

[deal.II] constraining dofs across distributed mesh

2019-07-10 Thread Reza Rastak
Hi, I'd like to link all the dofs of all the nodes along an edge of the body together. So I find the first dof (dof i) located on the edge and then I try to link all the dofs on that edge (except i) to dof i. It does not work if I have a sufficiently large number of processes where that edge is [...]

[deal.II] Equivalent option for local_range() for Trilinos vectors

2019-07-10 Thread Vivek Kumar
Hi all, I had a legacy code where PETScWrappers::MPI::Vector was used for parallel computing. For very large problems, I was told to move to Trilinos. It was fairly easy to do so for most parts of the code, but I am stuck on the implementation of boundary values. Currently the boundary value [...]
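For context, the contiguous local_range() pattern from the PETSc code can be written generically with an IndexSet, which both vector wrapper families provide (a sketch with assumed names vec and boundary_values, not the original code):

    // locally_owned_elements() replaces a contiguous [begin, end) range
    // and also works when the ownership is not contiguous.
    const IndexSet locally_owned = vec.locally_owned_elements();
    for (const auto &bv : boundary_values)
      if (locally_owned.is_element(bv.first))
        vec(bv.first) = bv.second;
    vec.compress(VectorOperation::insert);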

[deal.II] Re: Can I get the cell numbers of the cells neighbouring a face?

2019-07-10 Thread Bruno Turcksin
Stephen, On Wednesday, July 10, 2019 at 3:56:02 PM UTC-4, Stephen wrote: I have an error estimator which I need to calculate between edges and I'd like to loop over all edges in the triangulation. I know that I can do this the "traditional" way via looping over all cells and then looping [...]
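One common idiom for visiting each face exactly once, not necessarily the one the truncated reply goes on to describe, uses the triangulation's user flags:

    // Mark each face the first time it is seen so the estimator is
    // evaluated only once per face; triangulation and dof_handler are
    // assumed names.
    triangulation.clear_user_flags();
    for (const auto &cell : dof_handler.active_cell_iterators())
      for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
        if (!cell->face(f)->user_flag_set())
          {
            cell->face(f)->set_user_flag();
            // ... evaluate the face contribution of the estimator here ...
          }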

[deal.II] Can I get the cell numbers of the cells neighbouring a face?

2019-07-10 Thread Stephen
I have an error estimator which I need to calculate between edges and I'd like to loop over all edges in the triangulation. I know that I can do this the "traditional" way via looping over all cells and then looping over all faces in the cell but, if I do this, I end up calculating the same thing [...]
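As for identifying the cells adjacent to a face: starting from a cell and a local face number, the cell on the other side is reachable directly. A sketch (it ignores the case of a refined neighbor):

    // The neighbor behind face f, if the face is not at the boundary.
    if (!cell->at_boundary(f))
      {
        const auto neighbor = cell->neighbor(f);
        const unsigned int neighbor_index = neighbor->index();
        // neighbor_of_neighbor(f) gives the face number as seen from there.
      }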

Re: [deal.II] Installation error on Haswell nodes on Cori at NERSC (failed AVX512 support)

2019-07-10 Thread Martin Kronbichler
Dear Steve, From what I can see, the failure is for the expand_instantiations script of deal.II, which is compiled as part of deal.II. It uses slightly different flags than the full install, but assuming that you either passed -xHASWELL or -xCORE-AVX2 to CMAKE_CXX_FLAGS, it should not generate that [...]
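One plausible way those flags enter the configuration; the thread does not show the full command, so this invocation is an assumption:

    cmake -DCMAKE_CXX_COMPILER=icpc \
          -DCMAKE_CXX_FLAGS="-xCORE-AVX2" \
          [...] /path/to/dealii-9.0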

[deal.II] Installation error on Haswell nodes on Cori at NERSC (failed AVX512 support)

2019-07-10 Thread Stephen DeWitt
Hello, I'm trying to install deal.II on the Haswell nodes on Cori at NERSC using the Intel compiler. I'm using deal.II version 9.0, because support for a few of the function calls I make was dropped in v9.1 and I haven't had a chance to modify those sections of the code. In my CMake command, I'm [...]

Re: [deal.II] Output subset of cells

2019-07-10 Thread Alexander
Thanks, Daniel, looks appropriate! On Friday, July 5, 2019 at 8:05:42 PM UTC+2, Daniel Arndt wrote: Alexander, have a look at https://www.dealii.org/developer/doxygen/deal.II/classDataOut.html#afdefbb966f2053f4c4fc52293d2339ce . You just need to overload DataOut::next_cell(). [...]
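The overload Daniel points to is the mechanism behind the FilteredDataOut class in the step-18 tutorial; a condensed, untested sketch modeled on it (filtering by subdomain id is an assumption about the use case):

    #include <deal.II/grid/filtered_iterator.h>
    #include <deal.II/numerics/data_out.h>

    using namespace dealii;

    // Write only the cells of one subdomain by skipping all others in
    // first_cell()/next_cell() (after step-18's FilteredDataOut).
    template <int dim>
    class FilteredDataOut : public DataOut<dim>
    {
    public:
      FilteredDataOut(const unsigned int subdomain_id)
        : subdomain_id(subdomain_id)
      {}

      virtual typename DataOut<dim>::cell_iterator first_cell() override
      {
        auto cell = this->dofs->begin_active();
        while (cell != this->dofs->end() &&
               cell->subdomain_id() != subdomain_id)
          ++cell;
        return cell;
      }

      virtual typename DataOut<dim>::cell_iterator
      next_cell(const typename DataOut<dim>::cell_iterator &old_cell) override
      {
        if (old_cell == this->dofs->end())
          return old_cell;
        // Advance to the next cell belonging to the selected subdomain.
        const IteratorFilters::SubdomainEqualTo predicate(subdomain_id);
        return ++(FilteredIterator<typename DataOut<dim>::active_cell_iterator>(
          predicate, old_cell));
      }

    private:
      const unsigned int subdomain_id;
    };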