Daniel,

Thanks again. To summarize, I want to implement the gradient and Laplacian functions for the SymmetricTensor class.
The Laplacian (the trace of the Hessian) for the fe_values block extractor is straightforward, but the gradient (of the solution field) for the SymmetricTensor raises some questions: e.g., what would be the type of the gradient of a SymmetricTensor<2,dim>, and how should one represent [(S_1,x), (S_1,y); (S_2,x), (S_2,y); (S_3,x), (S_3,y)], i.e., each component of the SymmetricTensor turning into a vector of spatial derivatives? (One possible layout is sketched at the end of this message.)

Best,
Metin

On Tuesday, October 4, 2016 at 10:15:00 PM UTC+2, Daniel Arndt wrote:
>
> Metin,
>
>> I understand that the components are in space (x/y/z), and
>> (*shape_gradient_ptr++)[d] would be the derivative of phi in each
>> direction; is that correct?
>>
> Yes, that is the derivative of the dth component in direction d.
>
>> (Compared to vectors,) the implementation of the divergence function for
>> tensors is the following:
>>
>> if (snc != -1)
>>   {
>>     const unsigned int comp =
>>       shape_function_data[shape_function].single_nonzero_component_index;
>>
>>     const dealii::Tensor<1, spacedim> *shape_gradient_ptr =
>>       &shape_gradients[snc][0];
>>
>>     const TableIndices<2> indices =
>>       dealii::Tensor<2, spacedim>::unrolled_to_component_indices(comp);
>>     const unsigned int ii = indices[0];
>>     const unsigned int jj = indices[1];
>>
>>     for (unsigned int q_point = 0; q_point < n_quadrature_points;
>>          ++q_point, ++shape_gradient_ptr)
>>       {
>>         divergences[q_point][jj] += value * (*shape_gradient_ptr)[ii];
>>       }
>>   }
>>
> This is similar to the previous one. Note that the jth component of the
> divergence of a rank-2 tensor is defined as (div T)_j = \sum_i \partial_i T_{ij}.
> This is what is implemented for the general (non-symmetric) Tensor class.
> You have to compare this to the Vector implementation for primitive shape
> functions.
> Basically, ii corresponds to the first index of the non-zero component and
> jj corresponds to the second index of the non-zero component.
> We figure out the correct component of the divergence and add the
> appropriate derivative to it.
> For a SymmetricTensor, only T_{ij} for i \leq j is stored. Therefore, we
> additionally need the line
>
>   if (ii != jj)
>     divergences[q_point][ii] += value * (*shape_gradient_ptr)[jj];
>
> to account for the contribution of T_{ji} at the same time.
>
>> There is only the "snc != -1" case, and it seems that the component now
>> means the tensor components and not the directions in space. Is this
>> function (for tensors) not yet implemented, or is it something related to
>> non-primitive shape functions (I would appreciate if you could comment on
>> this as well)?
>>
> This is only implemented for primitive shape functions.
>
> Best,
> Daniel
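
To make the (ii, jj) bookkeeping in the quoted reply concrete, here is a small self-contained sketch (not the actual deal.II implementation) of the divergence of a symmetric rank-2 tensor field, computed from the gradients of its stored components. The function name and the component_gradients container are invented for illustration; the logic is the one Daniel describes, including the extra mirrored contribution when ii != jj.

  #include <deal.II/base/symmetric_tensor.h>
  #include <deal.II/base/table_indices.h>
  #include <deal.II/base/tensor.h>
  #include <vector>

  using namespace dealii;

  // Divergence (div T)_j = \sum_i \partial_i T_{ij} of a symmetric rank-2
  // tensor field, given the spatial gradient of each *stored* component:
  // component_gradients[c][k] = \partial_k T_c, where T_c is the cth
  // unrolled component of the SymmetricTensor (only i <= j is stored).
  template <int dim>
  Tensor<1, dim>
  divergence_from_stored_components(
    const std::vector<Tensor<1, dim>> &component_gradients)
  {
    Tensor<1, dim> div;
    for (unsigned int c = 0;
         c < SymmetricTensor<2, dim>::n_independent_components;
         ++c)
      {
        const TableIndices<2> indices =
          SymmetricTensor<2, dim>::unrolled_to_component_indices(c);
        const unsigned int ii = indices[0];
        const unsigned int jj = indices[1];

        // Contribution of the stored entry T_{ii,jj} ...
        div[jj] += component_gradients[c][ii];

        // ... and of its mirror T_{jj,ii}, which is not stored separately.
        if (ii != jj)
          div[ii] += component_gradients[c][jj];
      }
    return div;
  }

Inside FEValuesViews the same pattern would presumably be applied per shape function and quadrature point, multiplying by the local dof value as in the quoted loop.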
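As for the type question at the top of this message: one possible (purely hypothetical) choice is a full rank-3 tensor whose last index is the spatial derivative direction, (grad S)[i][j][k] = \partial_k S_{ij}, with the (i, j) symmetry simply stored redundantly. A minimal sketch, again with invented names and not an existing deal.II interface:

  #include <deal.II/base/symmetric_tensor.h>
  #include <deal.II/base/table_indices.h>
  #include <deal.II/base/tensor.h>
  #include <vector>

  using namespace dealii;

  // One possible representation of the gradient of a SymmetricTensor<2,dim>
  // field: a Tensor<3,dim> with (grad S)[i][j][k] = \partial_k S_{ij}.
  // component_gradients[c][k] = \partial_k S_c for the cth unrolled component.
  template <int dim>
  Tensor<3, dim>
  gradient_from_stored_components(
    const std::vector<Tensor<1, dim>> &component_gradients)
  {
    Tensor<3, dim> grad;
    for (unsigned int c = 0;
         c < SymmetricTensor<2, dim>::n_independent_components;
         ++c)
      {
        const TableIndices<2> indices =
          SymmetricTensor<2, dim>::unrolled_to_component_indices(c);
        for (unsigned int k = 0; k < dim; ++k)
          {
            grad[indices[0]][indices[1]][k] = component_gradients[c][k];
            // The mirrored entry holds the same derivative by symmetry.
            grad[indices[1]][indices[0]][k] = component_gradients[c][k];
          }
      }
    return grad;
  }

Whether this redundancy is acceptable, or whether one would rather keep a std::vector of Tensor<1,dim>, one per independent component, is exactly the design question I am asking about.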