[deal.II] Re: Adding an independent DoF to a Trilinos system

2018-03-20 Thread Ben Shields
Thank you so much for your help, Wolfgang. I'll keep looking into that line which is still causing me trouble, but hopefully this will alleviate the biggest chunk of the bottleneck. -- The deal.II project is located at http://www.dealii.org/ For mailing list/forum options, see https://groups.go

Re: [deal.II] Re: Iterating over all the entries in a PETScWrapper::MPI::SparseMatrix in parallel

2018-03-20 Thread Wolfgang Bangerth
On 03/18/2018 04:41 PM, Feimi Yu wrote: > Please ignore my last post. I made a mistake there. Attached is the revised version to better illustrate the problem. Great, much appreciated -- a most excellent testcase! I can reproduce the problem and have something that may work. Will finish this tomorrow.

Re: [deal.II] Re: Adding an independent DoF to a Trilinos system

2018-03-20 Thread Wolfgang Bangerth
Ben, > These two lines added_sp.block(0, 1).copy_from(col); added_sp.block(1, 0).copy_from(row); account for about 60% of the bottleneck (considering only the runtime for the piece of code in the original post). These I can give back to you: https://github.com/dealii/dealii/pull/6081

[deal.II] Re: Adding an independent DoF to a Trilinos system

2018-03-20 Thread Ben Shields
> Can you narrow down where the time is lost? These two lines added_sp.block(0, 1).copy_from(col); added_sp.block(1, 0).copy_from(row); account for about 60% of the bottleneck (considering only the runtime for the piece of code in the original post). The other 40% of the bottleneck is …

Re: [deal.II] Contributing back a small feature enhancement (MGTransferPrebuilt parallel PETSc support)

2018-03-20 Thread Wolfgang Bangerth
On 03/20/2018 10:43 AM, Alexander Knieps wrote: > I think that makes sense. Are the other MG classes all instantiated for dim != spacedim? After looking a bit more into this, the answer, by .inst.in file in source/multigrid: mg_transfer_prebuilt.inst.in: no; mg_transfer_matrix_free…

Re: [deal.II] Contributing back a small feature enhancement (MGTransferPrebuilt parallel PETSc support)

2018-03-20 Thread Alexander Knieps
Dear Wolfgang, thank you for your response. I will post the pull request on GitHub at the end of this week, so that it can be reviewed. I will give detailed reasoning for each modification there. > I think that makes sense. Are the other MG classes all instantiated for dim != spacedim? > After…

Re: [deal.II] cell iterator for data post processor

2018-03-20 Thread Wolfgang Bangerth
Prateek, > I want to calculate output data that are specific to each cell. Each cell has a different material property. I am plotting the data using the function compute_derived_quantities_vector(..) under the DataPostprocessor class. Is there any way to know which cell is currently being calculated…

[deal.II] Postdoc @ SISSA mathLab, Trieste -- numerical modeling and simulation of nanometric electronic devices

2018-03-20 Thread luca.heltai
Dear all, the following announcement may be of interest to you or to some of your students. We are actively seeking an experienced deal.II programmer ASAP! Best, Luca. -- A one-year postdoctoral position is available at SISSA @ mathLab, starting May 1, 2018 (see https://goo.gl/247WN

[deal.II] cell iterator for data post processor

2018-03-20 Thread Prateek Sharma
Hello, I want to calculate output data that are specific to each cell. Each cell has a different material property. I am plotting the data using the function *compute_derived_quantities_vector(..)* under the DataPostprocessor class. Is there any way to know which cell is currently being calculated…