Re: [deal.II] Re: Amesos_Superludist with TrilinosWrappers bad performance

2016-07-21 Thread Vinetou Incucuna
Hello, once more: > 71874 dofs for the Cahn-Hilliard part of the system, approx. 2000-3000 dofs per core. I have solved the Navier-Stokes-Cahn-Hilliard system on 32 cores in a decoupled fashion. Algorithm: 1) assemble and solve the Cahn-Hilliard part -> 71874 dofs; 2) assemble and solve the Navier-Stokes part -> 2773956 …
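For illustration only, a minimal compilable skeleton of the decoupled time loop described above; the four functions are hypothetical stand-ins, not the poster's actual code:

    // Sketch of the decoupled algorithm: each time step first solves the
    // Cahn-Hilliard block, then the Navier-Stokes block, instead of one
    // monolithic system.
    void assemble_cahn_hilliard() { /* build the 71874-dof phase-field system */ }
    void solve_cahn_hilliard()    { /* solve the phase-field system */ }
    void assemble_navier_stokes() { /* build the 2773956-dof flow system */ }
    void solve_navier_stokes()    { /* solve the flow system */ }

    int main()
    {
      const unsigned int n_time_steps = 100; // example value
      for (unsigned int step = 0; step < n_time_steps; ++step)
        {
          assemble_cahn_hilliard(); // uses the velocity of the previous step
          solve_cahn_hilliard();
          assemble_navier_stokes(); // uses the freshly computed phase field
          solve_navier_stokes();
        }
    }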

[deal.II] How to apply boundary constraints in time-dependent and adaptive code?

2016-07-21 Thread Junchao Zhang
Hello, I want to know how to apply boundary constraints in a time-dependent, adaptively refined, distributed-memory code. Within a time step, I want to perform multiple refinements. I could not find such an example in the deal.II tutorials. I have the following code, with the questionable code in red. Basically, …
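A minimal sketch (not the poster's code) of the usual pattern of rebuilding the constraints after every refinement and at every time step, using the deal.II 8.x names ConstraintMatrix / DoFTools / VectorTools; BoundaryValues<dim> is an assumed, time-dependent Function<dim>, and boundary id 0 is an assumption:

    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/constraint_matrix.h>
    #include <deal.II/numerics/vector_tools.h>

    template <int dim>
    void rebuild_constraints(const dealii::DoFHandler<dim> &dof_handler,
                             const double                   time,
                             dealii::ConstraintMatrix      &constraints)
    {
      dealii::IndexSet locally_relevant_dofs;
      dealii::DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                      locally_relevant_dofs);

      constraints.clear();
      constraints.reinit(locally_relevant_dofs);

      // Hanging-node constraints change whenever the mesh changes.
      dealii::DoFTools::make_hanging_node_constraints(dof_handler, constraints);

      // Dirichlet values may change in time, so they are re-interpolated here;
      // BoundaryValues<dim> is a hypothetical Function<dim> with set_time().
      BoundaryValues<dim> boundary_values;
      boundary_values.set_time(time);
      dealii::VectorTools::interpolate_boundary_values(dof_handler,
                                                       0, // assumed boundary id
                                                       boundary_values,
                                                       constraints);
      constraints.close();
    }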

Re: [deal.II] Re: Amesos_Superludist with TrilinosWrappers bad performance

2016-07-21 Thread Bruno Turcksin
Marek, 2016-07-21 15:59 GMT-04:00 Vinetou Incucuna: > I suppose that the next step will be a solver for general systems, like TrilinosWrappers::SolverBicgstab or TrilinosWrappers::SolverGMRES? Yes, that's right. However, the performance of Krylov solvers depends on good preconditioning …
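For illustration, a minimal sketch of such a preconditioned Krylov solve with the TrilinosWrappers classes; the names system_matrix, solution and system_rhs are assumed, and AMG is only one possible preconditioner choice whose effectiveness depends strongly on the problem:

    #include <deal.II/lac/solver_control.h>
    #include <deal.II/lac/trilinos_precondition.h>
    #include <deal.II/lac/trilinos_solver.h>
    #include <deal.II/lac/trilinos_sparse_matrix.h>
    #include <deal.II/lac/trilinos_vector.h>

    void solve_with_gmres(const dealii::TrilinosWrappers::SparseMatrix &system_matrix,
                          dealii::TrilinosWrappers::MPI::Vector        &solution,
                          const dealii::TrilinosWrappers::MPI::Vector  &system_rhs)
    {
      dealii::SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());
      dealii::TrilinosWrappers::SolverGMRES gmres(solver_control);

      // Krylov methods live or die by the preconditioner; AMG is only a
      // common first try, not guaranteed to work well for this system.
      dealii::TrilinosWrappers::PreconditionAMG preconditioner;
      preconditioner.initialize(system_matrix);

      gmres.solve(system_matrix, solution, system_rhs, preconditioner);
    }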

Re: [deal.II] Re: Amesos_Superludist with TrilinosWrappers bad performance

2016-07-21 Thread Vinetou Incucuna
Hello, thank you for the answer. I suppose that the next step will be a solver for general systems, like TrilinosWrappers::SolverBicgstab or TrilinosWrappers::SolverGMRES? M 2016-07-21 21:39 GMT+02:00 Bruno Turcksin: > Marek, > On Thursday, July 21, 2016 at 3:17:48 PM UTC-4, Marek Čapek …

[deal.II] Re: Amesos_Superludist with TrilinosWrappers bad performance

2016-07-21 Thread Bruno Turcksin
Marek, On Thursday, July 21, 2016 at 3:17:48 PM UTC-4, Marek Čapek wrote: > Hello, I have set up the Navier-Stokes-Cahn-Hilliard system in the manner of step-40 with the TrilinosWrappers backend. I have assembled the phase-field part of the system in a reasonable time (3 s); however, the solve with Amesos_Superludist …

[deal.II] Amesos_Superludist with TrilinosWrappers bad performance

2016-07-21 Thread Marek Čapek
Hello, I have set up the Navier-Stokes-Cahn-Hilliard system in the manner of step-40 with the TrilinosWrappers backend. I have assembled the phase-field part of the system in a reasonable time (3 s); however, the solve with Amesos_Superludist took approx. 700 s. I have approx. 32768 cells and 71874 dofs for the Cahn-Hilliard part …
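For reference, a minimal sketch of how the Amesos direct solver is typically selected through TrilinosWrappers::SolverDirect; the matrix and vector names are assumed, and the solver_type string is what picks SuperLU_DIST:

    #include <deal.II/lac/solver_control.h>
    #include <deal.II/lac/trilinos_solver.h>
    #include <deal.II/lac/trilinos_sparse_matrix.h>
    #include <deal.II/lac/trilinos_vector.h>

    void solve_directly(const dealii::TrilinosWrappers::SparseMatrix &system_matrix,
                        dealii::TrilinosWrappers::MPI::Vector        &solution,
                        const dealii::TrilinosWrappers::MPI::Vector  &system_rhs)
    {
      // A direct solver ignores the iteration count / tolerance settings.
      dealii::SolverControl solver_control;
      dealii::TrilinosWrappers::SolverDirect::AdditionalData data(
        /*output_solver_details=*/false,
        /*solver_type=*/"Amesos_Superludist");

      dealii::TrilinosWrappers::SolverDirect direct_solver(solver_control, data);
      direct_solver.solve(system_matrix, solution, system_rhs);
    }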

[deal.II] Re: Floquet periodic conditions and complex valued algebra

2016-07-21 Thread Daniel Garcia
Hi, I forgot to mention that I work with the elastic wave equation in the frequency domain: u(x,y,z,t) = u(x,y,z)*exp(i*omega*t) Thanks, Daniel On Thursday, July 21, 2016 at 7:13:16 PM UTC+2, Daniel Garcia wrote: > Hi all, I'm an experimental physicist, although I do some theoretical work …
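A short sketch, in LaTeX, of what this ansatz does to an (assumed) linear elastic wave equation, and of the real/imaginary splitting commonly used when the linear-algebra backend has to stay real-valued:

    % Assumed starting point: linear elastodynamics with density \rho,
    % stress \sigma(u) and forcing f:
    \[ \rho\,\partial_t^2 u = \nabla\cdot\sigma(u) + f. \]
    % Inserting the ansatz u(x,t) = \hat u(x)\, e^{i\omega t} eliminates time:
    \[ -\rho\,\omega^2\,\hat u = \nabla\cdot\sigma(\hat u) + \hat f. \]
    % With real-valued algebra one splits
    \[ \hat u = u_{\mathrm{re}} + i\, u_{\mathrm{im}} \]
    % and solves the resulting coupled real-valued system (cf. the approach
    % taken in step-29 for the Helmholtz equation).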

[deal.II] Floquet periodic conditions and complex valued algebra

2016-07-21 Thread Daniel Garcia
Hi all, I'm an experimental physicist, although I do some theoretical work as well. I'm looking for an open-source FEM library. I took a look at deal.II and it looks great. Before I start to use it, I would like to know whether you think it would be possible to do the following calculations. If you …

Re: [deal.II] Re: Transferring solutions in distributed computing

2016-07-21 Thread Daniel Arndt
Junchao, It seems that the documentation is outdated for this piece of information. In fact, neither PETScWrappers::MPI::Vector nor TrilinosWrappers::MPI::Vector has update_ghost_values. What you should do is exactly what is done in the few lines of step-42 you referenced: "solution = distributed_solution" …
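A minimal sketch of the step-42-style pattern referred to here, with assumed names for the DoF handler, constraints, solution transfer object, and MPI communicator; the final assignment from the fully distributed vector to the ghosted vector is what imports the ghost values:

    // Assumed context: a parallel::distributed::Triangulation, a DoFHandler
    // dof_handler, hanging-node constraints, and a SolutionTransfer object
    // that was prepared before refinement, as in step-42.
    dealii::TrilinosWrappers::MPI::Vector distributed_solution(
      dof_handler.locally_owned_dofs(), mpi_communicator); // no ghost entries

    solution_transfer.interpolate(distributed_solution);        // on the new mesh
    constraints_hanging_nodes.distribute(distributed_solution); // fix hanging nodes

    // 'solution' is a ghosted vector (locally owned + locally relevant dofs);
    // copying the fully distributed vector into it communicates the ghost
    // values, so no separate update_ghost_values() call is needed.
    solution = distributed_solution;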

Re: [deal.II] Re: Transferring solutions in distributed computing

2016-07-21 Thread Junchao Zhang
Daniel, The link you provided is very helpful. Thanks. In the code, I see: solution_transfer.interpolate(distributed_solution); constraints_hanging_nodes.distribute(distributed_solution); solution = distributed_solution; I am confused by the postprocessing. I think distributed_solution does not have …

[deal.II] Decoupling FECollection and QCollection

2016-07-21 Thread Deepak Gupta
Dear All, I am trying to use hp::FECollection and hp::QCollection in my work. For QCollection, I read the following in the online documentation: "The quadrature rules have to be added in the same order as for the FECollection" …
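A minimal sketch of that matching-order requirement, with assumed polynomial degrees 1 to 3; the only point is that the quadrature rule at index i of the QCollection is the one used whenever the finite element at index i of the FECollection is active on a cell:

    #include <deal.II/base/quadrature_lib.h>
    #include <deal.II/fe/fe_q.h>
    #include <deal.II/hp/fe_collection.h>
    #include <deal.II/hp/q_collection.h>

    void build_collections(dealii::hp::FECollection<2> &fe_collection,
                           dealii::hp::QCollection<2>  &quadrature_collection)
    {
      // Entry i of the QCollection belongs to entry i of the FECollection.
      for (unsigned int degree = 1; degree <= 3; ++degree)
        {
          fe_collection.push_back(dealii::FE_Q<2>(degree));
          quadrature_collection.push_back(dealii::QGauss<2>(degree + 1));
        }
    }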

[deal.II] Re: Transferring solutions in distributed computing

2016-07-21 Thread Daniel Arndt
Junchao, You want to use parallel::distributed::SolutionTransfer instead if you are on a parallel::distributed::Triangulation. Executing $ grep -r "parallel::distributed::SolutionTransfer" . in the examples folder tells me that this object is used in step-32, step-42, and step-48. Have, for example, a look at …
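A minimal sketch of the pattern used in those tutorial programs, with assumed names for the triangulation, DoF handler, finite element, and vectors; the key difference from the serial class is that this SolutionTransfer lives in namespace parallel::distributed and interpolates into a fully distributed (non-ghosted) vector:

    #include <deal.II/distributed/solution_transfer.h>
    #include <deal.II/lac/trilinos_vector.h>

    // Assumed context: space dimension dim, a parallel::distributed::
    // Triangulation 'triangulation', DoFHandler 'dof_handler', finite element
    // 'fe', MPI communicator 'mpi_communicator', and a ghosted vector 'solution'.
    dealii::parallel::distributed::SolutionTransfer<dim,
      dealii::TrilinosWrappers::MPI::Vector> solution_transfer(dof_handler);

    // Before refinement: register the (ghosted) vector to be transferred.
    solution_transfer.prepare_for_coarsening_and_refinement(solution);
    triangulation.execute_coarsening_and_refinement();

    // After refinement: redistribute dofs and interpolate into a vector
    // without ghost entries.
    dof_handler.distribute_dofs(fe);
    dealii::TrilinosWrappers::MPI::Vector distributed_solution(
      dof_handler.locally_owned_dofs(), mpi_communicator);
    solution_transfer.interpolate(distributed_solution);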