[deal.II] Re: Problems in solving two PDE systems using MPI+Thread parallel computing

2017-05-19 Thread Jack
Hi Bruno, I appreciate your responses very much. The matrices and vectors are TrilinosWrappers objects, so the solvers should use MPI, too. I use OpenMPI 1.8.1; its release date is Apr 22, 2014, later than the post on GitHub. I initialized MPI as follows: try { us
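[Editor's note: the preview above is cut off mid-code. For readers unfamiliar with the call being discussed, the following is a minimal sketch of how MPI is typically initialized in a deal.II program that uses TrilinosWrappers objects; the surrounding try/catch mirrors the fragment visible above, but this is not Jack's actual code.]

```cpp
#include <deal.II/base/mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
  try
    {
      using namespace dealii;

      // MPI_InitFinalize initializes MPI on construction and finalizes it
      // on destruction. As mentioned later in this thread, it requests the
      // MPI_THREAD_SERIALIZED thread-support level.
      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv);

      // ... set up TrilinosWrappers matrices/vectors and run the solvers ...
    }
  catch (std::exception &exc)
    {
      std::cerr << "Exception: " << exc.what() << std::endl;
      return 1;
    }
  return 0;
}
```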

[deal.II] Re: Getting error during cmake configuration step and later during make run

2017-05-19 Thread Vikram Bhamidipati
Hi Jean-Paul, Thank you for that link. Glad to know I was not the only one to have that issue (illegal instruction 4). Hope the underlying cause is identified in some later version of the installer. Thanks, Vikram On Thursday, May 18, 2017 at 3:44:43 PM UTC-5, Jean-Paul Pelteret wrote: > > Hi

[deal.II] Re: "operator+=" error in the assemble_system()

2017-05-19 Thread Jean-Paul Pelteret
Dear Kyusik, Firstly (and just so that we're all clear on what you're trying to implement), am I correct that the comment I linked contains the description of the integral you are trying to compute? > As far as I know fe_values2.shape_value(i,q_index) is vector-valued shape > fu

[deal.II] Re: "operator+=" error in the assemble_system()

2017-05-19 Thread hanks0227
Dear Bruno and Jean-Paul, First of all, thank you for your replies. I'm sorry, I should have been more specific about my question. As far as I know, fe_values2.shape_value(i,q_index) is a vector-valued shape function (fe2 is distributed by "dof_handler2.distribute_dofs(fe2)") and sol_grad is also

[deal.II] Re: "operator+=" error in the assemble_system()

2017-05-19 Thread Jean-Paul Pelteret
Dear Kyusik, I gather that what you are trying to implement here is the equation that you present in this post. I concur with Bruno's assessment of the problem: unfortunately, your implementation is incorrect (compare what you'v

[deal.II] Re: "operator+=" error in the assemble_system()

2017-05-19 Thread Bruno Turcksin
Kyusik, On Friday, May 19, 2017 at 8:27:39 AM UTC-4, hanks0...@gmail.com wrote: > std::vector<…> sol_grad(n_q_points); > cell_rhs(i) += fe_values2.shape_value(i,q_index) * sol_grad[q_index] * (F/p(0)/grad_sol_sq) * fe_
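[Editor's note: Bruno's full reply is truncated, and the template arguments in the quoted line were eaten by the archive's HTML rendering. The thread's subject line is the compiler error itself; the small self-contained illustration below (not the poster's code) shows why a scalar shape value times a gradient cannot be accumulated into cell_rhs(i): the product is a Tensor<1,dim>, while cell_rhs(i) is a double.]

```cpp
#include <deal.II/base/tensor.h>

int main()
{
  using namespace dealii;

  double       cell_rhs_i = 0.0;  // one entry of the local rhs vector
  const double phi        = 0.25; // what FEValues::shape_value(i,q) returns
  Tensor<1, 2> sol_grad_q;        // gradient of the scalar solution at q
  sol_grad_q[0] = 1.0;
  sol_grad_q[1] = 2.0;

  const Tensor<1, 2> product = phi * sol_grad_q; // double * tensor = tensor
  (void)product;

  // cell_rhs_i += product;  // <-- the reported error: there is no operator+=
  //                         //     that adds a Tensor<1,dim> to a double.

  // With the FEValuesExtractors::Vector view (see the sketch further down in
  // this thread), the test-function value is itself a Tensor<1,dim>, and the
  // tensor-tensor product contracts to a double:
  Tensor<1, 2> phi_vec;
  phi_vec[0] = 0.25;
  phi_vec[1] = 0.0;
  cell_rhs_i += phi_vec * sol_grad_q;            // fine: this is a double

  (void)cell_rhs_i;
  return 0;
}
```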

[deal.II] "operator+=" error in the assemble_system()

2017-05-19 Thread hanks0227
Dear All, I'm trying to use two FE_Q objects in my assemble_system() to project the gradient of the scalar solution onto a vector-valued FE space (that is, one is for the scalar solution and the other is for the vector). My assemble_q() is as follows... template <int dim> void Step6<dim>::assemble_q () { const QGauss<dim> quadra
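[Editor's note: the preview cuts the code off. The operation described, an L2 projection of the gradient of a scalar solution onto a vector-valued FE space, usually looks roughly like the sketch below. The identifiers fe_values2, sol_grad and cell_rhs mirror fragments visible in the thread; everything else (the function name, dof_handler, dof_handler2, solution, q_matrix, q_rhs) is an assumption made for this sketch, not the poster's actual assemble_q().]

```cpp
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// L2 projection of grad(u_h) onto a vector-valued space: assemble
//   M_ij = (phi_i, phi_j)   and   b_i = (phi_i, grad u_h),
// where the phi_i are the vector-valued shape functions of dof_handler2.
template <int dim>
void assemble_gradient_projection(const DoFHandler<dim> &dof_handler,  // scalar field u_h
                                  const Vector<double>  &solution,
                                  const DoFHandler<dim> &dof_handler2, // vector-valued target
                                  SparseMatrix<double>  &q_matrix,
                                  Vector<double>        &q_rhs)
{
  const QGauss<dim> quadrature(dof_handler.get_fe().degree + 1);

  FEValues<dim> fe_values(dof_handler.get_fe(), quadrature, update_gradients);
  FEValues<dim> fe_values2(dof_handler2.get_fe(), quadrature,
                           update_values | update_JxW_values);

  const unsigned int dofs_per_cell = dof_handler2.get_fe().dofs_per_cell;
  const unsigned int n_q_points    = quadrature.size();

  FullMatrix<double> cell_matrix(dofs_per_cell, dofs_per_cell);
  Vector<double>     cell_rhs(dofs_per_cell);
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);
  std::vector<Tensor<1, dim>>          sol_grad(n_q_points);

  // View on the dim vector components of fe2, starting at component 0.
  const FEValuesExtractors::Vector q_field(0);

  typename DoFHandler<dim>::active_cell_iterator
    cell  = dof_handler.begin_active(),
    cell2 = dof_handler2.begin_active();
  for (; cell != dof_handler.end(); ++cell, ++cell2)
    {
      cell_matrix = 0;
      cell_rhs    = 0;

      fe_values.reinit(cell);
      fe_values2.reinit(cell2);

      // Gradient of the scalar solution at the quadrature points.
      fe_values.get_function_gradients(solution, sol_grad);

      for (unsigned int q = 0; q < n_q_points; ++q)
        for (unsigned int i = 0; i < dofs_per_cell; ++i)
          {
            const Tensor<1, dim> phi_i = fe_values2[q_field].value(i, q);

            for (unsigned int j = 0; j < dofs_per_cell; ++j)
              cell_matrix(i, j) += phi_i *
                                   fe_values2[q_field].value(j, q) *
                                   fe_values2.JxW(q);

            // Tensor<1,dim> * Tensor<1,dim> is a dot product -> double,
            // so operator+= on the scalar cell_rhs(i) is well defined.
            cell_rhs(i) += phi_i * sol_grad[q] * fe_values2.JxW(q);
          }

      cell2->get_dof_indices(local_dof_indices);
      for (unsigned int i = 0; i < dofs_per_cell; ++i)
        {
          for (unsigned int j = 0; j < dofs_per_cell; ++j)
            q_matrix.add(local_dof_indices[i], local_dof_indices[j],
                         cell_matrix(i, j));
          q_rhs(local_dof_indices[i]) += cell_rhs(i);
        }
    }
}
```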

[deal.II] Re: Problems in solving two PDE systems using MPI+Thread parallel computing

2017-05-19 Thread Bruno Turcksin
Jack, are your solvers using MPI? This looks similar to this problem: https://github.com/open-mpi/ompi/issues/1081. Which version of MPI are you using? How do you initialize MPI? MPI_InitFinalize sets MPI_THREAD_SERIALIZED, which "tells MPI that we might use several threads but never call two MPI
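[Editor's note: as background to the thread-support levels Bruno refers to, here is a plain-MPI sketch (not part of deal.II) that queries what level the library actually provides.]

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
  int provided = 0;

  // Ask for the highest level; the library reports what it actually supports.
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

  // MPI_THREAD_SERIALIZED means several threads may exist, but only one of
  // them may be inside an MPI call at any given time.
  if (provided < MPI_THREAD_MULTIPLE)
    std::printf("provided thread level = %d: concurrent MPI calls from "
                "different threads are not allowed\n",
                provided);

  MPI_Finalize();
  return 0;
}
```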

[deal.II] Problems in solving two PDE systems using MPI+Thread parallel computing

2017-05-19 Thread Jack
Dear all, I'm trying to solve the thermal diffusion and Stokes flow problems simultaneously, similar to step-32. I opened two threads to solve the thermal diffusion and Stokes equations with linear solvers (the former is solved by a CG solver and the latter by a GMRES so
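[Editor's note: the preview is cut off, but the description suggests a pattern roughly like the sketch below, with two host threads each driving an MPI-backed solver; the function names are hypothetical. Both threads making Trilinos/MPI calls at the same time is exactly the situation the MPI_THREAD_SERIALIZED level mentioned in Bruno's reply does not allow.]

```cpp
#include <thread>

// Hypothetical stand-ins for the two solves; in the real program each one
// runs an MPI-parallel Trilinos solver (CG for temperature, GMRES for Stokes).
void solve_temperature() { /* ... CG solve, makes MPI calls ... */ }
void solve_stokes()      { /* ... GMRES solve, makes MPI calls ... */ }

int main()
{
  // Both threads can be inside MPI simultaneously -- more than the
  // MPI_THREAD_SERIALIZED level requested at initialization permits.
  std::thread thermal(solve_temperature);
  std::thread stokes(solve_stokes);

  thermal.join();
  stokes.join();

  return 0;
}
```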