Wolfgang, I am using deal.II 8.5.0-pre, so the patch is already in my version. I had forgotten to change my repetitions value for the triangulation to use the vector argument. The updated code produces the error on the coarse mesh but works as desired on the finer mesh. Since I am still new to parallel computing, I was simply running on a mesh that was too coarse for 6 and 12 processors.
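(For anyone following along, here is a minimal sketch of the vector-argument form I mean. This is illustrative only and not the code from my earlier version; it uses GridGenerator::subdivided_hyper_rectangle, which takes one repetition count per coordinate direction, and assumes the usual headers and a triangulation object as in the attached test below.)

    // Illustrative sketch only: the vector form gives a repetition count per
    // coordinate direction when building a subdivided rectangle.
    std::vector<unsigned int> repetitions(2);
    repetitions[0] = 12;   // subdivisions in x
    repetitions[1] = 2;    // subdivisions in y
    GridGenerator::subdivided_hyper_rectangle(triangulation,
                                              repetitions,
                                              Point<2>(0., 0.),
                                              Point<2>(1., 1.));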
Sorry for having the wrong code the first time. This version will show the error on the coarse mesh.

- Justin

On Thursday, December 15, 2016 at 12:39:26 PM UTC-5, Wolfgang Bangerth wrote:
>
> > Justin,
> >
> > I've attached a small test case that simply initializes a parallel vector and
> > then attempts to write it to file via DataOutFaces. The finite element used
> > only lives on the faces of the cells, so DataOutFaces is appropriate.
> >
> > This test case provided me with some insight into my problem. First,
> > DataOutFaces can support parallel triangulations. I've also attached a figure
> > showing the subdomains for the test (note that this is from grouping the
> > individual file outputs in Paraview, not from generating a subdomain field
> > as part of the output). The other thing this test case showed me is to be
> > cautious in the problems I am running when running on multiple processors. I
> > am still new to parallel computing and this problem arose because I was not
> > careful. My triangulation did not have enough cells to partition the domain
> > properly when using a higher number of processors (6 or 12). Increasing the
> > number of cells removed the error and the code ran without an issue.
> >
> > This is shown in the attached code as well. I ran the code on 4, 6 and 12
> > processors for two different levels of refinement. The much finer mesh
> > executed the code without problem. The coarser mesh showed the errors I had
> > shown in the first post of this thread for 6 and 12 processors.
>
> I tried to run your code on 6 and 12 processors, but it finishes without an
> error. With the code you sent, do you just have to do
>     mpirun -np 12 ./test
> to trigger the error on your machine?
>
> What version of deal.II are you using? I (re-)discovered this patch
>     https://github.com/dealii/dealii/pull/2054
> that is part of deal.II 8.4, I believe.
>
> Best
>  W.
>
> --
> Wolfgang Bangerth          email: bang...@colostate.edu
>                            www:   http://www.math.colostate.edu/~bangerth/
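(For later readers: a small sanity check along the lines of what bit me here, sketched under the assumption that the triangulation and communicator are set up as in the attached test. The check itself is not part of the original code.)

    // Sketch of a guard against the "mesh too coarse for the number of ranks"
    // situation described above; not part of the attached test case.
    const unsigned int n_ranks =
      Utilities::MPI::n_mpi_processes(mpi_communicator);
    AssertThrow (triangulation.n_global_active_cells() >= n_ranks,
                 ExcMessage ("The mesh has fewer active cells than MPI processes; "
                             "refine the mesh or run with fewer processes."));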
#include <deal.II/base/exceptions.h>
#include <deal.II/base/logstream.h>
#include <deal.II/lac/generic_linear_algebra.h>

// uncomment the following #define if you have PETSc and Trilinos installed
// and you prefer using Trilinos in this example:
// #define FORCE_USE_OF_TRILINOS

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
  using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
  using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
}

#include <deal.II/lac/petsc_parallel_vector.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/numerics/data_out_faces.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>
#include <deal.II/base/conditional_ostream.h>

#include <fstream>
#include <iostream>

int main (int argc, char **argv)
{
  const unsigned int dim = 2;

  try
    {
      using namespace dealii;

      // This test case shows that the DataOutFaces class is able to support
      // parallel triangulations. The main problem I was experiencing was that
      // the mesh I was testing on was too coarse for larger numbers of
      // processors. This test case shows that as well: for 4 processors the
      // code produces output without error for both the 12-repetition and the
      // 2-repetition meshes, while for 6 and 12 processors only the
      // 12-repetition case produces the proper output. It also shows that, as
      // long as the mesh is adequately refined, DataOutFaces produces the
      // output for each subdomain.
      Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

      {
        MPI_Comm mpi_communicator(MPI_COMM_WORLD);

        parallel::distributed::Triangulation<dim>
        triangulation(mpi_communicator,
                      typename Triangulation<dim>::MeshSmoothing
                      (Triangulation<dim>::smoothing_on_refinement |
                       Triangulation<dim>::smoothing_on_coarsening));

        ConditionalOStream pcout(std::cout,
                                 (Utilities::MPI::this_mpi_process(mpi_communicator) == 0));

        // A vector-valued element whose degrees of freedom live only on the
        // faces of the cells, which is why DataOutFaces is used for output.
        FESystem<dim>   fe(FE_FaceQ<dim>(2), dim);
        DoFHandler<dim> dof_handler(triangulation);

        // Number of subdivisions per direction for the two test meshes:
        // 12 gives the fine mesh, 2 the coarse one.
        std::vector<unsigned int> reps(2);
        reps[0] = 12;
        reps[1] = 2;

        for (unsigned int i=0; i<reps.size(); ++i)
          {
            triangulation.clear();
            GridGenerator::subdivided_hyper_cube (triangulation, reps[i], 0, 1);

            dof_handler.distribute_dofs(fe);

            // Set up a ghosted parallel vector and fill it with a constant.
            LA::MPI::Vector locally_relevant_sol;
            IndexSet        locally_owned_dofs;
            IndexSet        locally_relevant_dofs;

            locally_owned_dofs = dof_handler.locally_owned_dofs();
            DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                    locally_relevant_dofs);

            locally_relevant_sol.reinit(locally_owned_dofs,
                                        locally_relevant_dofs,
                                        mpi_communicator);
            locally_relevant_sol = 2.;

            // One .vtu file per subdomain, labeled by the owning process.
            const std::string face_out = ("output-q" +
                                          Utilities::int_to_string(2,1) +
                                          "-i" +
                                          Utilities::int_to_string(i,1) +
                                          "." +
                                          Utilities::int_to_string
                                          (triangulation.locally_owned_subdomain(), 4));
            std::ofstream face_output((face_out + ".vtu").c_str());

            DataOutFaces<dim> data_out_face(false);

            std::vector<std::string> face_names;
            for (unsigned int d=0; d<dim; ++d)
              face_names.push_back ("vector");

            std::vector<DataComponentInterpretation::DataComponentInterpretation>
            face_component_type;
            for (unsigned int d=0; d<dim; ++d)
              face_component_type.push_back
                (DataComponentInterpretation::component_is_part_of_vector);

            data_out_face.add_data_vector (dof_handler,
                                           locally_relevant_sol,
                                           face_names,
                                           face_component_type);
            data_out_face.build_patches (2);
            data_out_face.write_vtu (face_output);

            if (i==0)
              pcout << "Fine mesh finished\n";
            else
              pcout << "Coarse mesh finished\n";
          }
      }
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
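To reproduce, build the test and run it as in Wolfgang's reply, e.g. mpirun -np 4 ./test, mpirun -np 6 ./test, or mpirun -np 12 ./test (assuming the executable is named ./test). With 6 or 12 processes the 2-repetition mesh triggers the error from the first post of this thread, while the 12-repetition mesh finishes cleanly on all three process counts.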