Wolfgang,

I've attached a small test case that simply initializes a parallel vector 
and then attempts to write it to a file via DataOutFaces. The finite element 
used only lives on the faces of the cells, so DataOutFaces is appropriate. 

This test case provided me with some insight into my problem. First, 
DataOutFaces does support parallel triangulations. I've also attached a 
figure showing the subdomains for the test (note that this comes from grouping 
the individual per-processor output files in Paraview, not from generating a 
subdomain field as part of the output). The other thing this test case showed 
me is to be careful about the problems I run on multiple processors. I am 
still new to parallel computing, and this problem arose because I was not 
careful: my triangulation did not have enough cells to partition the domain 
properly when using a larger number of processors (6 or 12). Increasing the 
number of cells removed the error and the code ran without issue. 
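
For anyone who runs into the same error: a minimal sanity check along these 
lines (just a sketch, reusing the triangulation and mpi_communicator names 
from the attached test case) would have flagged the problem for me before the 
output stage:

    const unsigned int n_ranks =
        Utilities::MPI::n_mpi_processes (mpi_communicator);
    if (triangulation.n_global_active_cells() < n_ranks)
        std::cerr << "Warning: only " << triangulation.n_global_active_cells()
                  << " active cells for " << n_ranks
                  << " MPI processes; some subdomains may be empty."
                  << std::endl;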

This is shown in the attached code as well. I ran the code on 4, 6 and 12 
processors for two different levels of refinement. The finer mesh ran 
without problems. The coarser mesh produced the errors I showed in the first 
post of this thread for 6 and 12 processors. 

Thank you for your time and for helping me get to the bottom of this issue. I'm 
glad we were able to figure it out and that parallel support had already 
been extended to DataOutFaces.

- Justin


On Wednesday, December 14, 2016 at 2:06:07 AM UTC-5, Wolfgang Bangerth 
wrote:
>
> On 12/13/2016 10:13 AM, Justin Kauffman wrote: 
> > 
> > Upon further investigation you are correct. If I go beyond 4 processors 
> I get 
> > the same error even with the default zero subdivisions. 
>
> Justin -- I suspect that the class simply does not support parallel 
> triangulations. Do you have any suspicion that it may? 
>
> In any case, the error message should be better. If you want, create a 
> minimal 
> testcase that shows the problem and we can at least find the place where 
> one 
> should add a more meaningful error message. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>

#include <deal.II/base/exceptions.h>
#include <deal.II/base/logstream.h>
#include <deal.II/lac/generic_linear_algebra.h>


// uncomment the following #define if you have PETSc and Trilinos installed
// and you prefer using Trilinos in this example:
// #define FORCE_USE_OF_TRILINOS

// This will either import PETSc or TrilinosWrappers into the namespace
// LA. Note that we are defining the macro USE_PETSC_LA so that we can detect
// if we are using PETSc (see solve() for an example where this is necessary)
namespace LA
{
#if defined(DEAL_II_WITH_PETSC) && !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
    using namespace dealii::LinearAlgebraPETSc;
#  define USE_PETSC_LA
#elif defined(DEAL_II_WITH_TRILINOS)
    using namespace dealii::LinearAlgebraTrilinos;
#else
#  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
#endif
}

#include <deal.II/lac/petsc_parallel_vector.h>

#include <deal.II/grid/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/grid_generator.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_face.h>

#include <deal.II/numerics/data_out_faces.h>
#include <deal.II/base/utilities.h>
#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/distributed/grid_refinement.h>

#include <fstream>
#include <iostream>


int main (int argc, char **argv)
{
    const unsigned int dim = 2;
    
    try
    {
        using namespace dealii;
        
        // This test case shows that the DataOutFaces class is able to support parallel triangulations.
        // The main problem I was experiencing was that the mesh I was testing on was too coarse for
        // the larger numbers of processors, and this test case shows that as well. For 4 processors the
        // code produces output without error for both the 12-repetition and the 2-repetition mesh. For
        // 6 and 12 processors only the 12-repetition case produces the proper output. Fortunately it
        // does show that, as long as the mesh is adequately refined, DataOutFaces produces the output
        // for each subdomain.
        
        Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
        {
            MPI_Comm                                  mpi_communicator(MPI_COMM_WORLD);
            parallel::distributed::Triangulation<dim> triangulation(mpi_communicator,
                                                                    typename Triangulation<dim>::MeshSmoothing
                                                                    (Triangulation<dim>::smoothing_on_refinement |
                                                                     Triangulation<dim>::smoothing_on_coarsening));
            FESystem<dim>                             fe(FE_FaceQ<dim>(2), dim);
            DoFHandler<dim>                           dof_handler(triangulation);
            std::vector<unsigned int>                 reps(2);
            reps[0] = 12;
            reps[1] = 2;
            
            for (unsigned int i=0; i<reps.size(); ++i) {
                triangulation.clear();
                // Rebuild the mesh with the current number of repetitions so that
                // both the fine (12x12) and the coarse (2x2) cases are tested.
                GridGenerator::subdivided_hyper_cube (triangulation, reps[i], 0, 1);
                dof_handler.distribute_dofs(fe);
                
                LA::MPI::Vector locally_relevant_sol;
                IndexSet        locally_owned_dofs;
                IndexSet        locally_relevant_dofs;
                
                locally_owned_dofs = dof_handler.locally_owned_dofs();
                DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                        locally_relevant_dofs);
                locally_relevant_sol.reinit(locally_owned_dofs,
                                            locally_relevant_dofs,
                                            mpi_communicator);
                
                locally_relevant_sol = 2.;
                
                // Name the output file by polynomial degree, mesh index, and the
                // subdomain (MPI rank) that writes it.
                const std::string face_out = ("output-q" + Utilities::int_to_string(2,1) +
                                              "-i" + Utilities::int_to_string(i,1) +
                                              "." +
                                              Utilities::int_to_string
                                              (triangulation.locally_owned_subdomain(), 4));
                
                std::ofstream face_output((face_out + ".vtu").c_str());
                
                DataOutFaces<dim> data_out_face(false);
                
                std::vector<std::string> face_names;
                
                for (unsigned int d=0; d<dim; ++d)
                    face_names.push_back ("vector");
                
                std::vector<DataComponentInterpretation::DataComponentInterpretation> face_component_type;
                
                for (unsigned int d=0; d<dim; ++d)
                    face_component_type.push_back (DataComponentInterpretation::component_is_part_of_vector);
                
                data_out_face.add_data_vector (dof_handler,
                                               locally_relevant_sol,
                                               face_names,
                                               face_component_type);
                
                data_out_face.build_patches (2);
                data_out_face.write_vtu (face_output);
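
                // Untested sketch, not part of the original test: rank 0 could also
                // write a .pvtu master record referencing all per-processor pieces,
                // so Paraview can load the whole field at once instead of grouping
                // the .vtu files by hand. This relies on
                // DataOutInterface::write_pvtu_record(), which DataOutFaces inherits.
                if (Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
                {
                    const std::string face_base = ("output-q" + Utilities::int_to_string(2,1) +
                                                   "-i" + Utilities::int_to_string(i,1));
                    std::vector<std::string> piece_names;
                    for (unsigned int p=0; p<Utilities::MPI::n_mpi_processes(mpi_communicator); ++p)
                        piece_names.push_back (face_base + "." +
                                               Utilities::int_to_string(p,4) + ".vtu");
                    std::ofstream pvtu_output ((face_base + ".pvtu").c_str());
                    data_out_face.write_pvtu_record (pvtu_output, piece_names);
                }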
            }

        }
        
    }
    catch (std::exception &exc)
    {
        std::cerr << std::endl << std::endl
        << "----------------------------------------------------"
        << std::endl;
        std::cerr << "Exception on processing: " << std::endl
        << exc.what() << std::endl
        << "Aborting!" << std::endl
        << "----------------------------------------------------"
        << std::endl;
        return 1;
    }
    catch (...)
    {
        std::cerr << std::endl << std::endl
        << "----------------------------------------------------"
        << std::endl;
        std::cerr << "Unknown exception!" << std::endl
        << "Aborting!" << std::endl
        << "----------------------------------------------------"
        << std::endl;
        return 1;
    }
    
    return 0;
}
