Hi Wolfgang,

I think either I misunderstood the function documentation and your email, or 
I didn't state my problem clearly. As a workaround, I have been reading the 
mesh in as a normal (i.e. non-distributed) triangulation. Now that we are 
trying to solve larger-scale geophysics problems, we are running into 
limitations with that approach, so I am coming back to this post. I assume 
many other applications could benefit from a solution as well.

What we want:
We have a distributed, refined mesh (mesh 1) with defined parameters. This 
mesh has been saved as a *.vtu file with 'serialize_triangulation = true' 
(see the excerpt below), and we want to read it back in from that .vtu file 
as a distributed triangulation.
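
For reference, the .vtu file is written like this (an excerpt from the test 
snippet attached at the end, where triangulation1 stands in for the 
distributed mesh 1 and "mesh.vtu" is just a placeholder name):

  const std::string filename_vtu = "mesh.vtu";
  std::ofstream out1(filename_vtu);
  GridOut           grid_out;
  GridOutFlags::Vtu flag;
  flag.serialize_triangulation = true;
  grid_out.set_flags(flag);
  grid_out.write_vtu(triangulation1, out1);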

What we have done:
As you suggested, we attached the distributed triangulation to the GridIn 
object and called read_vtu(); however, n_global_active_cells() then returns 
0. I have attached the test snippet below. Hopefully you or someone else 
will have the time to correct my understanding of this function.

Many thanks,
Longying



On Thursday, 18 January 2024 at 03:57:08 UTC+2 Wolfgang Bangerth wrote:

>
> On 12/23/23 08:51, LY XXXiao wrote:
> > 
> > I am trying to use two sets of mesh for different parts of the problem. 
> > Mesh 1 (triangulation) is refined outside the program but read through 
> > read_vtu() function, which means that it has to be a serial 
> > triangulation. The mesh 2 is further refined from Mesh 1 for the heavy 
> > computation (and therefore has to be in distributed mode).
> > 
> > Do you have any smart solution to convert the refined serial 
> > triangulation to a distributed one ?
>
> Longying:
> our apologies for letting your question sit for so long.
>
> It is not true that, if you read a mesh via GridIn, the triangulation has 
> to be sequential. You can attach a parallel::distributed::Triangulation 
> to a GridIn object, and it will happily create a parallel (coarse) 
> triangulation for you that you can then refine as often as you want.
>
> Best
> W.
>
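
To make sure I am reading your suggestion correctly, this is the pattern I 
tried, shown here as a minimal, self-contained sketch (the file name 
"mesh.vtu" and the commented-out refinement level are placeholders; the read 
call is read_vtu() because the mesh was written with 
serialize_triangulation = true):

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_in.h>

#include <fstream>
#include <iostream>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // Attach a parallel::distributed::Triangulation to GridIn, as suggested.
  parallel::distributed::Triangulation<3> tria(MPI_COMM_WORLD);
  GridIn<3> grid_in;
  grid_in.attach_triangulation(tria);

  // "mesh.vtu" is the file previously written with
  // serialize_triangulation = true (placeholder name).
  std::ifstream in("mesh.vtu");
  grid_in.read_vtu(in);

  // This is where I would expect to be able to refine further, as in your
  // reply; commented out until the read itself works.
  // tria.refine_global(2);

  std::cout << "n_global_active_cells = " << tria.n_global_active_cells()
            << std::endl;

  return 0;
}

The full test snippet I ran is attached below.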

#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <string>
#include <vector>
#include <utility>

#include <fstream>
#include <iostream>
#include <sstream>

using namespace dealii;
                         
int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
  MPI_Comm mpi_communicator(MPI_COMM_WORLD);

  parallel::distributed::Triangulation<3> triangulation0(
    mpi_communicator,
    Triangulation<3>::MeshSmoothing(
      Triangulation<3>::limit_level_difference_at_vertices));

  std::cout << "\tGenerating coarse mesh... \n" << std::endl;
  GridGenerator::hyper_cube(triangulation0);

  std::cout << "\tCopy the coarse mesh triangulation ... \n" << std::endl;
  parallel::distributed::Triangulation<3> triangulation1(
    mpi_communicator,
    Triangulation<3>::MeshSmoothing(
      Triangulation<3>::limit_level_difference_at_vertices));
  triangulation1.copy_triangulation(triangulation0);

  // // Refine the coarse mesh
  // std::cout << "\tRefining ... \n" << std::endl;
  // triangulation1.refine_global(3);

  // Write out the mesh, once via Triangulation::save() and once as a .vtu
  // file with the serialized triangulation embedded.
  std::cout << "\tWrite to triangulation file ... \n" << std::endl;
  const std::string filename = "triangulation_file.tria";
  triangulation1.save(filename);

  const std::string filename_vtu = "mesh.vtu";
  std::ofstream out1(filename_vtu);
  GridOut           grid_out;
  GridOutFlags::Vtu flag;
  flag.serialize_triangulation = true;
  grid_out.set_flags(flag);
  grid_out.write_vtu(triangulation1, out1);

  // Read the .vtu file back into a sequential triangulation via GridIn.
  Triangulation<3> triangulation_gridin_seq;
  GridIn<3>        grid_in_seq;
  grid_in_seq.attach_triangulation(triangulation_gridin_seq);
  {
    std::ifstream file(filename_vtu);
    if (!file.is_open())
      {
        std::cerr << "Error opening file: " << filename_vtu << std::endl;
        return 1;
      }
    grid_in_seq.read_vtu(file);
    std::cout << "\tTriangulation gridin reading vtu file with cells: "
              << triangulation_gridin_seq.n_global_active_cells() << std::endl;
    file.close();
  }

  // Read the same .vtu file back into a distributed triangulation: this is
  // the case that reports 0 global active cells.
  parallel::distributed::Triangulation<3> triangulation_gridin(mpi_communicator);
  GridIn<3> grid_in_distri;
  grid_in_distri.attach_triangulation(triangulation_gridin);
  {
    std::ifstream file1(filename_vtu);
    if (!file1.is_open())
      {
        std::cerr << "Error opening file: " << filename_vtu << std::endl;
        return 1;
      }
    grid_in_distri.read_vtu(file1);
    std::cout << "\tDistributed triangulation gridin reading vtu file with cells: "
              << triangulation_gridin.n_global_active_cells() << std::endl;
    file1.close();
  }

  std::cout << "Done." << std::endl;
  return 0;
}
