Hi Matthew,

We have our own format that uses MPI I/O for the initial read; after that we would like to do almost exactly what ex47.c does (https://petsc.org/main/src/dm/impls/plex/tests/ex47.c.html), except for the very beginning of the program, which will read (with MPI I/O) from disk.  Then, still in parallel:

1- Populate a DMPlex with multiple element types (with a variant of DMPlexBuildFromCellListParallel? Do you have an example of this?)

2- Call partitioning (DMPlexDistribute)

3- Compute overlap (DMPlexDistributeOverlap)

4- Also compute the corresponding mapping between original element numbers and partitioned+overlapped elements (DMPlexNaturalToGlobalBegin/End)

The main point here is overlap computation.  And the big challenge is that we can never rely on any single node reading the whole mesh: every node holds only a small part of it at the beginning, and from there we want parallel partitioning and overlap computation (a rough sketch of these steps follows below)...
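To make the steps concrete, here is a minimal sketch of what we have working today for a single element type.  The per-rank cell/vertex data are placeholders standing in for what our MPI I/O read produces, and the natural-ordering part is only indicated in a comment since, as in ex47.c, it needs a PetscSection describing a field layout:

#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM          dm, dmDist, dmOverlap;
  PetscSF     sfVert;
  PetscMPIInt rank;
  PetscInt    cells[4];
  PetscReal   coords[12] = {0., 0., 0.,  1., 0., 0.,  0., 1., 0.,  0., 0., 1.};

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  /* Placeholder local data: one tetrahedron per rank, disconnected from the
     others; in the real code this comes from the MPI I/O read. */
  for (PetscInt v = 0; v < 4; ++v) cells[v] = 4 * rank + v;
  for (PetscInt v = 0; v < 4; ++v) coords[3 * v] += 2. * rank;

  /* 1- build a distributed Plex from the per-rank cell list
        (single element type only: numCorners is fixed at 4 here) */
  PetscCall(DMPlexCreateFromCellListParallelPetsc(PETSC_COMM_WORLD, 3, 1, 4, PETSC_DECIDE,
                                                  4, PETSC_TRUE, cells, 3, coords, &sfVert, NULL, &dm));
  PetscCall(PetscSFDestroy(&sfVert));
  PetscCall(DMSetUseNatural(dm, PETSC_TRUE)); /* remember the original ordering */

  /* 2- partition with the configured PetscPartitioner */
  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
  if (dmDist) { PetscCall(DMDestroy(&dm)); dm = dmDist; }

  /* 3- add one layer of overlap cells */
  PetscCall(DMPlexDistributeOverlap(dm, 1, NULL, &dmOverlap));
  if (dmOverlap) { PetscCall(DMDestroy(&dm)); dm = dmOverlap; }

  /* 4- once a PetscSection is attached (as in ex47.c), vectors in the original
        (natural) ordering can be mapped with
        DMPlexNaturalToGlobalBegin/End(dm, natVec, globVec). */
  PetscCall(DMViewFromOptions(dm, NULL, "-dm_view"));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}

The partitioner used by DMPlexDistribute can then be selected at run time (ParMETIS, PTScotch, ...) through the usual PetscPartitioner options.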

It is now working fine for a mesh with a single element type, but an ex47.c modified into an example with mixed element types would achieve exactly what we would like to do!

Thanks,

Eric


On 2024-07-31 22:09, Matthew Knepley wrote:
On Wed, Jul 31, 2024 at 4:16 PM Eric Chamberland <[email protected]> wrote:

    Hi Vaclav,

    Okay, I am coming back with this question after some time... ;)

    I am just wondering if it is now possible to call
    DMPlexBuildFromCellListParallel, or something else, to build a mesh
    that combines different element types into a single DMPlex (in
    parallel, of course)?

1) Meshes with different cell types are fully functional, and some applications have been using them for a while now.

2) The Firedrake I/O methods support these hybrid meshes.

3) You can, for example, read in a GMsh or ExodusII file with different cell types.
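For reference, a file-based read of such a mesh can be driven entirely by options in recent PETSc versions; the file name and options below are placeholders, not a prescription:

#include <petscdmplex.h>

/* Run with e.g.:  ./reader -dm_plex_filename mesh.msh -dm_distribute -dm_view
   "mesh.msh" is a placeholder; Gmsh and ExodusII files are both handled
   (ExodusII needs PETSc configured with that library). */
int main(int argc, char **argv)
{
  DM dm;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMCreate(PETSC_COMM_WORLD, &dm));
  PetscCall(DMSetType(dm, DMPLEX));
  PetscCall(DMSetFromOptions(dm)); /* reads, interpolates, and distributes per the options */
  PetscCall(DMViewFromOptions(dm, NULL, "-dm_view"));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}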

However, there is no direct interface like DMPlexBuildFromCellListParallel(). If you plan on creating meshes by hand, I can build that for you. No one so far has wanted that. Rather they want to read in a mesh in some format, or alter a base mesh by inserting other cell types.
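If it helps to see that mixed cell types are not a problem for Plex itself, here is a rough serial sketch that builds one triangle and one quadrilateral by hand through the cone API (all point numbers are invented for the example, and no coordinates are attached):

#include <petscdmplex.h>

/* Points 0-1 are the cells, points 2-7 are the vertices. */
int main(int argc, char **argv)
{
  DM             dm;
  const PetscInt coneTri[3]  = {2, 3, 4};    /* triangle: vertices 2,3,4    */
  const PetscInt coneQuad[4] = {4, 3, 5, 6}; /* quad sharing vertices 3,4   */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(DMPlexCreate(PETSC_COMM_SELF, &dm));
  PetscCall(DMSetDimension(dm, 2));
  PetscCall(DMPlexSetChart(dm, 0, 8));    /* 2 cells + 6 vertices           */
  PetscCall(DMPlexSetConeSize(dm, 0, 3)); /* cell 0 is a triangle           */
  PetscCall(DMPlexSetConeSize(dm, 1, 4)); /* cell 1 is a quadrilateral      */
  PetscCall(DMSetUp(dm));                 /* allocate cone storage          */
  PetscCall(DMPlexSetCone(dm, 0, coneTri));
  PetscCall(DMPlexSetCone(dm, 1, coneQuad));
  PetscCall(DMPlexSymmetrize(dm));        /* build supports from cones      */
  PetscCall(DMPlexStratify(dm));          /* compute heights/depths         */
  PetscCall(DMViewFromOptions(dm, NULL, "-dm_view"));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}

If edges are needed afterwards, DMPlexInterpolate() generates them from this cell-vertex mesh.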

So, what is the motivating use case?

  Thanks,

     Matt

    Thanks,

    Eric

    On 2021-09-23 11:30, Hapla Vaclav wrote:
    Note there will soon be a generalization of
    DMPlexBuildFromCellListParallel() around, as a side product of
    our current collaborative efforts with the Firedrake guys. It will
    take a PetscSection instead of relying on the block size [which is
    indeed always constant for the given dataset]. Stay tuned.

https://gitlab.com/petsc/petsc/-/merge_requests/4350
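    Just to illustrate the idea from the user side (this is only a guess, not the merge request's actual interface): a PetscSection over the local cells, with one "dof" per cell vertex, is enough to describe variable connectivity sizes:

    #include <petscsection.h>

    /* Illustration only: a PetscSection giving, per local cell, the number of
       vertices it contributes to the flat connectivity array (cell 0 is a
       triangle, cell 1 is a quadrilateral; values are invented). */
    static PetscErrorCode BuildConnectivitySection(PetscSection *section)
    {
      const PetscInt numLocalCells  = 2;
      const PetscInt verticesPer[2] = {3, 4};

      PetscFunctionBeginUser;
      PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, section));
      PetscCall(PetscSectionSetChart(*section, 0, numLocalCells));
      for (PetscInt c = 0; c < numLocalCells; ++c) PetscCall(PetscSectionSetDof(*section, c, verticesPer[c]));
      PetscCall(PetscSectionSetUp(*section));
      /* PetscSectionGetOffset(*section, c, &off) then points to the start of
         cell c in the concatenated vertex list. */
      PetscFunctionReturn(PETSC_SUCCESS);
    }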
    Thanks,

    Vaclav

    On 23 Sep 2021, at 16:53, Eric Chamberland
    <[email protected]> wrote:

    Hi,

    oh, that's great news!

    In our case we have our home-made file format, invariant to the
    number of processes (thanks to MPI_File_set_view), that uses
    collective, asynchronous native MPI I/O calls for unstructured
    hybrid meshes and fields.

    So our need is not for reading meshes, but only for filling a
    hybrid DMPlex with DMPlexBuildFromCellListParallel (or something
    else to come?)... to exploit PETSc partitioners and parallel
    overlap computation (a sketch of the kind of read we do is below)...
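    To give an idea of the kind of read we do (the names, record layout and the blocking collective call here are simplifications of our actual asynchronous code):

    #include <mpi.h>
    #include <stdlib.h>

    /* Sketch of a process-count-invariant collective read: each rank reads its
       own contiguous slice of a global connectivity record.  File name, record
       offset, and sizes are placeholders. */
    static int ReadLocalConnectivity(MPI_Comm comm, const char *filename,
                                     MPI_Offset recordStart, long long numGlobalEntries,
                                     int **localEntries, long long *numLocalEntries)
    {
      MPI_File  fh;
      int       rank, size;
      long long begin, end;

      MPI_Comm_rank(comm, &rank);
      MPI_Comm_size(comm, &size);
      /* simple block distribution of the global record over the ranks */
      begin = (numGlobalEntries * rank)       / size;
      end   = (numGlobalEntries * (rank + 1)) / size;
      *numLocalEntries = end - begin;
      *localEntries    = malloc((size_t)*numLocalEntries * sizeof(int));

      MPI_File_open(comm, filename, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
      /* the view makes the offsets independent of the number of processes */
      MPI_File_set_view(fh, recordStart + (MPI_Offset)begin * sizeof(int),
                        MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
      MPI_File_read_all(fh, *localEntries, (int)*numLocalEntries, MPI_INT, MPI_STATUS_IGNORE);
      MPI_File_close(&fh);
      return 0;
    }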

    Thanks for the follow-up! :)

    Eric


    On 2021-09-22 7:20 a.m., Matthew Knepley wrote:
    On Wed, Sep 22, 2021 at 3:04 AM Karin&NiKo
    <[email protected]> wrote:

        Dear Matthew,

        This is great news!
        For my part, I would be mostly interested in the parallel
        input interface. Sorry for that...
        Indeed, in our application, we already have a parallel mesh
        data structure that supports hybrid meshes with parallel
        I/O and distribution (based on the MED format). We would
        like to use a DMPlex to perform parallel mesh adaptation.
        As a matter of fact, all our meshes are in the MED format.
        We could also contribute to extending the DMPlex interface
        to MED (if you consider it could be useful).


    An MED interface does exist. I stopped using it for two reasons:

      1) The code was not portable and the build was failing on
    different architectures. I had to manually fix it.

      2) The boundary markers did not provide global information,
    so that parallel reading was much harder.

    Feel free to update my MED reader to a better design.

      Thanks,

         Matt

        Best regards,
        Nicolas


        On Tue, Sep 21, 2021 at 9:56 PM Matthew Knepley
        <[email protected]> wrote:

            On Tue, Sep 21, 2021 at 10:31 AM Karin&NiKo
            <[email protected]> wrote:

                Dear Eric, dear Matthew,

                I share Eric's desire to be able to manipulate
                meshes composed of different types of elements in a
                PETSc DMPlex.
                Since this discussion, has there been anything new on
                this feature for the DMPlex object, or am I missing
                something?


            Thanks for finding this!

            Okay, I did a rewrite of the Plex internals this
            summer. It should now be possible to interpolate a mesh
            with any number of cell types, partition it, redistribute
            it, and perform many other manipulations.

            You can read in some formats that support
            hybrid meshes. If you let me know how you plan to read
            it in, we can make it work.
            Right now, I don't want to make input interfaces that
            no one will ever use. We have a project, joint with
            Firedrake, to finalize
            parallel I/O. This will make parallel reading and
            writing for checkpointing possible, supporting
            topology, geometry, fields and
            layouts, for many meshes in one HDF5 file. I think we
            will finish in November.
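            From the user side I would guess the checkpointing ends up
            looking roughly like this (HDF5-enabled PETSc assumed; the
            on-disk layout and any required viewer format options are
            still yours to finalize, so treat this as the general shape
            only):

            #include <petscdmplex.h>
            #include <petscviewerhdf5.h>

            /* Rough sketch only: write a Plex to HDF5 and read it back
               in parallel.  "mesh.h5" is a placeholder name. */
            static PetscErrorCode CheckpointAndReload(DM dm, DM *dmLoaded)
            {
              MPI_Comm    comm = PetscObjectComm((PetscObject)dm);
              PetscViewer viewer;

              PetscFunctionBeginUser;
              PetscCall(PetscViewerHDF5Open(comm, "mesh.h5", FILE_MODE_WRITE, &viewer));
              PetscCall(DMView(dm, viewer));        /* topology + geometry   */
              PetscCall(PetscViewerDestroy(&viewer));

              PetscCall(PetscViewerHDF5Open(comm, "mesh.h5", FILE_MODE_READ, &viewer));
              PetscCall(DMCreate(comm, dmLoaded));
              PetscCall(DMSetType(*dmLoaded, DMPLEX));
              PetscCall(DMLoad(*dmLoaded, viewer)); /* read back, in parallel */
              PetscCall(PetscViewerDestroy(&viewer));
              PetscFunctionReturn(PETSC_SUCCESS);
            }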

              Thanks,

                 Matt

                Thanks,
                Nicolas

                On Wed, Jul 21, 2021 at 4:25 AM Eric Chamberland
                <[email protected]> wrote:

                    Hi,

                    On 2021-07-14 3:14 p.m., Matthew Knepley wrote:
                    On Wed, Jul 14, 2021 at 1:25 PM Eric
                    Chamberland <[email protected]>
                    wrote:

                        Hi,

                        while playing with
                        DMPlexBuildFromCellListParallel, I noticed
                        we have to specify "numCorners", which is a
                        fixed value that gives a fixed number of
                        nodes for the whole series of elements.

                        How can I then add, for example, triangles
                        and quadrangles into a DMPlex?


                    You can't with that function. It would be much
                    more complicated if you could, and I am not sure
                    it is worth it for that function. The reason
                    is that you would need index information to
                    offset into the connectivity list, and that
                    would need to be replicated to some extent so
                    that all processes know what the others are
                    doing. Possible, but complicated.

                    Maybe I can suggest something for what you are
                    trying to do?

                    Yes: we are trying to partition our parallel
                    mesh with PETSc functions. The mesh has been
                    read in parallel, so each process owns a part of
                    it, but we have to manage mixed element types.

                    When we directly use ParMETIS_V3_PartMeshKway,
                    we give it two arrays to describe the elements,
                    which allows mixed element types (sketched below).
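                    To make the "two arrays" concrete, this is the
                    kind of CSR-like description ParMETIS takes
                    (tiny invented local mesh, placeholder numbers):

                    #include <parmetis.h>

                    /* cell c uses eind[eptr[c] .. eptr[c+1]-1]      */
                    idx_t eptr[] = {0, 3, 7};
                    idx_t eind[] = {0, 1, 2,     /* cell 0: triangle */
                                    1, 3, 4, 2}; /* cell 1: quad     */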

                    So, how would I read my mixed mesh in parallel
                    and give it to a PETSc DMPlex, so that I can use
                    a PetscPartitioner with DMPlexDistribute?

                    A second goal we have is to use PETSc to
                    compute the overlap, which is something I can't
                    find in ParMETIS (or in any other partitioning
                    library?).

                    Thanks,

                    Eric



                      Thanks,

                    Matt

                        Thanks,

                        Eric

-- Eric Chamberland, ing., M. Ing
                        Professionnel de recherche
                        GIREF/Université Laval
                        (418) 656-2131 poste 41 22 42



-- What most experimenters take for granted
                    before they begin their experiments is
                    infinitely more interesting than any results
                    to which their experiments lead.
                    -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

-- Eric Chamberland, ing., M. Ing
                    Professionnel de recherche
                    GIREF/Université Laval
                    (418) 656-2131 poste 41 22 42



-- What most experimenters take for granted before they
            begin their experiments is infinitely more interesting
            than any results to which their experiments lead.
            -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/



-- What most experimenters take for granted before they begin
    their experiments is infinitely more interesting than any
    results to which their experiments lead.
    -- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-- Eric Chamberland, ing., M. Ing
    Professionnel de recherche
    GIREF/Université Laval
    (418) 656-2131 poste 41 22 42




--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

--
Eric Chamberland, ing., M. Ing
Professionnel de recherche
GIREF/Université Laval
(418) 656-2131 poste 41 22 42
