> On Jun 29, 2018, at 9:33 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
> 
> 
> 
>> On Jun 22, 2018, at 17:47, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>> 
>> 
>> 
>>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet <pierre.joli...@enseeiht.fr> 
>>> wrote:
>>> 
>>> Hello,
>>> I’m solving a system using a MATSHELL and PCGAMG.
>>> The MPIAIJ Mat I’m giving to GAMG has a specific structure (inherited from 
>>> the MATSHELL) that I’d like to exploit during the solution phase, when the 
>>> smoother on the finest level is doing MatMults.
>>> 
>>> Is there some way to:
>>> 1) decouple in -log_view the time spent in the MATSHELL MatMult and in the 
>>> smoothers' MatMults
>> 
>>  You can register a new event and then inside your MATSHELL MatMult() call 
>> PetscLogEventBegin/End on your new event.
>> 
>>   Note that the MatMult() line will still contain the time for your MatShell 
>> mult, so you will need to subtract it off to get the time for your non-shell 
>> MatMults.
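
   For example, a minimal sketch (the event and function names here are
illustrative, not from any PETSc example):

    static PetscLogEvent USER_ShellMult;  /* hypothetical event handle */

    static PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
    {
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = PetscLogEventBegin(USER_ShellMult,0,0,0,0);CHKERRQ(ierr);
      /* ... the actual shell matrix-vector product goes here ... */
      ierr = PetscLogEventEnd(USER_ShellMult,0,0,0,0);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    /* once, during setup, before the first solve */
    ierr = PetscLogEventRegister("ShellMatMult",MAT_CLASSID,&USER_ShellMult);CHKERRQ(ierr);

   The "ShellMatMult" line in -log_view then reports only the time spent inside
the shell product, which you subtract from the MatMult line as described above.
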
> 
> In PERMON, we sometimes have a quite complicated hierarchy of wrapped matrices 
> and want to measure MatMult{,Transpose,Add,TransposeAdd} separately for 
> particular ones. Think e.g. of an additive MATCOMPOSITE wrapping a 
> multiplicative MATCOMPOSITE wrapping a MATTRANSPOSE wrapping a MATAIJ. You 
> want to measure this MATAIJ instance's MatMult separately, but you surely 
> don't want to rewrite the implementation of MatMult_Transpose or force 
> yourself to use MATSHELL just to hang the events on MatMult*.
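
   For concreteness, such a hierarchy could be built along these lines (a
sketch; Aij, B, D are assumed assembled elsewhere and comm is their
communicator):

    Mat inner[2], outer[2], At, Cmult, Cadd;

    ierr = MatCreateTranspose(Aij,&At);CHKERRQ(ierr);             /* MATTRANSPOSE wrapping the MATAIJ */
    inner[0] = At;  inner[1] = B;
    ierr = MatCreateComposite(comm,2,inner,&Cmult);CHKERRQ(ierr);
    ierr = MatCompositeSetType(Cmult,MAT_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr);
    outer[0] = Cmult;  outer[1] = D;
    ierr = MatCreateComposite(comm,2,outer,&Cadd);CHKERRQ(ierr);  /* additive is the default */
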
> 
> We used to have a special wrapper type that just added a prefix to the events 
> for the given object, but this is not nice. What about adding functionality to 
> PetscLogEventBegin/End that would distinguish events based on the first 
> PetscObject's name or options prefix? This would of course be optional, so as 
> not to break anyone relying on the current behavior - e.g. enabled under 
> something like -log_view_by_name. To me it's quite an elegant solution that 
> works for any PetscObject and any event.

   This could get ugly real fast: for vector operations, for example, there may 
be dozens of named vectors, and each one would get its own logging? You'd have 
to make sure that only the objects you care about get named; is that possible?
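
   The naming itself is one call per object, e.g. (with an illustrative name)

    ierr = PetscObjectSetName((PetscObject)Aij,"inner_aij");CHKERRQ(ierr);

so the real work would be ensuring that only that one object is named.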

    I don't know if there is a good solution within the PETSc logging 
infrastructure to get what you want, but maybe what you propose is the best 
possible.

   Barry

> 
> I can do that if I get some upvotes.
> 
> Vaclav
> 
>> 
>>> 2) hardwire a specific MatMult implementation for the smoother on the 
>>> finest level
>> 
>>  In the latest release you can use MatSetOperation() to override the normal 
>> matrix-vector product with anything else you want.
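
   A sketch of how that could look for the finest-level smoother operator,
assuming pc is the PCGAMG and MyFastMult is a hypothetical user function with
the usual MatMult signature PetscErrorCode MyFastMult(Mat,Vec,Vec):

    KSP      smoother;
    Mat      Amat, Pmat;
    PetscInt nlevels;

    ierr = PCMGGetLevels(pc,&nlevels);CHKERRQ(ierr);
    ierr = PCMGGetSmoother(pc,nlevels-1,&smoother);CHKERRQ(ierr);  /* finest level */
    ierr = KSPGetOperators(smoother,&Amat,&Pmat);CHKERRQ(ierr);
    ierr = MatSetOperation(Pmat,MATOP_MULT,(void (*)(void))MyFastMult);CHKERRQ(ierr);
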
>> 
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>>> PS: here is what I have right now,
>>> MatMult              118 1.0 1.0740e+02 1.6 1.04e+13 1.6 1.7e+06 6.1e+05 
>>> 0.0e+00 47100 90 98  0  47100 90 98  0 81953703
>>> […]
>>> PCSetUp                2 1.0 8.6513e+00 1.0 1.01e+09 1.7 2.6e+05 4.0e+05 
>>> 1.8e+02  5  0 14 10 66   5  0 14 10 68 94598
>>> PCApply               14 1.0 8.0373e+01 1.1 9.06e+12 1.6 1.3e+06 6.0e+05 
>>> 2.1e+01 45 87 72 78  8  45 87 72 78  8 95365211 // I’m guessing a lot of 
>>> time here is being wasted in doing inefficient MatMults on the finest level 
>>> but this is only speculation
>>> 
>>> Same code with -pc_type none -ksp_max_it 13,
>>> MatMult               14 1.0 1.2936e+01 1.7 1.35e+12 1.6 2.0e+05 6.1e+05 
>>> 0.0e+00 15100 78 93  0  15100 78 93  0 88202079
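
   A quick per-call check on those numbers (plain division, nothing more):
1.2936e+01 s / 14 ≈ 0.92 s per shell MatMult here, versus
1.0740e+02 s / 118 ≈ 0.91 s per MatMult on average in the GAMG run above, so
each MatMult in the preconditioned run costs, on average, about as much as a
pure shell MatMult.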
>>> 
>>> The grid itself is rather simple (two levels, extremely aggressive 
>>> coarsening),
>>>  type is MULTIPLICATIVE, levels=2 cycles=v
>>>  KSP Object: (mg_coarse_) 1024 MPI processes
>>> linear system matrix = precond matrix:
>>>    Mat Object: 1024 MPI processes
>>>      type: mpiaij
>>>      rows=775, cols=775
>>>      total: nonzeros=1793, allocated nonzeros=1793
>>> 
>>> linear system matrix followed by preconditioner matrix:
>>> Mat Object: 1024 MPI processes
>>>  type: shell
>>>  rows=1369307136, cols=1369307136
>>> Mat Object: 1024 MPI processes
>>>  type: mpiaij
>>>  rows=1369307136, cols=1369307136
>>>  total: nonzeros=19896719360, allocated nonzeros=19896719360
