On Tue, Jul 23, 2024 at 8:02 PM Matthew Thomas <[email protected]>
wrote:
> Hello Matt,
>
> I have attached the -mat_view output for 8 and 40 processors.
>
> I am unsure what is meant by the matrix communicator and the partitioning.
> I am using the default behaviour in every case. How can I find this
> information?
>
This shows that the matrix is taking the same amount of memory on 8 and 40
procs, so that is not your problem. It is also a very small amount of memory:

  100K rows x 3 nz/row x 8 bytes/nz = 2.4 MB

plus roughly 50% overhead for indexing, so something under 4 MB. I am not
sure what is taking up the rest of the memory, but from the log you included
I do not think it is PETSc.
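If you want to see what PETSc itself attributes to the matrix, a minimal
sketch (assuming your assembled matrix is named A; the memory field is not
filled in by every matrix type, so treat a zero there as "not reported") is:

  MatInfo        info;
  PetscLogDouble rss;

  PetscCall(MatGetInfo(A, MAT_GLOBAL_SUM, &info)); /* counts summed over all ranks */
  PetscCall(PetscMemoryGetCurrentUsage(&rss));     /* resident set size of this process */
  PetscCall(PetscPrintf(PETSC_COMM_WORLD, "mat memory %g bytes, nz used %g, rank 0 rss %g bytes\n",
                        (double)info.memory, (double)info.nz_used, (double)rss));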
Thanks,
Matt
> I have attached the log view as well if that helps.
>
> Thanks,
> Matt
>
>
>
>
> On 23 Jul 2024, at 9:24 PM, Matthew Knepley <[email protected]> wrote:
>
> Also, you could run with
>
> -mat_view ::ascii_info_detail
>
> and send the output for both cases. The storage of matrix values is not
> redundant, so something else is
> going on. First, what communicator do you use for the matrix, and what
> partitioning?
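> If you are just using the defaults, that corresponds to a sketch like this,
> where the communicator is PETSC_COMM_WORLD and PETSc chooses a contiguous
> row partitioning (n, rstart, rend are placeholder names):
>
>   Mat      A;
>   PetscInt n = 100000, rstart, rend;
>
>   PetscCall(MatCreate(PETSC_COMM_WORLD, &A));                  /* communicator: all ranks */
>   PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n)); /* PETSc splits the rows */
>   PetscCall(MatSetFromOptions(A));
>   PetscCall(MatSetUp(A));
>   PetscCall(MatGetOwnershipRange(A, &rstart, &rend));          /* rows [rstart, rend) are local */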
>
> Thanks,
>
> Matt
>
> On Mon, Jul 22, 2024 at 10:27 PM Barry Smith <[email protected]> wrote:
>
> Send the code.
>
> On Jul 22, 2024, at 9:18 PM, Matthew Thomas via petsc-users <
> [email protected]> wrote:
>
>
> Hello,
>
> I am using PETSc and SLEPc to solve an eigenvalue problem for sparse
> matrices. When I run my code with double the number of processors, the
> memory usage also doubles.
>
> I am able to reproduce this behaviour with ex1 of SLEPc's hands-on exercises.
>
> The issue occurs in PETSc, not in SLEPc, as it persists when I remove the
> solve step and just create and assemble the PETSc matrix (a minimal sketch
> of the matrix-only version follows below).
>
> With n=100000, this uses ~1 GB with 8 processors but ~5 GB with 40 processors.
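>
> Roughly, the matrix-only version is (a sketch modelled on the tridiagonal
> 1-D Laplacian that ex1 assembles, not my exact code):
>
>   #include <petscmat.h>
>
>   int main(int argc, char **argv)
>   {
>     Mat      A;
>     PetscInt n = 100000, i, rstart, rend;
>
>     PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
>     PetscCall(PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL));
>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
>     PetscCall(MatSetFromOptions(A));
>     PetscCall(MatSetUp(A));
>     PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
>     for (i = rstart; i < rend; i++) {              /* 3 nonzeros per row */
>       if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
>       if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
>       PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
>     }
>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>     PetscCall(MatDestroy(&A));
>     PetscCall(PetscFinalize());
>     return 0;
>   }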
>
> This was done with PETSc 3.21.3 on Linux, compiled with the Intel compilers and Intel MPI.
>
> Is this the expected behaviour? If not, how can I debug this?
>
>
> Thanks,
> Matt
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/