As Dima says, and as the issue he mentions supports, the current cypari2 
code, which attempts to keep Pari Gens on the Pari stack as much as 
possible, is badly broken.  There are many situations where a Python Gen, 
or the Pari memory it holds, cannot be reclaimed after the object is no 
longer referenced.  I am sure that is a big part of this problem, but I 
don't think it is the whole story.
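
For what it's worth, here is a rough sketch of the same loop (shown further 
down) driven through cypari2 alone, along the lines Dima suggests below.  I 
am assuming the usual cypari2 interface, i.e. a Pari() instance with its 
auto-generated ellinit and ellrootno bindings; treat it as a sketch rather 
than a tested reproduction.

# Sketch: run the loop against cypari2 directly, so any growth can be
# attributed to cypari2 itself rather than a wrapper on top of it.
from cypari2 import Pari

pari = Pari()

def test(N):
    for a in range(1, N):
        e = pari.ellinit([a, 0])   # Gen for the curve y^2 = x^3 + a*x
        m = pari.ellrootno(e)      # root number of that curve

test(10**5)   # repeat with larger N and watch the process RSS grow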

CyPari has returned to the older design which moves the Pari Gen wrapped by 
a Python Gen to the Pari heap when the Python object is created.  This 
eliminates the leaks reported in cypari2 issue #112.  But even with that 
design, I am seeing 12 GB of memory (including several gigabytes of swap) 
in use after I do the following in IPython:

In [1]: from cypari import *
In [2]: def test(N):
   ...:     for a in range(1, N):
   ...:         e = pari.ellinit([a, 0])
   ...:         m = pari.ellrootno(e)
In [3]: %time test(10**5)
CPU times: user 699 ms, sys: 38.3 ms, total: 737 ms
Wall time: 757 ms
In [4]: %time test(10**6)
CPU times: user 7.47 s, sys: 392 ms, total: 7.86 s
Wall time: 7.93 s
In [5]: %time test(10**7)
CPU times: user 1min 41s, sys: 6.62 s, total: 1min 47s
Wall time: 1min 49s
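
For anyone who wants to watch the growth directly, here is a rough sketch 
of the same test with the peak resident set size printed after each run, 
using the standard-library resource module (Unix only; ru_maxrss is 
reported in kilobytes on Linux and in bytes on macOS).  This is just my 
instrumentation, not part of cypari.

import resource
from cypari import *

def test(N):
    for a in range(1, N):
        e = pari.ellinit([a, 0])
        m = pari.ellrootno(e)
    # Peak resident set size so far (KB on Linux, bytes on macOS).
    print(N, resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

for k in (5, 6, 7):
    test(10**k)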

- Marc

On Thursday, August 29, 2024 at 1:19:05 PM UTC-5 dim...@gmail.com wrote:

> It would be good to reproduce this with cypari2 alone.
> cypari2 is known to have similar kind (?) of problems:
> https://github.com/sagemath/cypari2/issues/112
>
>
> On Thu, Aug 29, 2024 at 6:47 PM Nils Bruin <nbr...@sfu.ca> wrote:
> >
> > On Thursday 29 August 2024 at 09:51:04 UTC-7 Georgi Guninski wrote:
> >
> > I observe that the following does not leak:
> >
> > E=EllipticCurve([5*13,0]) #no leak
> > rn=E.root_number()
> >
> >
> > How do you know that doesn't leak? Do you mean that repeated execution 
> > of those commands in the same session does not swell memory use?
> >
> >
> > The size of the leak is suspiciously close to a power of two.
> >
> >
> > I don't think you can draw conclusions from that. Processes generally 
> > request memory in large blocks from the operating system, to amortize 
> > the high overhead of the operation. It may even be the case that 128 MB 
> > is the chunk size involved here! The memory allocated to a process by 
> > the operating system isn't a fully accurate measure of memory use in the 
> > process either: a heap manager can decide it's cheaper to request some 
> > new pages from the operating system than to reorganize its heap and 
> > reuse the fragmented space on it. I think for this loop, memory 
> > allocation consistently swells with repeated execution, so there 
> > probably really is something leaking. But given that it's not in 
> > GC-tracked objects on the Python heap, one would probably need valgrind 
> > information or a keen look at the code involved to locate where it's 
> > coming from.
