Caching uses memory intentionally, for the purpose of speeding up computations.
Leaks use memory unintentionally, for no purpose.

I don't know where the caching happens. I only deduce that it exists because, when the same computation is run twice, the second run is faster. However, the caching does not happen in cypari2 (nor in cypari). This is what I see with cypari2 in Sage:

sage: import cypari2
sage: pari = cypari2.Pari()
sage: def test(N):
....:     for a in range(1, N):
....:         K = pari.bnfinit(pari("x^2 + %s" % a))
....:         m = K.bnf_get_no()
....:
sage: %time test(10**3)
CPU times: user 543 ms, sys: 29.4 ms, total: 572 ms
Wall time: 630 ms
sage: %time test(10**4)
CPU times: user 7.14 s, sys: 48.4 ms, total: 7.18 s
Wall time: 7.2 s
sage: %time test(10**5)
CPU times: user 2min 1s, sys: 854 ms, total: 2min 2s
Wall time: 2min 2s

That uses 190MB, not "GBs". The same computation with cypari in ipython is a bit faster, but not by much:

In [1]: from cypari import pari
In [2]: def test(N):
   ...:     for a in range(1, N):
   ...:         K = pari.bnfinit(pari("x^2 + %s" % a))
   ...:         m = K.bnf_get_no()
In [3]: %time test(10**3)
CPU times: user 410 ms, sys: 4.75 ms, total: 415 ms
Wall time: 415 ms
In [4]: %time test(10**4)
CPU times: user 6.05 s, sys: 36.1 ms, total: 6.08 s
Wall time: 6.09 s
In [5]: %time test(10**5)
CPU times: user 1min 51s, sys: 846 ms, total: 1min 52s
Wall time: 1min 53s

That computation uses 51MB. Also not "GBs".

There is no question that computations which run entirely on the PARI stack are faster than computations which move each PARI GEN to the heap and wrap it in a Python object. That is presumably why the cypari2 project tried to leave GENs on the stack for as long as possible. Unfortunately, their implementation of that idea caused huge memory leaks.

I think your complaints about the Sage NumberField class are not directly relevant to cypari or cypari2. Your observation that PARI runs faster than cypari or cypari2 reflects the design of Sage's PARI interface, which goes back to the beginning of Sage. I am sure that a better design would be welcomed, if you had one to offer. Any such interface will incur some cost, but maybe it is possible to do better.

- Marc

On Thursday, September 5, 2024 at 2:00:42 AM UTC-6 Georgi Guninski wrote:

> On Wed, Sep 4, 2024 at 11:13 PM Marc Culler wrote:
> >
> > I think that here you are seeing caching taking place, rather than a memory leak. This is what I tried:
> >
> You call this caching, I call it a leak; it can go both ways. It is natural to compute the class numbers of QQ[sqrt(-n)], and it shouldn't take GBs of RAM IMHO.
>
> Default pari is significantly faster with a 40MB stack. Is there drama with nfinit vs bnfinit?
>
> allocatemem(40*10^6);
> default(timer,1);
> {f(N)=
>     for(a=1,N,
>         K=bnfinit('x^2+a);
>         m=K.clgp.no;
>     );
> }
> ? f(10^4)
> cpu time = 5,028 ms, real time = 5,057 ms.
> ? f(10^5)
> cpu time = 1min, 14,328 ms, real time = 1min, 15,146 ms.
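
P.S. In case it is useful, here is a minimal sketch of how the peak memory of the cypari2 loop could be checked independently. This is not how the 190MB and 51MB figures above were obtained; the resource-based measurement and the 10**4 cutoff are just illustrative choices, and ru_maxrss is platform dependent (kilobytes on Linux, bytes on macOS).

# Minimal sketch: rerun the cypari2 loop and report this process's peak RSS.
# The resource module is POSIX-only; ru_maxrss is in kB on Linux, bytes on macOS.
import resource

import cypari2

pari = cypari2.Pari()

def test(N):
    # Same loop as in the timings above.
    for a in range(1, N):
        K = pari.bnfinit(pari("x^2 + %s" % a))
        m = K.bnf_get_no()

test(10**4)
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # kB on Linux
print("peak RSS: %.1f MB" % (peak_kb / 1024.0))  # divide by 1024**2 on macOS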