This probably shares the same bug as [1].
Calling `NumberField().class_group().order()` in a loop of size N:
# N = 10^3 leaks: 40.03 MB (40026112 bytes), pari = [7950, 1451665]
# N = 10^4 leaks: 338.49 MB (338493440 bytes), pari = [83505, 19297360]
The leak appears to be in the pari heap.
Code .sage:
#Author Georgi G
In short:
```
for A2 in range(1, 10**5):
    E = EllipticCurve([A2, 0])
    rn = E.root_number()
```
leaks nearly 128 MB of memory on Sage 10.4.
The same code in PARI runs with very little memory.
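For cross-checking reports like these, a Sage-independent way to watch peak memory is the stdlib `resource` module. This is only a sketch of the measurement pattern; note that `ru_maxrss` is reported in KiB on Linux but in bytes on macOS.

```python
import resource
import sys

def peak_rss_mib():
    """Peak resident set size of this process, in MiB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is KiB on Linux, bytes on macOS.
    scale = 1 if sys.platform == "darwin" else 1024
    return peak * scale / (1024 * 1024)

before = peak_rss_mib()
junk = [bytearray(1024) for _ in range(20_000)]  # allocate roughly 20 MiB
after = peak_rss_mib()
print(f"peak RSS grew by about {after - before:.1f} MiB")
```

Sampling this before and after the loop (and dividing by N) gives a per-iteration figure like the ones quoted above without relying on Sage's own `get_memory_usage()`.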
This is related to the following problem in algebraic geometry [1]:
Let $k,k_1,k_2$ be squarefree pairwise
The following leaks for me on Sage 9.6:
```
from sage.all import ZZ

for _ in range(10**8):
    try:
        a = ZZ(10)**(2**61)
    except:
        pass
```
In addition, it prints `gmp: overflow in mpz type` on stderr.
--
You received this message because you are subscribed to the Google Groups
"sage-devel" group.
Two users try to use 51% of the memory simultaneously.
On Fri, Jul 7, 2023 at 11:01 AM Nils Bruin wrote:
On Friday, 7 July 2023 at 07:53:22 UTC-7 Edgar Costa wrote:
I'm okay with a user using 90% of the ram, if that becomes an issue, I can
always email them or kill their process; but more often than not (until I
started to use earlyoom) the memory usage would slowly creep to 100% and
the culprit
Nils,
This is my recommendation to avoid my worst-case scenario, where someone
must go to some far-away basement and power cycle the server manually after
waiting a couple of days for the OOM killer to kick in.
A simpler test case is to replace `C = sqrt(T2)`
with `C = SR(int(2)).sqrt()`.
Both `int()` and `sqrt()` appear necessary; `sin()` doesn't leak for me.
On 2023-07-06 09:16:46, Nils Bruin wrote:
On Wednesday, 5 July 2023 at 08:29:44 UTC-7 Edgar Costa wrote:
Hi Gonzalo,
I highly recommend using https://github.com/rfjakob/earlyoom instead of
waiting for OOM to kick in.
Wouldn't setting ulimit with -m (memory) or -v (virtual memory) for the
process that is liable to exceed its memory quota
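A sketch of what the ulimit route Nils asks about looks like in practice. The 2 GiB figure is a placeholder; `-v` caps the address space in KiB on most Linux shells, and the `python3` command stands in for the real Sage job.

```shell
# Run the leaky job under a 2 GiB virtual-memory cap. When the cap is
# hit, allocations start failing and the process dies quickly, instead
# of dragging the whole server into swap. The limit applies only inside
# the subshell, so the rest of the session is unaffected.
(
  ulimit -v $((2 * 1024 * 1024))   # 2 GiB, expressed in KiB
  python3 -c "print('ok')"         # stand-in for the real Sage job
)
```

One caveat: `-v` limits address space, and some allocators reserve far more address space than they ever touch, so the cap may need to sit well above the real working set.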
Hi Gonzalo,
I highly recommend using https://github.com/rfjakob/earlyoom instead of
waiting for OOM to kick in.
Cheers,
Edgar
On Wed, Jul 5, 2023 at 11:24 AM Gonzalo Tornaria wrote:
This slowly and inexorably goes on. Computing `sqrt(T2)` leaks 32 bytes
each and every time (asymptotically).
Found by a student who, through no fault of his own, brought down our
server (we were unable to ssh in until the OOM killer triggered -- but since
the leak is slow it takes a while to trash 16G of swap
Dear all,
there seems to be a memory leak in canonical_label(...), using bliss.
Here is a test script to demonstrate the problem:
-
import os, psutil
from sage.all import *
process = psutil.Process(os.getpid())
oldmem = process.memory_info().rss
for i in range(100):
G
Dear all
[I attempted to post this a few days ago but seemingly failed, so this is a
repost; apologies if duplicate.]
The following program appears to consume all the memory on my machine:
n = 1000
for i in range(0, 100):
    _ = identity_matrix(n).change_ring(GF(101))
    print(get_memory_usage())
On Thu, Mar 18, 2021 at 8:29 PM Volker Braun wrote:
> This is presumably the same memory leak as in
> https://trac.sagemath.org/ticket/31340
Yes, and it can be fixed by simply removing the custom memory allocators in ZZ.
This is presumably the same memory leak as in
https://trac.sagemath.org/ticket/31340
On Thursday, March 18, 2021 at 7:21:14 PM UTC+1 m.derick...@gmail.com wrote:
Hi Vincent,
Thanks for testing, and good that you realized that it indeed looked fishy.
I think your extension of my code with multiple loops actually confirms
that there is a problem even after warming up:
it prints:
memory usage 30k: 1771.8046875
memory usage 40k: 1771.80859375
mean th
Sorry, your example indeed looks fishy.
On 18/03/2021 at 17:36, Vincent Delecroix wrote:
Maarten, in your example you cannot fairly compare the first
and second run. For example, R is not allocated at your first
call of "mem = get_memory_usage()". Also, Integers have a pool
which is likely not yet filled at startup. Ignoring the first
run, I do not notice any difference.
(both on s
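Vincent's point generalizes: one-time allocations (pools, caches, lazily built globals) make the first run look leaky, so warm up once and only trust later runs. A stdlib sketch of that pattern with `tracemalloc`; the two step functions are made-up stand-ins for the real workload.

```python
import gc
import tracemalloc

def net_growth(step, repeats=10_000):
    """Net Python-heap growth in bytes over `repeats` calls to `step`."""
    gc.collect()
    tracemalloc.start()
    start, _ = tracemalloc.get_traced_memory()
    for _ in range(repeats):
        step()
    gc.collect()
    end, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return end - start

_cache = {}
def cached_step():
    # One-time cost: a bounded cache fills up, then stays flat.
    _cache.setdefault(len(_cache) % 100, bytearray(50))

_sink = []
def leaky_step():
    # Genuine leak: unbounded growth on every run.
    _sink.append(bytearray(50))

net_growth(cached_step)            # warm-up run: the cache fills, looks leaky
steady = net_growth(cached_step)   # second run: close to zero
net_growth(leaky_step)             # warm-up run
leak = net_growth(leaky_step)      # still grows: a real leak
print(steady < leak)
```

A bounded cache looks identical to a leak on the first run and flat afterwards; a real leak keeps the same slope on every run.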
Hi Guys,
Thanks for the replies. I think this is enough info to know that it happens
on multiple systems and that it's not just the CoCalc-enhanced version of
Sage. I have created
https://trac.sagemath.org/ticket/31511#ticket for this.
In the meantime I found that the problem already occurs with the sr
On MacBook Air Mid 2012, with Sage 9.2,
39
0
memory usage 10k: 5.0
0
memory usage 20k: 11.0
On Thursday, March 18, 2021 at 10:01:56 AM UTC-5 David Joyner wrote:
On Thu, Mar 18, 2021 at 6:56 AM Maarten Derickx
wrote:
Hi All,
tl;dr: the bottom of this post contains example code for which I would like
results on some other systems.
I recently encountered a memory leak in the relatively innocent-looking
code:
d = 27
M = 109
for i in range(1):
    for a in srange(M):
        for r in srange(d):
This shows a leak:
i = 0
for P in Posets(8):
    if i % 1000 == 0:
        gc.collect()
        print get_memory_usage()
    i += 1
    _ = P.dimension()
By comparison, width() and height() do not seem to leak.
--
Jori Mäntysalo
Here is a bit of code:
# foo.py
def dot_prod(v, w):
    return sum(x * y for (x, y) in zip(v, w))

def get_polytope(M):
    q = MixedIntegerLinearProgram(maximization=False, solver='Coin')
    w = q.new_variable(real=True, nonnegative=True)
    for v in M.rows():
        q.add_constraint
See the report at
http://ask.sagemath.org/question/32920/memory-saturation-when-i-test-equalities-in-symbolic-ring/
Thanks Christian!
More specifically, it only concerns the NTL backend implementation. Here
is a more direct example:
sage: R = PolynomialRing(ZZ, 'x', implementation='NTL')
sage: x = R.gen()
sage: p = x**2 - 3
sage: for _ in range(1): a = p(2)
sage: resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
I also found valgrind not very helpful here, but good old
code-dissection leads me to believe that the problem might originate in
the polynomial evaluation in the _richcmp_ routine in
src/sage/rings/number_field/number_field_element.pyx.
That's because the following code shows the same leak
Hello,
Some friend just sent an e-mail to me mentioning a memory leak. Here is
a minimal example
sage: x = polygen(ZZ)
sage: K = NumberField(x**3 - 2, 'cbrt2', embedding=RR(1.2599))
sage: w = K.gen()
sage: import resource
sage: resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
180720
sage: fo
I did P6 = Posets(6).list() and then ran the following four times:
%timeit
for P in P6:
    g = P.hasse_diagram()
and got:
CPU time: 28.67 s, Wall time: 33.25 s
CPU time: 35.25 s, Wall time: 38.82 s
CPU time: 42.67 s, Wall time: 46.22 s
CPU time: 52.55 s, Wall time: 56.22 s
Anybody have an idea about what is happening?
I was dabbling in arithmetic when I noticed that the following leaks memory
at about a megabyte per second:
F.<a> = GF(11^2)
R.<x,y> = F[]
while True:
    _ = x(a, a)
Now I would have thought that evaluating polynomials is a fairly common
operation, so I'm sure somebody noticed this
Hey Sage-devel!
I've been experiencing memory issues dealing with Polyhedron objects for a
while now... Perhaps it is time to look if something could be done.
Here is a simple code reproducing the (what I believe to be a) memory leak.
First, I use the garbage collector to force the cleaning of
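Forcing a collection before measuring, as described here, is the right instinct: without it, cyclic garbage that is merely awaiting collection looks like a leak. A stdlib sketch of the difference; the `Node` class is a made-up stand-in for objects that form reference cycles.

```python
import gc

class Node:
    def __init__(self):
        self.partner = None

def make_cycle():
    # Two objects pointing at each other: reference counting alone
    # can never free them; only the cyclic garbage collector can.
    a, b = Node(), Node()
    a.partner, b.partner = b, a

gc.disable()              # let cycles pile up deterministically
for _ in range(1000):
    make_cycle()
freed = gc.collect()      # returns the number of unreachable objects found
gc.enable()
print(freed > 0)          # the apparent "leak" vanishes once the collector runs
```

If memory still climbs after an explicit `gc.collect()`, the growth is a genuine leak (or a C-level cache) rather than pending cyclic garbage.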
I was playing with more complex functions and found a memory leak. I have
been trying to figure out where it is, and here is the result so far:
def foo(L, s):
    return sum([L.mobius_function(y, L.top()) / prod(s[x] for x in
                L.interval(L.bottom(), y)) for y in s.keys()])

S.<x> = QQ[]
def bar(L, x):
Hello,
I've had some issues recently with PolyBoRi. I'm fairly certain that the
code computing an ideal contains a memory leak. Part of my code is as
follows. Here T is a data structure storing an nxnxm tensor over GF(2) as a
list of nxn matrices, and vec is a vector of length m over GF(2).
n, m
If you run the following code, you'll see your memory usage steadily
go up:
alg = 'default'
n, m = 10, 22
B = 20
while True:
    _ = matrix(QQ, n, m, [randint(-B, B)/randint(1, B) for i in
               range(n*m)]).echelon_form(alg)
The same happens with alg='padic' and alg='multimodular' (I think the
latter amounts to the
Hi,
I've discovered something kind of interesting - in Sage 5.0, 5.1.beta0,
and 5.1.beta1, there seems to be a memory leak somewhere in docstring
lookup. Try opening sage in a terminal and top in another terminal. I
see the following results:
Open Sage --> RES = 102 MB
Type "str?" --> RES =
I just found the following memory leak:
def leak():
    K.<a> = NumberField(x^2 - x - 1)
    m = get_memory_usage()
    for n in xrange(10):
        E = EllipticCurve(K, [3,4,5,6,7])
        if n % 1000 == 0:
            print get_memory_usage() - m
sage: leak()
0.0
0.5
1.0
1.0
1.5
2.0
2.0
2.5
3
Dear all,
There seems to be a memory leak in the code below, in at least versions
4.5 and 4.5.3. For example, if I call it with
sage: L = find_candidates_for_large_value(5000)
It prints something like:
current memory usage: 836.73046875
current memory usage: 836.73046875
current memory usage:
Consider the following script, which saves a p-adic matrix and then
repeatedly loads it into a list:
==
from time import time
K = Qp(13, 10)
M = Matrix(K, [[K.random_element() for j in range(200)] for k in range(200)])
M.save("thing.sobj")
L = []
for i in range(40
Hi!
I did the following:
sage: M = Matrix(GF(17),[[1,2],[1,1]])
sage: N = Matrix(GF(17),[[2,2],[2,1]])
sage: G = MatrixGroup([M,N])
sage: for g in G:
....:     print get_memory_usage()
....:
It turned out that the memory consumption slowly but constantly
increased.
Is this a known problem?
Note
I am forwarding this from sage-support because it seems like it might
be a serious problem.
-Marshall
-- Forwarded message --
From: Yann
Date: Mar 4, 5:49 pm
Subject: Why does my little program bring my department's server to
its knees?
To: sage-support
sage: R.<x> = QQ[]
sage: whi