Re: Why I fail so bad to check for memory leak with this code?

2022-07-22 Thread Barry



> On 21 Jul 2022, at 21:54, Marco Sulla  wrote:
> On Thu, 21 Jul 2022 at 22:28, MRAB  wrote:
>> 
>> It's something to do with pickling iterators because it still occurs
>> when I reduce func_76 to:
>> 
>> @trace
>> def func_76():
>>     pickle.dumps(iter([]))
> 
> That's very strange. I found a bunch of real memory leaks with this
> decorator, so it seems reliable. It works correctly with pickle and with
> iter, but not when pickling iterators.

With code as complex as Python's, there will be memory allocations that are
not directly related to the Python code you test.

To put it another way, there is noise in your memory allocation signal.

Usually the signal of a memory leak is very clear, as you noticed.

For rare leaks I would use a tool like valgrind.
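
As a rough illustration (my own sketch, not part of the original decorator):
one way to reduce that noise is to run a few warm-up calls before taking the
baseline tracemalloc snapshot, so one-off allocations such as caches and
interned objects are already in place. The measure() helper below is
hypothetical.

import gc
import tracemalloc

def measure(func, iterations=100, warmup=5):
    # Warm-up calls so caches, interned strings and other one-off
    # allocations happen before tracing starts.
    for _ in range(warmup):
        func()

    tracemalloc.start()
    before = tracemalloc.take_snapshot()

    for _ in range(iterations):
        func()

    gc.collect()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()

    # Per-line allocation differences; steady growth across many
    # iterations is the leak signal, the rest is noise.
    return after.compare_to(before, 'lineno')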

Barry

-- 
https://mail.python.org/mailman/listinfo/python-list


OT: Computer vision

2022-07-22 Thread GB
I'm looking for some help getting started with a computer vision 
project. Can anyone here either help or point me in the direction of a 
better NG/forum, please?



--
https://mail.python.org/mailman/listinfo/python-list


Re: Why I fail so bad to check for memory leak with this code?

2022-07-22 Thread Marco Sulla
On Fri, 22 Jul 2022 at 09:00, Barry  wrote:
> With code as complex as Python's, there will be memory allocations that are
> not directly related to the Python code you test.
>
> To put it another way, there is noise in your memory allocation signal.
>
> Usually the signal of a memory leak is very clear, as you noticed.
>
> For rare leaks I would use a tool like valgrind.

Thank you all, but I needed a simple decorator to automate the memory leak
(and segfault) tests. I think this version is good enough; I hope it can be
useful to someone:

import gc
import tracemalloc


def trace(iterations=100):
    def decorator(func):
        def wrapper():
            print(
                f"Loops: {iterations} - Evaluating: {func.__name__}",
                flush=True
            )

            tracemalloc.start()

            # Baseline snapshot, restricted to allocations made in this file
            snapshot1 = tracemalloc.take_snapshot().filter_traces(
                (tracemalloc.Filter(True, __file__), )
            )

            for i in range(iterations):
                func()

            gc.collect()

            snapshot2 = tracemalloc.take_snapshot().filter_traces(
                (tracemalloc.Filter(True, __file__), )
            )

            top_stats = snapshot2.compare_to(snapshot1, 'lineno')
            tracemalloc.stop()

            # Flag any line whose allocation count grew by more than 1% of
            # the number of iterations; treat that as a leak.
            for stat in top_stats:
                if stat.count_diff * 100 > iterations:
                    raise ValueError(f"stat: {stat}")

        return wrapper

    return decorator


If the decorated function fails, you can try raising the iterations
parameter. I found that in my case I sometimes needed a value of 200 or 300.
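
For reference, a hypothetical usage sketch (func_76 just mirrors the example
quoted earlier in the thread, and the trace decorator above must be in scope):

import pickle

@trace(iterations=200)
def func_76():
    pickle.dumps(iter([]))

# Raises ValueError if a traced line's allocation count grows by more
# than 1% of the iteration count.
func_76()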
-- 
https://mail.python.org/mailman/listinfo/python-list