Brian Granger wrote:

Hi Brian,
>> Well, in the end you end up using sbrk() anyway, but I don't see what
>> is wrong with malloc itself? sage_malloc was introduced a while back
>> to make it possible to switch to a slab allocator like omalloc, to
>> see if there is any benefit from it.
>
> And that makes sense.
>
>> Absolutely not, but if you want to write exception-safe extensions,
>> just write them in exception-safe C++ using auto_ptr & friends. While
>> you might not be too concerned about performance here, the issue is
>> still debuggability. If you run standard Python under valgrind (after
>> disabling pymalloc) you will get
>
> I do think performance is important, but not at the expense of
> potential memory leaks.

Sure, we want both.

> I don't have experience running valgrind with python, but from what I
> have gleaned from others, you need to run valgrind with
> valgrind-python.supp that is in the Misc directory of the Python
> source tree. Details are here:
>
> http://svn.python.org/projects/python/trunk/Misc/README.valgrind
>
> My impression from others is that the memory problems you are seeing
> here will go away if you use this .supp file. Not sure though.

No, they will not. Suppressing still-reachable memory doesn't make the
problem go away. It is just a cosmetic solution and hides real bugs.

> I do know that the python-devs use valgrind to detect real memory
> leaks and there is _no_ way that they actually have thousands of them.

Well, those aren't technically leaks, but memory that is not properly
deallocated at exit. The amount is more or less constant independent of
how long you run a python session. The amount usually grows once you
import more modules, and it is a bug in my book if you do not properly
dealloc all memory and instead let the heap teardown at program exit
take care of it.
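To make the sage_malloc point above concrete: the whole idea of such an
indirection layer is that every allocation goes through one pair of
functions, so the backend (plain malloc, a slab allocator like omalloc,
...) can be swapped without touching call sites. Here is a minimal C
sketch of that idea; the names `sage_malloc_sketch`, `sage_free_sketch`,
and the bookkeeping counter are illustrative, not Sage's actual
implementation:

```c
#include <stdlib.h>

/* Hypothetical sketch of a sage_malloc-style indirection layer.
 * All allocations funnel through one pair of functions, so the
 * backend allocator can be swapped in exactly one place. */

static size_t live_blocks;  /* simple debug bookkeeping: allocs minus frees */

void *sage_malloc_sketch(size_t n) {
    void *p = malloc(n);    /* the backend could be replaced here */
    if (p)
        live_blocks++;
    return p;
}

void sage_free_sketch(void *p) {
    if (p) {
        free(p);
        live_blocks--;
    }
}

/* How many blocks are currently outstanding; a nonzero value at
 * shutdown points at an unbalanced alloc/free pair somewhere. */
size_t sage_live_blocks(void) {
    return live_blocks;
}
```

A side benefit of the counter is cheap leak detection: if
`sage_live_blocks()` is nonzero at shutdown, some call site forgot its
matching free, independent of whatever valgrind reports.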
What happens is that if you do not free a reference for some piece of
memory, and you do that repeatedly in your code, you end up with a lot
of stale memory chunks that all get reaped at exit as still reachable.
Choosing to ignore those chunks just because the system frees them at
Python's exit anyway is a very real problem. I have found a bug like
that for example in Singular, where the slab allocator did hide the
problem, so this is a real issue.

>> [EMAIL PROTECTED]:/scratch/mabshoff/release-cycle/sage-3.0.alpha2/local/bin$
>> valgrind --tool=memcheck --leak-resolution=high ./python
>> ==12347== Memcheck, a memory error detector.
>> ==12347== Copyright (C) 2002-2008, and GNU GPL'd, by Julian Seward et al.
>> ==12347== Using LibVEX rev 1812, a library for dynamic binary translation.
>> ==12347== Copyright (C) 2004-2008, and GNU GPL'd, by OpenWorks LLP.
>> ==12347== Using valgrind-3.4.0.SVN, a dynamic binary instrumentation framework.
>> ==12347== Copyright (C) 2000-2008, and GNU GPL'd, by Julian Seward et al.
>> ==12347== For more details, rerun with: -v
>> ==12347==
>> Python 2.5.1 (r251:54863, Apr 6 2008, 21:59:15)
>> [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>>
>> ==12347==
>> ==12347== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 9 from 2)
>> ==12347== malloc/free: in use at exit: 599,274 bytes in 2,441 blocks.
>> ==12347== malloc/free: 12,890 allocs, 10,449 frees, 2,420,712 bytes allocated.
>> ==12347== For counts of detected errors, rerun with: -v
>> ==12347== searching for pointers to 2,441 not-freed blocks.
>> ==12347== checked 998,864 bytes.
>> ==12347==
>> ==12347== LEAK SUMMARY:
>> ==12347==    definitely lost: 0 bytes in 0 blocks.
>> ==12347==    possibly lost: 15,736 bytes in 54 blocks.
>> ==12347==    still reachable: 583,538 bytes in 2,387 blocks.
>> ==12347==    suppressed: 0 bytes in 0 blocks.
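The bug class described above — memory that grows without bound yet
shows up only as "still reachable" in valgrind's leak summary — can be
sketched in a few lines of C. Each call below links a fresh block into a
global list and never releases it; at exit every block is still
reachable through `cache_head`, so valgrind does not count it as
"definitely lost", even though memory use grows with every call. The
names here are hypothetical, not taken from Singular or Sage:

```c
#include <stdlib.h>

/* Sketch of a "still reachable" leak: blocks are never freed, but a
 * global pointer keeps them all reachable, so suppressing the
 * still-reachable category hides a real, growing leak. */

struct node {
    struct node *next;
    char payload[64];
};

static struct node *cache_head;  /* keeps every chunk reachable forever */
static size_t cache_len;

void remember_result(void) {
    struct node *n = malloc(sizeof *n);
    if (!n)
        return;
    n->next = cache_head;  /* link in; the old blocks stay referenced */
    cache_head = n;
    cache_len++;
}

size_t cached_blocks(void) {
    return cache_len;
}
```

Run this in a loop and the process grows steadily, yet a valgrind run
with still-reachable blocks suppressed reports a clean exit — which is
exactly why suppressing that category is cosmetic rather than a fix.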
>> So we have roughly 2,400 blocks of memory that Python itself cannot
>> properly deallocate due to problems with their own garbage
>> collection.
>
> See above.
>
>> I will spare you the numbers for Sage (they are much worse due to
>> still-to-be-solved problems with Cython and extensions in general),
>> and starting to poke for leaks in that pile of crap is not something
>> I find appealing. Sure, once all mallocs in Sage are converted to
>> sage_malloc one could switch and see what happens, but I can
>> guarantee you that debugging some problem stemming from us doing
>> something stupid with malloc, compared to tracking it down inside
>> Python, is not a contest. Check out #1337 on our trac to see such a
>> case.
>
> True, malloc is more straightforward in that sense.
>
>> And by the way: The python interpreter itself does leak some small
>> bits of memory while running. So the above, while it looks like a
>> really good idea, is far from a foolproof solution.
>
> Is your argument: there are already lots of memory leaks in python and
> sage so a few more is not a big deal?

No: My argument is that the solution you suggested is not something
that will work in the general case, but will obfuscate real problems at
the expense of some corner cases. As I mentioned in another email: we
must do more input checking to avoid memory leaks, but on a C level
that is the price you pay. Your needs might be different from Sage's,
and if from your perspective the [potential] performance penalty and
also the [in my eyes] much higher debugging complexity are worth it, I
would be curious to hear how it works out. We do take memory leaks
very, very seriously and have found numerous issues in Sage code as
well as the external libraries. And if you look at mathematical code
these days, the leaks that cause trouble are real leaks in the code,
not in the corner cases. Once all of those are wiped out we can go
after the next problem.
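The "input checking on the C level" discipline mentioned above usually
means two things in practice: validate arguments before allocating
anything, and on any failure unwind every allocation made so far (the
classic goto-cleanup pattern used throughout CPython's own C sources).
A minimal sketch, with hypothetical names, might look like:

```c
#include <stdlib.h>
#include <string.h>

/* Either both members are allocated, or the function returns NULL and
 * nothing leaks: every error path frees what was allocated before it. */

struct pair {
    char *a;
    char *b;
};

struct pair *make_pair(const char *x, const char *y) {
    struct pair *p;

    if (!x || !y)               /* validate input before allocating */
        return NULL;

    p = malloc(sizeof *p);
    if (!p)
        return NULL;

    p->a = malloc(strlen(x) + 1);
    if (!p->a)
        goto fail_a;            /* only p exists so far */
    strcpy(p->a, x);

    p->b = malloc(strlen(y) + 1);
    if (!p->b)
        goto fail_b;            /* p and p->a must both be unwound */
    strcpy(p->b, y);

    return p;                   /* caller owns p via free_pair() */

fail_b:
    free(p->a);
fail_a:
    free(p);
    return NULL;
}

void free_pair(struct pair *p) {
    if (!p)
        return;
    free(p->a);
    free(p->b);
    free(p);
}
```

The point of the labeled cleanup chain is that each failure exit frees
exactly the allocations that preceded it, so no input — valid or not —
can leave a half-built object behind.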
> Brian

Cheers,

Michael

--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---