drieux wrote:

> On Nov 30, 2003, at 2:39 PM, R. Joseph Newton wrote:
>
> > Does the difference between the two memory readings represent a memory
> > leak?
> >
> > I think that is unlikely. If it were a true memory leak, it would have
> > taken a much greater toll. More like the OS simply lets the program keep
> > memory that it seems to need, thus sparing the overhead of further
> > requests. The second round of loading this hash certainly went much more
> > quickly, since Perl could use its own internal allocation routines.
>
> I'm not sure I get your question/assertion here.
> Are you attempting to establish that it is
> impossible in perl to have memory leaks?

Nope. Not at all. Pathological cases can arise within any system. I would
say, though, that it generally does take a pathological case to induce such
a leak. Things can happen in forking that will do it. So, as you and I have
both pointed out, do circular references. I actually agree fully with what I
take to be your primary premise: that good program design is the best
prevention.

> are you asking me to write you leaky perl code
> to show you how to do it??? Hmmmm?

Ah,... well, sure--if you can be concise about it.

> Have you actually read say
>
> perldoc -q "How can I free an array or hash so my program shrinks"

Nope. Don't know what it is about my implementation of perldoc, but the
"How do I"s don't render much of use. I just stick to the keywords. I
haven't really searched on the problem, because it hasn't arisen. Until this
morning, when I cooked up that memory-grabber for the sake of this
experiment, I had never seen the Perl process take more than about 4 MB of
memory at any one time.

Just found it, with the more terse "perldoc -q free". Cool. Sounds like it's
not just on NT that programs retain their memory allocations.

> so that you understand that once allocated to the program
> that memory is not going to be handed back to the system?
> And that as such, that is not a 'memory leak'? Just as
> is true in the case of say C, C++, etc???
>
> The reason that the 2..N iteration of reusing the same
> memory that has been allocated runs faster is because
> the memory space has already been allocated to the code.
> Whereas the first instance runs 'slower' because it is
> still doing all of the malloc-ing to grow that space
> ( cf man malloc ).
>
> Other things you might want to read on would be say:
>
> <http://www.perldoc.com/perl5.8.0/pod/perl561delta.html#Known-Problems>
> <http://www.perldoc.com/perl5.8.0/pod/perldebguts.html#Debugging-Perl-memory-usage>

Interesting references. I would say that the first speaks to a rather
perverse usage, though. We generally try to steer newbies away from the
local keyword, except for strictly scoped tweaks to certain built-in
globals. Generally, we just say: "Don't do that."

The second was an interesting confirmation of an impression I had when I
compared the allocation size to the number of actual hashes and elements in
the structure. My earlier statement was incorrect, though, when I said that
this structure only got to three dimensions. It only processed three
dimensions, but it did allocate memory for the fourth, giving it about 0.45
million elements, each an anonymous hash. That makes for about 140-150 bytes
per element, at 70-some MB for the entire structure.

It definitely is something to keep in mind, but probably somewhere in the
back of your mind. Perl has a lot of overhead for each variable, and each
variable is solid as a tank. A very fair trade-off. Unless you are trying,
as I was, to grab a big chunk of memory, it usually isn't going to happen.
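For concreteness, here is a minimal sketch of the circular-reference leak we
both mentioned, plus the retention behaviour that "perldoc -q free"
describes. The variable names and sizes are only illustrative; they are not
taken from the actual memory-grabber script.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Scalar::Util qw(weaken);

    {
        # A circular reference: each hash holds a reference to the other,
        # so neither reference count can ever fall to zero on its own.
        my $parent = {};
        my $child  = { parent => $parent };
        $parent->{child} = $child;
    }   # both hashes are now unreachable but never reclaimed -- a true leak

    {
        # Same shape, but the back-reference is weakened; when $parent and
        # $child go out of scope the cycle collapses and Perl reclaims the
        # memory for its own reuse.
        my $parent = {};
        my $child  = { parent => $parent };
        weaken($child->{parent});
        $parent->{child} = $child;
    }

    # The retention effect itself: build a large hash of anonymous hashes,
    # then drop it.  Perl can reuse the space internally, but the
    # allocation stays with the process.
    my %big;
    for my $i (1 .. 100_000) {
        $big{$i} = { map { $_ => 1 } 1 .. 10 };
    }
    undef %big;   # freed for Perl's reuse, not handed back to the OS

Watching this under top (or Task Manager on NT) on most systems shows the
footprint grow and then stay flat after the undef, which matches what the
FAQ entry says.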
Joseph

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]