I'd like to discuss the consequences of the remark below in a bit more depth:
On Fri, Sep 20, 2013 at 08:27:47PM +0000, bugzilla-dae...@wireshark.org wrote:
> https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=9114
> ...
> --- Comment #1 from Jeff Morriss <jeff.morriss...@gmail.com> ---
> Wireshark intentionally does not free all the memory it had allocated when
> closing a capture file. It uses its own memory allocator, which allows it to
> keep (rather large) blocks of memory around for later re-use (when that
> happens, the memory allocator marks all the memory as "freed", but you won't
> be able to see that from any OS utilities: they will simply report Wireshark
> as still having allocated however much memory). Not freeing that memory is an
> optimization to avoid having to re-allocate that memory again when the next
> file is read.

Is this really the right strategy? If I open a huge capture file (with correspondingly huge allocations) and then open a small file, the (virtual) memory will still be gone. How big is the performance win of not freeing/reallocating in real operation?

thanks
  Jörg
--
Joerg Mayer <jma...@loplof.de>
We are stuck with technology when what we really want is just stuff that works.
Some say that should read Microsoft instead of technology.
___________________________________________________________________________
Sent via: Wireshark-dev mailing list <wireshark-dev@wireshark.org>
Archives: http://www.wireshark.org/lists/wireshark-dev
Unsubscribe: https://wireshark.org/mailman/options/wireshark-dev
            mailto:wireshark-dev-requ...@wireshark.org?subject=unsubscribe