On Tue, Oct 28, 2003 at 05:33:09PM +0000, [EMAIL PROTECTED] wrote:
> Tim Bunce <[EMAIL PROTECTED]> wrote:
> > On Tue, Oct 28, 2003 at 02:37:29PM +0000, [EMAIL PROTECTED] wrote:
> >>
> >> I ran some more tests, some of which might be more significant:
> >>
> >>                       time(sec)   db size (kB)   peak RAM (MB)
> >>   no coverage            15           ---           ~ 10
> >>   Data::Dumper+eval     246           245           ~ 23.4
> >>   Storable              190            60           ~ 19.7
> >>   no storage            184           ---           ~ 18
> >
> > Excellent. From 23.4-18 to 19.7-18 is 5.4 to 1.7. So Storable is
> > taking only 30% of the time that Data::Dumper+eval took.
>
> You're looking at the column for peak RAM usage.
D'oh. One of those days. Still, taking only 30% of the RAM is also good :)

> The time difference is 62 (246-184) vs 6 (190-184). So Storable is
> taking about 10% of the time that Dumper+eval took. File IO is now
> pretty insignificant next to the overhead of doing coverage. Hopefully
> that number will come down some, eventually.

Yeap.

> >> Eventually, I think that a transition to a real database (where
> >> you can read/write only the portions of interest) would be good.
> >
> > How would you define "portions of interest"?
>
> The files you're actually adding/updating coverage for. Right now, if
> your cover_db holds data for a dozen files, but you test them one at a
> time, you have to read and write *all* the coverage data (as well as
> have the RAM to hold it). That's a lot of unnecessary work and wasted
> memory.

Generally there'll be a set of driving scripts (eg test scripts) and a
bunch of modules being used by the driving scripts. Coverage for most of
the module source files would be generated by most of the tests.

Or am I missing something (I've not looked closely, still).

Tim.
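[Editor's note: a minimal sketch of the "portions of interest" idea discussed
above, keeping one Storable record per covered source file so a test run only
reads and rewrites the records it actually touches. The directory layout and
helper names (db_file_for, load_record, save_record) are assumptions for this
sketch, not Devel::Cover's real on-disk format or API.]

#!/usr/bin/perl
# Sketch: per-file coverage records stored with Storable, so updating
# coverage for one source file doesn't require reading or writing the
# rest of the database.
use strict;
use warnings;
use Storable qw(retrieve nstore);
use File::Path qw(mkpath);
use Digest::MD5 qw(md5_hex);

my $db_dir = "cover_db/per_file";    # hypothetical layout for this sketch

# One Storable file per covered source file.
sub db_file_for {
    my ($source_file) = @_;
    return "$db_dir/" . md5_hex($source_file) . ".st";
}

# Read just this file's record, or start a fresh one.
sub load_record {
    my ($source_file) = @_;
    my $path = db_file_for($source_file);
    return -e $path ? retrieve($path) : { file => $source_file, lines => {} };
}

# Merge new execution counts and write only this record back.
sub save_record {
    my ($source_file, $new_counts) = @_;
    mkpath($db_dir) unless -d $db_dir;
    my $record = load_record($source_file);
    $record->{lines}{$_} += $new_counts->{$_} for keys %$new_counts;
    nstore($record, db_file_for($source_file));
}

# Example: a run that only exercised lib/Foo.pm updates one small file
# instead of the whole coverage database.
save_record("lib/Foo.pm", { 10 => 1, 11 => 1, 42 => 3 });

The trade-off is many small reads/writes instead of one monolithic one, which
is exactly the win being discussed when cover_db holds a dozen files but each
test only touches a few of them.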