Tobi,

I'm curious what benchmarks/tracing you do during development to localize 
performance bottlenecks in rrdtool graphing.

The legend does slow graph generation -- 23 ms for a standard graph without a 
legend vs. 34 ms with it, for me -- and it's worse on graphs with long legends. 
That's one reason I render legends with DataTables (via JSON) once they get 
sizable (50+ entries) instead of having rrdtool draw them (plus, of course, 
DataTables is a better format for that sort of thing, but I digress).

What I'm really curious about is this: during a graph/xport run over 25,000 
(25K) datasources, rrdtool (via the perl RRDs bindings) spends considerable 
time doing this:

mremap(0x7fe0454df000, 56258560, 56262656, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56262656, 56266752, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56266752, 56270848, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56270848, 56274944, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56274944, 56279040, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56279040, 56283136, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56283136, 56287232, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56287232, 56291328, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56291328, 56295424, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56295424, 56299520, MREMAP_MAYMOVE) = 0x7fe0454df000
mremap(0x7fe0454df000, 56299520, 56303616, MREMAP_MAYMOVE) = 0x7fe0454df000


Note that each mremap() above grows the mapping by exactly 4096 bytes -- one 
page at a time. Granted, the throughput is still very good: 99 seconds for the 
graph over 25k datasources and 3 hours of 1-minute data (25,000 x 180 = 4.5M 
datapoints, so ~45k datapoints calculated per second). But the cost clearly 
grows worse than linearly as the number of datasources in a request increases. 
(I say "very good" in comparison with other numerical data export systems.)

Other tests (fewer datasources, longer time frames) reach much higher rates -- 
up to millions of datapoints per second:

5,000 datasources, same time frame/data: 5.48 seconds == ~164k datapoints 
calculated per second (5,000 x 180 = 900k datapoints).
100 datasources, 90-day time frame, 5-minute data: 1.32 seconds == ~1.9 million 
datapoints calculated per second (100 x 25,920 = ~2.6M datapoints).

Oddly enough, the mremap()s above happen before any files are even opened and 
read. I've strace'd httpd during such a request: it spends essentially all of 
its time doing mremap()s, then very quickly reads all the datasources and 
produces the graph -- more than a minute mremap'ing, then mere seconds 
open/read/close'ing data files and rendering the output.

I mention that this also happens with xport because I need the same kind of 
merge there (sum() and RPN math across many datasources), and it has to be 
quick as well.

It's as if the rrdtool syntax parser is very slow, or as if rrdtool builds a 
'workspace' in memory up front for the calculation and keeps resizing it one 
page at a time as the arguments are parsed?
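
One way to confirm that would be a micro-benchmark that scales the number of 
DEFs while keeping everything else fixed. A sketch only -- the .rrd paths and 
the DS name 'value' are placeholders -- using the same sum()-style RPN merge 
described above:

#!/usr/bin/perl
# Sketch: time RRDs::xport while scaling the datasource count n.
# The .rrd paths and DS name 'value' are placeholders.
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);
use RRDs;

for my $n (100, 1000, 5000, 25000) {
    my @args = ('--start', 'end-3h', '--end', 'now', '--step', '60');
    my @vnames;
    for my $i (0 .. $n - 1) {
        push @args, "DEF:ds$i=/path/to/data$i.rrd:value:AVERAGE";
        push @vnames, "ds$i";
    }
    # RPN sum of all vnames: total = ds0,ds1,+,ds2,+,...
    my $first = shift @vnames;
    push @args, 'CDEF:total=' . join(',', $first, map { ($_, '+') } @vnames);
    push @args, 'XPORT:total:sum';
    my $t0 = [gettimeofday];
    my ($start, $end, $step, $cols, $legend, $data) = RRDs::xport(@args);
    if (my $err = RRDs::error) { warn "n=$n: $err\n" }
    printf "n=%-6d %7.2f s\n", $n, tv_interval($t0);
}

If wall time grows much faster than n x 180 datapoints, the cost is in the 
argument-parsing/setup phase rather than in the data pass.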

Any tips on how to track down why this happens, so I can work on (or suggest) 
a fix, would be appreciated.

Thanks,
-Ryan


________________________________
 From: Tobias Oetiker <t...@oetiker.ch>
To: Christoph Anton Mitterer <cales...@scientia.net> 
Cc: rrd-users@lists.oetiker.ch 
Sent: Sunday, August 12, 2012 12:07 AM
Subject: Re: [rrd-users] RRA/RRD tuning
 
... snip...

the biggest win for graphing time would be to integrate support for
setting the png compression level into rrdtool ... libcairo sets
this very high, and thus spends considerable time compressing the
resulting png file ... setting the compression level to 1 should
give a considerable performance gain ...

I will be glad to integrate your patch

cheers
tobi
>
> Thanks for your insights,
> Chris.
>

-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch t...@oetiker.ch ++41 62 775 9902 / sb: -9900

_______________________________________________
rrd-users mailing list
rrd-users@lists.oetiker.ch
https://lists.oetiker.ch/cgi-bin/listinfo/rrd-users
