On Tuesday, 15 November 2016 07:09:13 GMT Jerry Lundström wrote:
> On 11/15/16 06:18, Sara Dickinson wrote:
> > -  It requires around an order of magnitude less CPU to compress the C-DNS
> > file.
> Can you clarify this, do you mean that it takes longer to compress PCAP
> than C-DNS?  If so, do you have numbers?  And how did you get them?

We looked at compressing raw PCAP versus the same data encoded as C-DNS as 
part of the C-DNS design process. We ran

$ /usr/bin/time xz -9 <input>

and logged the reported user and system CPU times and maximum resident set 
size. The aim was to gauge the resource consumption of the compression, and 
to answer the obvious question of whether it was better than just 
compressing PCAP.

We used 'xz -9' to get some idea of the upper bound of achievable compression.

We looked at 5 minute capture samples for 3 different servers, with loadings I 
might roughly describe as light, medium and heavy.

You'll need a fixed-width font to view the following table of results 
comfortably:

                   File size                      CPU (s)
Traffic  Format    Input    Output   VM space    User   System
Light    PCAP     47.92M    4.14M    490.56M     8.27     7.29
Light    C-DNS     5.12M    1.78M    113.44M     3.39     1.19
Medium   PCAP    245.77M   17.96M    690.05M   102.53    15.91
Medium   C-DNS    23.57M    6.88M    276.47M     9.38     0.11
Heavy    PCAP    677.61M   45.25M    690.05M   180.15     6.61
Heavy    C-DNS    62.85M   16.83M    620.37M    29.36     0.95

This is really what one would expect. C-DNS gives the compressor much smaller 
input, so there is less to chew through, and it groups data likely to be 
similar together in its header tables. The compressor therefore has an easier 
job and needs fewer resources.
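As a quick cross-check of the table: summing user and system CPU time per run and taking the PCAP-to-C-DNS ratio (the figures below are copied straight from the table above):

```shell
# Total CPU (user + system) per run and the PCAP/C-DNS CPU ratio.
awk 'BEGIN {
    printf "Light:  %6.2fs vs %5.2fs  (%.1fx)\n", 8.27+7.29,    3.39+1.19, (8.27+7.29)/(3.39+1.19)
    printf "Medium: %6.2fs vs %5.2fs  (%.1fx)\n", 102.53+15.91, 9.38+0.11, (102.53+15.91)/(9.38+0.11)
    printf "Heavy:  %6.2fs vs %5.2fs  (%.1fx)\n", 180.15+6.61,  29.36+0.95, (180.15+6.61)/(29.36+0.95)
}'
```

So compressing C-DNS takes between roughly 3x and 12x less total CPU than compressing the equivalent raw PCAP, depending on load.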
-- 
Jim Hague - j...@sinodun.com          Never trust a computer you can't lift.

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop