On Wed, 17 Jul 2013, Radu Gheorghe wrote:
Hello,
I have a question that I can't answer by looking into the code: I see this
in the documentation of imudp <http://www.rsyslog.com/doc/imudp.html>:
----------------
TimeRequery <nbr-of-times>
this is a performance optimization. Getting the system time is very costly.
With this setting, imudp can be instructed to obtain the precise time only
once every n-times. *This logic is only activated if messages come in at a
very fast rate*, so doing less frequent time calls should usually be
acceptable. The default value is two, because we have seen that even
without optimization the kernel often returns twice the identical time. You
can set this value as high as you like, but do so at your own risk. The
higher the value, the less precise the timestamp.
----------------
What does "come in at a very fast rate" mean? How is this judged? I think
this is important in order for me to evaluate if increasing it would hurt
the timestamp accuracy badly or not. For example, if I set it to 10, and I
get 10 logs in an hour, it will be bad :) I suppose it's not the case,
though.
Any pointers will be highly appreciated. Thanks!
I could be wrong here, but this is my understanding.
as long as the imudp module never sleeps (i.e. it reads one message and there is
immediately another message waiting), it will process TimeRequery messages
between timestamp updates.
As soon as it sleeps because there is no new message to process, it will grab a
fresh timestamp when the next message arrives.
As a result, you can set this very large with safety (given that imudp can
process better than gig-E wire speed without a problem),
but the benefit of this starts fading fairly quickly.
Way back in the early 3.x days, rsyslog would do up to 5 timestamps per message
(when it was received, when it was put on the main queue, when it was moved from
the main queue to the action queue, when it was pulled from the action queue,
when it was sent), and the gettimeofday calls were a very significant part of
the processing time of a message.
Most of these have been eliminated and Linux now has a much faster gettimeofday
call, so this is less of a problem than it used to be.
But once you amortize it over 10 or 100 messages, the gettimeofday call stops
being a very significant portion of the message processing time.
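A back-of-envelope way to see the amortization; the cost numbers used in the test are assumed, illustrative values, not measurements:

```c
/* Fraction of per-message time spent on gettimeofday when one call is
 * amortized over `requery` messages. */
static double gtod_share(double gtod_ns, double per_msg_ns, int requery)
{
    double amortized = gtod_ns / requery;   /* cost charged per message */
    return amortized / (per_msg_ns + amortized);
}
```

With, say, 30 ns per gettimeofday and 1 µs of other per-message work, TimeRequery=1 costs about 3% of each message, TimeRequery=10 about 0.3%, and anything past 100 disappears into the noise.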
Do some high-traffic tests. I'll bet that the CPU utilization of the imudp
thread will drop noticeably when you move from 1 to 10, a bit from 10 to 100,
and be lost in the noise when you go from 100 to 1000.
On modern hardware, as long as the imudp thread can have its own CPU core, I'll
bet you could take it up over 1000000 before you started noticing timestamps
being off by more than a fraction of a second.
Now, if you care about timestamps to 0.0001 second accuracy, you may be more
sensitive to this, but I don't think you will be much more sensitive to it
because the CPU cores are so fast that they really can process a LOT of
messages, and they will always pull a new timestamp when the core gets ahead of
the inbound messages.
David Lang
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog