On Tue, Jun 18, 2013 at 3:30 PM, David Lang <[email protected]> wrote:

> The overhead of the opens and closes is so high that I expect that you
> just need to scale it to the point where you are keeping them open.
>
> If it's set a lot larger than what you need it to be, it wastes memory
> that you could use for other things (I don't know how much)


It depends on the buffer parameters. By default I think it is two 64k buffers
(but I may be wrong).
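
In config terms, those should be the per-file I/O buffers inside omfile. A
rough sketch of how I would tune them in v7 syntax (parameter names as in the
omfile doc; the "perHostFile" template name and the 64k size are just
placeholders, not the built-in defaults):

    # illustrative sketch only -- adjust template name and sizes to your setup
    action(type="omfile"
           dynaFile="perHostFile"   # dynafile template (placeholder name)
           ioBufferSize="64k"       # write buffer per open file
           asyncWriting="on")       # write from a background thread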


> , and I guess if it's too large it could be expensive to search and find
> that something isn't in there.
>

In current v7 that's no longer a problem; we have switched to a hash table
lookup. We have seen cases with low thousands of open files and good
performance (that is actually what made us switch ;)).


>
> But I would expect that these would be fairly minor effects. I don't
> understand why the default is so low.
>
>
That stems back to pre-journald times, when we weighed the SOHO vs. enterprise
use cases. I should probably raise the default a bit now.
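
For the archives, raising it is a one-liner in either config style; the 500
below is just the figure from this thread (roughly the number of distinct
files James writes to), so size it to your own file count ("perHostFile" is
again a placeholder template name):

    # legacy directive, placed before the file actions it should apply to
    $DynaFileCacheSize 500

    # or, equivalently, as a v7 omfile action parameter
    action(type="omfile" dynaFile="perHostFile" dynaFileCacheSize="500")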

Rainer

> David Lang
>
>
> On Tue, 18 Jun 2013, Boylan, James wrote:
>
>> We definitely do have many files being created.
>>
>> I'm starting to do the strace and I see what you mean about tons of open
>> and close actions. At what point does increasing DynaFileCacheSize actually
>> start negatively impacting overall performance? Is there a number that we
>> should keep the cache size under? Or does it just need to be scaled based
>> on the performance of the hardware it is running on?
>>
>> -- James
>>
>>
>> -----Original Message-----
>> From: rsyslog-bounces@lists.adiscon.com [mailto:rsyslog-bounces@lists.adiscon.com]
>> On Behalf Of David Lang
>> Sent: Monday, June 17, 2013 4:07 PM
>> To: rsyslog-users
>> Subject: Re: [rsyslog] imPTCP module
>>
>> On Mon, 17 Jun 2013, Boylan, James wrote:
>>
>>> Per David and Rainer's suggestion, I've cut us over to this module.
>>> Definitely an improvement for performance.
>>>
>>> I do have one question. The configuration option $InputPTCPHelperThreads
>>> doesn't seem to do anything. I have it set to 12 (it's a 23-core machine),
>>> but it only ever creates 3 threads for the imptcp module.
>>>
>>
>> I think it will use one thread per inbound connection, up to the max.
>>
>> If I remember your prior posts, you only had a handful of systems sending
>> you connections, but they were sending them at very high rates (I could
>> very easily be mixing you up with the other team that had thousands of
>> hosts sending
>> connections)
>>
>> But in any case, this shows that your bottleneck is not on the input side
>> (at least not with imptcp); it's on the output side, where you are using 8
>> threads, each using about 1/4 of a core.
>>
>> This makes me think that you have problems in your ruleset that we should
>> look at optimizing.
>>
>> Am I correct in remembering you as the one who started off with 480 very
>> complex if statements and we simplified it down to ~30 if statements?
>>
>> If so, one thing that you need to do is to increase the number of
>> different files that it keeps track of.
>>
>> DynaFileCacheSize defaults to keeping track of 10 files. Since you have
>> ~500 files that you are writing to, I think that you need to set this to
>> 500 or higher.
>>
>> I'll bet that if you were to do a strace of those main Q threads you
>> would find that they are doing a lot of opening and closing of files
>> (pretty close to every message), and increasing the DynaFileCacheSize to
>> something large enough to avoid that would result in a very sharp decrease
>> in the CPU needed, and an even larger increase in the rate of messages
>> written.
>>
>> David Lang
>>
>>> 26694 root      20   0 15.9g 7.9g 1480 S 26.8 16.8   3:44.63 rs:main Q:Reg
>>> 26695 root      20   0 15.9g 7.9g 1480 R 26.3 16.8   3:44.89 rs:main Q:Reg
>>> 26689 root      20   0 15.9g 7.9g 1480 S 23.8 16.8   3:46.23 rs:main Q:Reg
>>> 26693 root      20   0 15.9g 7.9g 1480 S 23.5 16.8   3:45.76 rs:main Q:Reg
>>> 26698 root      20   0 15.9g 7.9g 1480 S 23.5 16.8   3:44.26 rs:main Q:Reg
>>> 26697 root      20   0 15.9g 7.9g 1480 S 22.8 16.8   3:43.07 rs:main Q:Reg
>>> 26699 root      20   0 15.9g 7.9g 1480 S 22.8 16.8   3:45.14 rs:main Q:Reg
>>> 26696 root      20   0 15.9g 7.9g 1480 S 22.0 16.8   3:46.56 rs:main Q:Reg
>>> 26685 root      20   0 15.9g 7.9g 1480 S  1.8 16.8   0:48.19 in:imptcp
>>> 26690 root      20   0 15.9g 7.9g 1480 S  1.8 16.8   0:28.76 in:imptcp
>>> 26692 root      20   0 15.9g 7.9g 1480 S  1.0 16.8   0:26.70 in:imptcp
>>> 26682 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 rsyslogd
>>> 26683 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 in:immark
>>> 26684 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 in:imudp
>>> 26686 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 in:imuxsock
>>> 26687 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 in:imklog
>>> 26688 root      20   0 15.9g 7.9g 1480 S  0.0 16.8   0:00.00 in:impstats
>>>
>>> --James
>>>
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.
