On 4/30/10 8:30 AM, "Tim Cutts" <t...@sanger.ac.uk> wrote:
> On 30 Apr 2010, at 4:06 pm, Paul Krizak wrote:
> 
>> Has anybody out there ever tried scaling up a cfengine server (v2.1 or
>> v2.2) on a really big, fast server?  I'm thinking on the order of 4
>> sockets, 24 cores, and a 10Gbit NIC.
>> 
>> This is to support a particularly massive (and temporary) flood of
>> cfagent requests to synchronize their local policy.  It's going to be a
>> lot easier to scale the server up in this case rather than adjust the
>> policy to distribute requests to multiple cfservd's.
> 
> How many clients are you talking about?  And how much policy?  I have 2300
> clients updating policy once an hour from a small 1GigE-connected, dual socket
> server (four cores total) which also runs Splunk and nagios, so is quite busy
> with other things, and it copes just fine, with a load average of 0.38.  Total
> size of all policy files on my setup is 2.9 MB.  cfengine version is 2.2.8.
> The SplayTime is also one hour, so the cfengine load on the server is more or
> less steady.

If you've got a policy or a script that builds your cfengine servers (and
you should), it's not hard to build more cfservds (well, technically,
cfservd is usually running everywhere...and all our hosts are clients and
servers...but you know what I mean).
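If your clients already pull policy by hostname, pointing them at a pool is just a DNS/VIP change on the client side.  A rough sketch of what that looks like in a cfengine 2 update.conf -- the hostname, path, and variable names here are made up for illustration, not anything from your setup:

```
control:
   # Name that resolves to the VIP in front of the cfservd pool
   policyhost = ( cfmaster.example.com )

copy:
   /var/cfengine/masterfiles  dest=$(workdir)/inputs
                              r=inf
                              mode=644
                              type=binary
                              server=$(policyhost)
                              trustkey=true
```

Since every client resolves the same name, you can grow or shrink the pool behind the VIP without ever touching client policy again.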

So...  Why not stick a few of them behind a load-balanced VIP?  DSR (direct
server return) would be best in this case, since it off-loads the return
traffic from the balancer and scales the NETWORK INTERFACE, RAM, etc. across
the pool (not just adding cores).
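As a concrete sketch of the DSR idea using Linux LVS (ipvsadm) -- all the addresses below are hypothetical, and this assumes cfservd on its default port 5308:

```shell
# LVS direct-routing (DSR) sketch -- hypothetical addresses.
# The VIP (10.0.0.100) is what clients resolve as their policy host.
ipvsadm -A -t 10.0.0.100:5308 -s rr             # virtual service, round-robin
ipvsadm -a -t 10.0.0.100:5308 -r 10.0.0.11 -g   # real server 1, -g = direct routing
ipvsadm -a -t 10.0.0.100:5308 -r 10.0.0.12 -g   # real server 2

# Each real server also needs the VIP on a loopback alias, with ARP for it
# suppressed, so replies go straight back to the client -- that's the part
# that takes the return traffic off the director entirely.
```

With DSR the director only ever sees the inbound (request) half of each connection, which for a file-serving workload like policy distribution is the small half.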

This is generally how you scale throughput for any other server farm
(youtube.com doesn't run on one massive server).  I've said it before --
load balancers can be cheap, or even free.  I've supported some very popular
ecommerce sites using nothing but commodity hardware and OSS (I grumbled a
lot, but it worked).

_______________________________________________
Help-cfengine mailing list
Help-cfengine@cfengine.org
https://cfengine.org/mailman/listinfo/help-cfengine