Hi Leif;
> > Hi Ben;
> >
> > It will store approximately 2.8 TB of data daily. Our network has 20,000
> > users. The ratio of cached vs. new data is approximately 30%. The system
> > also generates 30 GB of logs. Given this information, what are your
> > recommendations?
>
> You expect to "write" 2.8 TB of new data every day? How much of that is
> typically active? It's difficult to say without knowing usage patterns,
> but if cost is a factor (and it usually is :), I'd probably stay with
> rotational drives over SSD for a very large data set, and use some of
> that money for more RAM. I'd probably go with at least 64 GB of RAM on
> such a system.

We will probably use 512 GB of RAM on the production system; the RAM cache
size will be 400 GB.

> As for the ATS configs, the defaults would probably be just fine with
> that setup, at least initially, except for manually setting up the RAM
> cache. The one thing you can look at which can reduce memory usage on
> very large disk caches is proxy.config.cache.min_average_object_size.
> Increasing it will reduce memory consumption, but also reduce the total
> number of objects that the cache can hold. So, if you have a lot of very
> large objects, increasing this makes sense.
>
> For 10Gig NICs, there might be interesting things to do around IRQ
> balancing, ring buffers, and that sort of thing.

I've done something like increasing the ring buffer with ethtool
(ethtool -G ethx rx 4078) and enabling large receive offload
(ethtool -K ethx lro on). By the way, I've also increased the kernel
buffer sizes and the ATS inbound/outbound read/write buffer sizes.

> I should get our rocket-scientist devops guys here to write something up.
> Ben and John, are you reading this? :)
>
> Cheers,
>
> -- Leif

Best Regards,
Ergin
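[Editor's note] To make the RAM-cache and min_average_object_size settings discussed above concrete, here is a records.config sketch. The 400 GB RAM cache comes from the thread; the min_average_object_size value of 32000 is purely illustrative, not a recommendation from either poster.

```
# Sketch only -- values are illustrative.
# RAM cache size from the thread (400 GB):
CONFIG proxy.config.cache.ram_cache.size INT 400G
# Raising this reduces cache-directory memory use, at the cost of the
# total number of objects the cache can hold (default is 8000 bytes):
CONFIG proxy.config.cache.min_average_object_size INT 32000
```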
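[Editor's note] For the NIC and kernel-buffer tuning Ergin mentions, a sketch of the commands (the interface name ethx is a placeholder, the sysctl sizes are illustrative, and ring-buffer maxima are driver-dependent; these need root):

```shell
# Show current and maximum ring-buffer sizes before changing anything.
ethtool -g ethx
# Grow the RX ring buffer (the thread used 4078; check your driver's max).
ethtool -G ethx rx 4078
# Enable large receive offload.
ethtool -K ethx lro on
# Raise kernel socket-buffer ceilings (sizes here are illustrative).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```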
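[Editor's note] A rough back-of-the-envelope for why min_average_object_size matters on a disk cache of this size: ATS keeps a small fixed-size directory entry (roughly 10 bytes) per potential object, so directory RAM scales with disk size divided by the configured average object size. A sketch under those assumptions, using the thread's ~2.8 TB figure:

```shell
# Approximate cache-directory RAM: (disk_bytes / min_average_object_size)
# entries, at roughly 10 bytes per directory entry (assumption).
disk=2800000000000             # ~2.8 TB disk cache, from the thread
echo $(( disk / 8000 * 10 ))   # default 8000-byte average -> 3500000000 (~3.5 GB)
echo $(( disk / 32000 * 10 ))  # 4x larger average -> 875000000 (~0.9 GB)
```

Quadrupling the average object size cuts directory RAM to a quarter, which is the trade-off Leif describes.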