Hi, a little off the general topic, but I'm wondering whether the “tar -cf -
pgdata | wc -c” trick can be used as a general way to pre-warm the cache?
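
I mean something along these lines, run against whatever directory you want
resident (the path here is just an example):

    # read every file sequentially; wc -c just sinks the output, so
    # nothing is written anywhere but the pages land in the OS cache
    tar -cf - /path/to/pgdata | wc -c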

Thanks.

On Oct 9, 2014, at 10:55 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:

> On Thu, Oct 9, 2014 at 3:52 AM, pinker <pin...@onet.eu> wrote:
> Hello All,
> I have a brand new machine to tune:
> x3750 M4, 4x E5-4640 v2 2.2GHz; 512GB RAM (32x16GB), 4x300GB
> SAS + SSD (Easy Tier) in RAID 10
> 
> What's particularly important now is choosing an optimal configuration for
> write operations. We have been discussing the checkpoint_segments and
> checkpoint_timeout parameters. A test based on pg_replay showed that the
> largest amount of data is written with checkpoint_segments set to 10000
> and checkpoint_timeout to 30 min, but I'm worried about the amount of
> time needed for crash recovery.
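> 
> In postgresql.conf terms, the configuration that wrote the most data is
> roughly:
> 
>     checkpoint_segments = 10000    # WAL segments kept between checkpoints
>     checkpoint_timeout  = 30min    # maximum time between checkpoints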
> 
> Since you already have pg_replay running, kill -9 some backend (my favorite
> victim is the bgwriter) in the middle of a pg_replay run, and see how long
> it takes to recover.
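> 
> A rough sketch of that test (the process title match and log location are
> assumptions; adjust for your version and setup):
> 
>     # crash the background writer; the postmaster restarts everything
>     # and replays WAL from the last checkpoint
>     pkill -9 -f 'background writer'
>     # then time the recovery from the server log:
>     grep -E 'redo (starts|done)' /path/to/pgdata/pg_log/*.log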
>  
> You might want to try it with and without clobbering the FS cache, or simply 
> rebooting the whole machine, depending on what kind of crash you think is 
> more likely.
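> 
> On Linux, clobbering the FS cache looks something like:
> 
>     sync                                 # flush dirty pages to disk first
>     echo 3 > /proc/sys/vm/drop_caches    # as root: drop page cache, dentries, inodes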
> 
> Recovering into a cold cache can be painfully slow.  If your database mostly
> fits in memory, you can speed it up by using something (like "tar -cf -
> pgdata | wc -c") to read the entire pg data directory sequentially and,
> hopefully, cache it. If you find recovery too slow, you might want to use
> this trick (and put it in your init scripts) rather than lowering
> checkpoint_segments.
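> 
> A minimal init-script sketch (the PGDATA path is just an example):
> 
>     # warm the OS cache before starting postgres, so any crash recovery
>     # reads from memory instead of from cold disk
>     tar -cf - /var/lib/pgsql/data | wc -c
>     pg_ctl -D /var/lib/pgsql/data start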
> 
> Cheers,
> 
> Jeff
> 
