On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji <venkat.bal...@verse.in> wrote:
>
> On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt <jay.lev...@gmail.com> wrote:
>>
>> We need to do a few bulk updates as Rails migrations.  We're a typical
>> read-mostly web site, so at the moment, our checkpoint settings and WAL are
>> all default (3 segments, 5 min, 16MB), and updating a million rows takes 10
>> minutes due to all the checkpointing.
>>
>> We have no replication or hot standbys.  We're a consumer-web startup with
>> no SLA and not a huge database, and if we ever do have to recover from
>> downtime it's OK if it takes longer.  Is there a reason NOT to always run
>> with something like checkpoint_segments = 1000, as long as I leave the
>> timeout at 5m?
>
>
> Checkpoints will still occur every 5 minutes regardless. That said,
> checkpoint_segments = 1000 is huge: it implies up to 16MB * 1000 = 16000MB
> (~16GB) of pg_xlog data, which is not advisable from an I/O perspective or
> a data-loss perspective. Even in the unlikely case that all 1000 files fill
> up in under 5 minutes, the system could slow down due to heavy I/O and CPU
> load.

As far as I know, there is no data-loss issue with a lot of checkpoint segments.
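
For what it's worth, here is a rough sketch of how the settings under
discussion would look in postgresql.conf. The values are illustrative only,
sized for a one-off bulk update rather than a recommendation for any
particular hardware:

    # postgresql.conf -- illustrative values for a one-off bulk update
    checkpoint_segments = 64             # more WAL between checkpoints (default 3)
    checkpoint_timeout = 5min            # time-based checkpoints still fire on this timer
    checkpoint_completion_target = 0.9   # spread checkpoint I/O over the interval (default 0.5)

The main cost of a large checkpoint_segments is more disk used under pg_xlog
and a longer crash-recovery replay, not lost data.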

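If you want to confirm whether checkpoints are being forced by segment
exhaustion rather than the 5-minute timer, pg_stat_bgwriter tracks both;
something like this run before and after the migration shows which counter
is climbing:

    -- timed checkpoints vs. checkpoints forced by filling the WAL segments
    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

Setting log_checkpoints = on records the same thing in the server log, along
with how long each checkpoint took.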