> How exactly is your filesystem under the database mounted?

ext3 (rw)

Thanks, we'll try new autovacuum settings!
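For illustration only, a "more aggressive" autovacuum configuration along the lines suggested later in this thread might look roughly like this in postgresql.conf; the numbers are placeholder assumptions, not values recommended by anyone here:

    autovacuum_max_workers = 5               # a couple more workers than the default 3
    autovacuum_naptime = 30                  # wake the launcher more often (seconds)
    autovacuum_vacuum_scale_factor = 0.05    # vacuum after ~5% dead tuples instead of 20%
    autovacuum_analyze_scale_factor = 0.05   # analyze more often as well
    autovacuum_vacuum_cost_delay = 10        # shorter sleep (ms) than the default 20
    autovacuum_vacuum_cost_limit = 1000      # more work per cost cycle than the effective default of 200

All of these except autovacuum_max_workers take effect on a config reload; autovacuum_max_workers needs a server restart.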
> Did that happen after changing the checkpoint_segments setting?

First, I have to say that the load comes and goes in waves - we don't yet
understand why. All the new waves show that behaviour - free disk, idle CPU.
The first such wave was before the checkpoint_segments change, the next
waves came after it. No more warnings about too-frequent checkpoints though.

Best regards,
Dmitriy Shalashov


2014-04-25 13:22 GMT+04:00 Ilya Kosmodemiansky <ilya.kosmodemian...@postgresql-consulting.com>:

> On Fri, Apr 25, 2014 at 10:29 AM, Дмитрий Шалашов <skau...@gmail.com> wrote:
> > Previously during load the disk was 100% busy; now we have around 100
> > active-state queries, a 100% loaded CPU, but the disk is virtually idle...
>
> Did that happen after changing the checkpoint_segments setting?
>
> >
> > Best regards,
> > Dmitriy Shalashov
> >
> >
> > 2014-04-25 12:22 GMT+04:00 Дмитрий Шалашов <skau...@gmail.com>:
> >
> >> Hi Ilya!
> >>
> >> > Actually, these two things are tightly bound and there is no chance to
> >> > avoid vacuum, you can only postpone it; this kind of work is eventually
> >> > supposed to be done.
> >>
> >> I understand that autovacuum has to be done, but not right after the
> >> previous autovacuum? And then again and again.
> >> And after cancelling that first autovacuum I started another one by hand;
> >> from there no autovacuum was cancelled.
> >>
> >> > ionice autovacuum instead of the mission-critical checkpointer or bgwriter
> >> Yeah, that was desperate. I restarted the server when I had a chance - to
> >> drop my ionice settings back to defaults.
> >>
> >> > Which exact values do you have for the following settings:
> >>
> >> autovacuum_analyze_scale_factor = 0.1
> >> autovacuum_analyze_threshold = 50
> >> autovacuum_freeze_max_age = 200000000
> >> autovacuum_max_workers = 3
> >> autovacuum_naptime = 60
> >> autovacuum_vacuum_cost_delay = 20
> >> autovacuum_vacuum_cost_limit = -1
> >> autovacuum_vacuum_scale_factor = 0.2
> >> autovacuum_vacuum_threshold = 50
> >> log_autovacuum_min_duration = 0
> >>
> >> All defaults except the last one, I believe.
> >>
> >>
> >> Meanwhile I noticed in the night logs:
> >> checkpoints are occurring too frequently (138 seconds apart)
> >> Consider increasing the configuration parameter "checkpoint_segments".
> >>
> >> Increased checkpoint_segments to 256 and reloaded the config.
> >>
> >>
> >> Best regards,
> >> Dmitriy Shalashov
> >>
> >>
> >> 2014-04-25 12:12 GMT+04:00 Ilya Kosmodemiansky
> >> <ilya.kosmodemian...@postgresql-consulting.com>:
> >>
> >>> Hi Dmitry,
> >>>
> >>> On Fri, Apr 25, 2014 at 9:47 AM, Дмитрий Шалашов <skau...@gmail.com> wrote:
> >>> > cancelled autovacuum and it seems to help.
> >>>
> >>> > In the morning autovacuum was back. And then it finished and I went to
> >>> > work.
> >>>
> >>> Actually, these two things are tightly bound and there is no chance to
> >>> avoid vacuum, you can only postpone it; this kind of work is eventually
> >>> supposed to be done.
> >>>
> >>> What you really need to do as a first thing is configure your
> >>> autovacuum aggressively enough and then maybe ionice autovacuum
> >>> instead of the mission-critical checkpointer or bgwriter.
> >>>
> >>> Which exact values do you have for the following settings:
> >>>
> >>> autovacuum_analyze_scale_factor
> >>> autovacuum_analyze_threshold
> >>> autovacuum_freeze_max_age
> >>> autovacuum_max_workers
> >>> autovacuum_naptime
> >>> autovacuum_vacuum_cost_delay
> >>> autovacuum_vacuum_cost_limit
> >>> autovacuum_vacuum_scale_factor
> >>> autovacuum_vacuum_threshold
> >>> log_autovacuum_min_duration
> >>>
> >>> ?
> >>>
> >>> Best regards, Ilya
> >>> >
> >>> > Best regards,
> >>> > Dmitriy Shalashov
> >>>
> >>>
> >>> --
> >>> Ilya Kosmodemiansky,
> >>>
> >>> PostgreSQL-Consulting.com
> >>> tel. +14084142500
> >>> cell. +4915144336040
> >>> i...@postgresql-consulting.com
>
> --
> Ilya Kosmodemiansky,
>
> PostgreSQL-Consulting.com
> tel. +14084142500
> cell. +4915144336040
> i...@postgresql-consulting.com
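On the checkpoint side, the "checkpoints are occurring too frequently (138 seconds apart)" warning quoted above is governed by a small set of settings; a rough sketch for a pre-9.5 server that still has checkpoint_segments (values illustrative, 256 being the value already chosen in the thread):

    checkpoint_segments = 256            # ~256 x 16 MB = 4 GB of WAL before a forced checkpoint
    checkpoint_timeout = 10min           # also force a checkpoint at least every 10 minutes
    checkpoint_completion_target = 0.9   # spread checkpoint writes over ~90% of the interval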
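To verify that a more aggressive autovacuum actually keeps up, dead-tuple counts and running workers can be watched with queries along these lines (column names assume PostgreSQL 9.2 or later):

    -- tables with the most dead tuples, and when autovacuum last touched them
    SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;

    -- autovacuum workers currently running
    SELECT pid, query_start, query
      FROM pg_stat_activity
     WHERE query LIKE 'autovacuum:%';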