On 02/03/2016 19:40, Alvaro Herrera wrote:
Scott Marlowe wrote:
On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo wrote:
... or maybe add some more RAM to have more disk caching (if you're on
*nix); this worked for me in the past... even if IMHO it's more a
temporary "patch" while upgrading (if it can't be done in a hurry)
Scott Marlowe wrote:
> On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo wrote:
> > ... or maybe add some more RAM to have more disk caching (if you're on
> > *nix); this worked for me in the past... even if IMHO it's more a
> > temporary "patch" while upgrading (if it can't be done in a hurry)
On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo wrote:
> On 02/03/2016 16:49, Scott Marlowe wrote:
>>
>> On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe wrote:
>>>
>>> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
Hi.
I've noticed that an autovacuum process ran for more than 10 minutes; during this
time zabbix logged more than 90% IO disk utilization on the db volume
On 02/03/2016 16:49, Scott Marlowe wrote:
On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe wrote:
On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
Hi.
I've noticed that an autovacuum process ran for more than 10 minutes; during this
time zabbix logged more than 90% IO disk utilization on the db volume
On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe wrote:
> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
>> Hi.
>>
>> I've noticed that an autovacuum process ran for more than 10 minutes; during this
>> time zabbix logged more than 90% IO disk utilization on the db volume
>>
>> ===>29237 2016-03
On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
> Hi.
>
> I've noticed that an autovacuum process ran for more than 10 minutes; during this
> time zabbix logged more than 90% IO disk utilization on the db volume
>
> ===>29237 2016-03-02 15:17:23 EET 0 [24-1]LOG: automatic
> vacuum of table
Hi
2016-03-02 16:25 GMT+01:00 Artem Tomyuk :
> Hi.
>
> I've noticed that an autovacuum process ran for more than 10 minutes; during
> this time zabbix logged more than 90% IO disk utilization on the db volume
>
> ===>29237 2016-03-02 15:17:23 EET 0 [24-1]LOG: automatic
> vacuum of table "lb_
Hi.
I've noticed that an autovacuum process ran for more than 10 minutes; during this
time zabbix logged more than 90% IO disk utilization on the db volume
===>29237 2016-03-02 15:17:23 EET 0 [24-1]LOG:
automatic vacuum of table "lb_upr.public._reference32": index scans: 1
pages: 0 rem
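For anyone hitting the same symptom, autovacuum I/O can usually be throttled with the cost-based delay settings rather than turned off. A minimal sketch, not taken from this thread; the table name is hypothetical and per-table storage parameters need 8.4 or later:

    -- global defaults live in postgresql.conf:
    --   autovacuum_vacuum_cost_delay = 20ms
    --   autovacuum_vacuum_cost_limit = -1    (falls back to vacuum_cost_limit, 200)
    -- per-table override for one hot table (hypothetical name):
    ALTER TABLE public.some_busy_table
        SET (autovacuum_vacuum_cost_delay = 20,    -- ms to sleep per cost batch
             autovacuum_vacuum_cost_limit = 200);  -- cost accumulated before sleeping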
tim_wilson writes:
> On a 9.3.1 server, I have a key table, busy_table, that is hit by most
> transactions running on our system. One DB's copy of this table has 60K rows
> and 1/3 of that table's rows can be updated every minute.
> autovacuum_analyze_scale_factor is set to 0.02, so that analyze
On a 9.3.1 server, I have a key table, busy_table, that is hit by most
transactions running on our system. One DB's copy of this table has 60K rows
and 1/3 of that table's rows can be updated every minute.
autovacuum_analyze_scale_factor is set to 0.02, so that analyze runs
nearly every minute. Bu
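For context, this is the standard trigger rule; the 60K rows and the 0.02 scale factor are the poster's figures, the threshold of 50 is the default:

    -- autoanalyze fires once changed rows exceed:
    --   autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples
    --   = 50 + 0.02 * 60000 = 1250 rows
    -- With roughly 20000 rows updated per minute that is crossed within seconds,
    -- so analyze runs on nearly every naptime cycle (autovacuum_naptime = 1min).
    SELECT relname, last_autoanalyze, last_autovacuum
    FROM   pg_stat_user_tables
    WHERE  relname = 'busy_table';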
On 09/18/2013 10:42 AM, fburg...@radiantblue.com wrote:
My colleague is under the impression that dead tuples are only cleaned up via
VACUUM FULL, while I suggested that the autovacuum process was cleaning up
these dead tuples. Is this true?
You are correct. Only VACUUM FULL (or CLUSTER)
We are running PostgreSQL 9.1.6 with autovacuum = on and I am reporting on dead tuples using the pgstattuple extension. Each time I run the pgstattuple package our dead tuple counts decrease. My colleague is under the impression that dead tuples are only cleaned up via VACUUM FULL, while I suggested that the autovacuum process was cleaning up these dead tuples.
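Since pgstattuple is already in use above, this is the sort of query that shows dead tuples being reclaimed by plain (auto)vacuum; a minimal sketch with a hypothetical table name (CREATE EXTENSION needs 9.1+, which matches the 9.1.6 server mentioned):

    CREATE EXTENSION pgstattuple;
    SELECT tuple_count, dead_tuple_count, dead_tuple_percent, free_space
    FROM   pgstattuple('public.some_table');
    -- dead_tuple_count drops after a plain VACUUM; only VACUUM FULL or CLUSTER
    -- actually returns the freed space to the operating system.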
On Wednesday, January 23, 2013, AJ Weber wrote:
>
>
> Is there a skinny set of instructions on loading pg_stat_statements? Or
> should I just log them and review them from there?
>
Make sure you have installed contrib. (How you do that depends on how you
installed PostgreSQL in the first place.)
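A minimal sketch of the usual steps, assuming the contrib package is present (CREATE EXTENSION needs 9.1+; on older releases the contrib SQL script is run instead):

    -- postgresql.conf (requires a server restart):
    --   shared_preload_libraries = 'pg_stat_statements'
    CREATE EXTENSION pg_stat_statements;
    SELECT calls, total_time, rows, query
    FROM   pg_stat_statements
    ORDER  BY total_time DESC
    LIMIT  10;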
AJ Weber wrote:
> On 1/23/2013 2:13 PM, Jeff Janes wrote:
> >Scheduling a manual vacuum should be fine (but keep in mind that
> >vacuum has very different default cost_delay settings than autovacuum
> >does. If the server is completely idle that shouldn't matter, but if
> >it is only mostly idle
On 1/23/2013 2:13 PM, Jeff Janes wrote:
On Wed, Jan 23, 2013 at 8:53 AM, AJ Weber wrote:
I have a server that is IO-bound right now (it's 4 cores, and top indicates
CPU use rarely hits 25%, but I/O wait spikes above 25-40% regularly).
How long do the spikes last?
From what I can gather, a f
AJ Weber wrote:
> Is it possible that some spikes in IO could be attributable to
> the autovacuum process? Is there a way to check this theory?
Taking a look at the ps aux listing, pg_stat_activity, and pg_locks
should help establish a cause, or at least rule out a number of
possibilities. There
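A minimal sketch of that kind of check; the column names assume 9.0/9.1 (9.2 renamed procpid and current_query to pid and query):

    -- any autovacuum workers busy right now, and on what?
    SELECT procpid, datname, query_start, current_query
    FROM   pg_stat_activity
    WHERE  current_query LIKE 'autovacuum:%';

    -- anything stuck waiting on a lock?
    SELECT locktype, relation::regclass, mode, granted
    FROM   pg_locks
    WHERE  NOT granted;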
On Wed, Jan 23, 2013 at 8:53 AM, AJ Weber wrote:
> I have a server that is IO-bound right now (it's 4 cores, and top indicates
> CPU use rarely hits 25%, but I/O wait spikes above 25-40% regularly).
How long do the spikes last?
> The
> server is running postgresql 9.0 and tomcat 6. As I have me
On 23.01.2013, at 20:53, AJ Weber wrote:
> I have a server that is IO-bound right now (it's 4 cores, and top indicates
> CPU use rarely hits 25%, but I/O wait spikes above 25-40% regularly). The
> server is running postgresql 9.0 and tomcat 6. As I have mentioned in a
> previous thread,
I have a server that is IO-bound right now (it's 4 cores, and top
indicates CPU use rarely hits 25%, but I/O wait spikes above 25-40%
regularly). The server is running postgresql 9.0 and tomcat 6. As I
have mentioned in a previous thread, I can't alter the hardware to add
disks unfortunately,
"Anibal David Acosta" writes:
> The tables get inserts (in bulk every 3 minutes) and one delete per day (deleting
> records older than XX days)
No updates at all, just inserts and a daily delete?
If so, you're wasting your time even thinking about suppressing
autovacuum, because it won't fire on this
On 12.12.2011 16:25, Anibal David Acosta wrote:
I have a couple of tables with about 400 million records, increasing by about
5 million per day.
I think that disabling autovacuum on those tables and enabling a daily manual
vacuum (in some idle hour) will be better.
Am I right?
Possibly. If the
...@ringerc.id.au]
Sent: Monday, December 12, 2011 11:45 AM
To: Anibal David Acosta
CC: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] autovacuum, exclude table
Top-posting because this is context free:
You need to provide more info for anybody to help you. Are the tables
Top-posting because this is context free:
You need to provide more info for anybody to help you. Are the tables
append-only or are deletes/updates also performed? Also this:
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
On Dec 12, 2011 10:26 PM, "Anibal David Acosta" wrote:
> I h
I have a couple of tables with about 400 million records, increasing by about
5 million per day.
I think that disabling autovacuum on those tables and enabling a daily manual
vacuum (in some idle hour) will be better.
Am I right?
Is it possible to exclude some tables from autovacuum?
Tha
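For reference, per-table exclusion is possible from 8.4 on via storage parameters; a minimal sketch with a hypothetical table name (anti-wraparound vacuums will still run even with this set):

    ALTER TABLE public.big_append_only_table
        SET (autovacuum_enabled = false,
             toast.autovacuum_enabled = false);
    -- the replacement job for the idle hour, e.g. from cron:
    VACUUM ANALYZE public.big_append_only_table;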
On Wed, Dec 7, 2011 at 8:34 AM, Anibal David Acosta wrote:
> Hello, I have a postgres 9.0.2 installation.
>
> Everything works fine, but during some hours of the day I get several timeouts in my
> application (my application waits X seconds before throwing a timeout).
>
> Normally those hours are not times of intensive use, so
Hello, I have a postgres 9.0.2 installation.
Everything works fine, but during some hours of the day I get several timeouts in my
application (my application waits X seconds before throwing a timeout).
Normally those hours are not times of intensive use, so I think that autovacuum
could be the problem.
Is there any l
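One low-impact way to confirm or rule out autovacuum here is to make it log its work and compare timestamps with the timeouts; a minimal sketch (the setting exists since 8.3 and a reload is enough):

    -- postgresql.conf:
    --   log_autovacuum_min_duration = 0     # log every autovacuum/autoanalyze run
    --   (or e.g. 5000 to log only runs taking 5 s or more)
    SHOW log_autovacuum_min_duration;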
On Tue, Nov 22, 2011 at 10:55 PM, J Ramesh Kumar wrote:
> But the autovacuum is running frequently and it impacts the performance of my
> system (high CPU). You can see the autovacuum in pg_stat_activity.
Could you show us the system metrics that led you to believe it was
high CPU usage? Somet
On 02/12/11 07:18, Robert Haas wrote:
And also please share your views on my decision to disable autovacuum for
my application. I am planning to run the vacuum command daily on that small
table which has frequent updates.
Sounds like a bad plan.
If the table has frequent updates, vacuuming once
On Wed, Nov 23, 2011 at 12:55 AM, J Ramesh Kumar wrote:
> Why is autovacuum running even though I disabled it? Am I missing anything?
As Raghavendra says, anti-wraparound vacuum will always kick in to
prevent a database shutdown.
> And also please share your views on my decision to disable a
On Wed, Nov 23, 2011 at 11:25 AM, J Ramesh Kumar wrote:
> Hi,
>
> My application is performing 1600 inserts per second and 7 updates per
> second. The updates occur only in a small table which has only 6 integer
> columns. The inserts occur in all other daily tables. My application
> creates
Hi,
My application is performing 1600 inserts per second and 7 updates per
second. The updates occur only in a small table which has only 6 integer
columns. The inserts occur in all other daily tables. My application
creates around 75 tables per day. No updates/deletes occur in those 75
d
Alexis Le-Quoc writes:
> On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane wrote:
>> Hmph. Is there other stuff being run on the same instance? Are there a
>> whole lot of active PG processes? Maybe Amazon isn't really giving you
>> a whole 8GB, or there are weird address space r
On Wed, Aug 10, 2011 at 2:54 PM, Tom Lane wrote:
> Alexis Le-Quoc writes:
>> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
>>> However, I find it a bit odd that you're getting this failure in what
>>> appears to be a 64-bit build. That means you're not running out of
>>> address space, so yo
Alexis Le-Quoc writes:
> On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
>> However, I find it a bit odd that you're getting this failure in what
>> appears to be a 64-bit build. That means you're not running out of
>> address space, so you must actually be out of RAM+swap
On Wed, Aug 10, 2011 at 1:17 PM, Tom Lane wrote:
> Alexis Le-Quoc writes:
>> I've been hitting an "out of memory" error during autovacuum of
>> relatively large tables (compared to the amount of RAM available).
>
>> The error message is:
>> [10236]: [1-1] user=,db=,remote= ERROR: out of memory
>>
Alexis Le-Quoc writes:
> I've been hitting an "out of memory" error during autovacuum of
> relatively large tables (compared to the amount of RAM available).
> The error message is:
> [10236]: [1-1] user=,db=,remote= ERROR: out of memory
> [10236]: [2-1] user=,db=,remote= DE
Greetings,
I've been hitting an "out of memory" error during autovacuum of
relatively large tables (compared to the amount of RAM available). I'm
trying to trace the cause of the issue; the answer is somewhere below
and I don't know how to interpret the data. I can solve the issue
right now by usin
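One knob worth knowing here, independent of how this particular case was resolved: each (auto)vacuum worker may allocate up to maintenance_work_mem for its dead-tuple array, and several workers can do so at once. A minimal sketch, values purely illustrative:

    SHOW maintenance_work_mem;
    SHOW autovacuum_max_workers;
    -- postgresql.conf: keep autovacuum_max_workers * maintenance_work_mem
    -- comfortably inside available RAM, for example:
    --   maintenance_work_mem = 256MB
    --   autovacuum_max_workers = 3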
Thanks for your information. I am using postgresql 8.4 and this
version already supports HOT. The frequently updated
columns are not indexed columns. So, the frequent updates should not
create many dead records. I also did a small test. If I don't execute
vacuum, the number of pages of
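A quick way to check whether those updates really take the HOT path (standard statistics columns, present in 8.4; the table name is hypothetical):

    SELECT relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM   pg_stat_user_tables
    WHERE  relname = 'frequently_updated_table';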
In my experiment, I need about 1~3 min to finish the analyze operation
on the big table (which depends on the value of vacuum_cost_delay). I
am not surprised because this table is a really big one (now, it has
over 200M records).
However, most of my concern is with the behavior of analyze/vacuum.
Excerpts from kuopo's message of Sun Nov 21 11:15:52 -0300 2010:
> In my experiment, I need about 1~3 min to finish the analyze operation
> on the big table (which depends on the value of vacuum_cost_delay). I
> am not surprised because this table is a really big one (now, it has
> over 200M record
> Excerpts from kuopo's message of Thu Nov 18 04:10:36 -0300 2010:
>> However, when I analyze the table A, the autovacuum or vacuum on the
>> table B cannot find any removable row version (the number of
>> nonremovable row versions and pages keeps increasing). After the
>> analysis finishes, the s
Excerpts from kuopo's message of Thu Nov 18 04:10:36 -0300 2010:
> Hi,
>
> Thanks for your response. I've checked it again and found that the
> main cause is the execution of ANALYZE. As I have mentioned, I have
> two tables: table A is a big one (around 10M~100M records) for log
> data and table
Hi,
Thanks for your response. I've checked it again and found that the
main cause is the execution of ANALYZE. As I have mentioned, I have
two tables: table A is a big one (around 10M~100M records) for log
data and table B is a small one (around 1k records) for keeping some
current status. There a
Excerpts from kuopo's message of Fri Nov 12 05:01:24 -0300 2010:
> Hi,
>
> I have a question about the behavior of autovacuum. When I have a big
> table A which is being processed by autovacuum, I also manually use
> (full) vacuum to clean another table B. Then I found that I always got
> somethin
Hi,
I have a question about the behavior of autovacuum. When I have a big
table A which is being processed by autovacuum, I also manually use
(full) vacuum to clean another table B. Then I found that I always got
something like “found 0 removable, 14283 nonremovable row”. However,
if I stop the au
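One way to see what is preventing row removal in a situation like this is to look for the oldest open transaction, since any long-running transaction (a lengthy ANALYZE included) holds back the cleanup horizon; the column names assume 8.3-9.1:

    SELECT procpid, xact_start, now() - xact_start AS xact_age, current_query
    FROM   pg_stat_activity
    WHERE  xact_start IS NOT NULL
    ORDER  BY xact_start
    LIMIT  5;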
On Thu, May 27, 2010 at 9:01 AM, venu madhav wrote:
> Thanks for the reply..
> I am using postgres 8.0.1 and since it runs on a client box, I
> can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.
You've pretty much made autovac run every 5 hours with that setting.
What
Thanks for the reply..
I am using postgres 8.0.1 and since it runs on a client box, I
can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.
On Thu, May 27, 2010 at 8:03 PM, Bruce Momjian wrote:
> venu madhav wrote:
> > Hi All,
> >In my application we are using po
Excerpts from venu madhav's message of Fri May 21 05:38:43 -0400 2010:
> Hi All,
>In my application we are using postgres which runs on an embedded
> box. I have configured autovacuum to run once for every one hour. It has 5
> different databases in it. When I saw the log messages, I found
venu madhav wrote:
> Thanks for the reply..
> I am using postgres 8.0.1 and since it runs on a client box, I
> can't upgrade it. I've set the auto vacuum nap time to 3600 seconds.
That is an older version of autovacuum that wasn't very capable.
--
venu madhav wrote:
> Hi All,
>In my application we are using postgres which runs on an embedded
> box. I have configured autovacuum to run once for every one hour. It has 5
> different databases in it. When I saw the log messages, I found that it is
> running autovacuum on one database ever
One more question: "Is this expected?"
On Fri, May 21, 2010 at 3:08 PM, venu madhav wrote:
> Hi All,
>In my application we are using postgres which runs on an embedded
> box. I have configured autovacuum to run once for every one hour. It has 5
> different databases in it. When I saw the l
Hi All,
In my application we are using postgres which runs on an embedded
box. I have configured autovacuum to run once for every one hour. It has 5
different databases in it. When I saw the log messages, I found that it is
running autovacuum on one database every hour. As a result, on my da
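To spell out the arithmetic behind the "every 5 hours" remark earlier in this thread, with the nap time applied to one database at a time:

    -- 5 databases * autovacuum nap time of 3600 s = 18000 s
    -- => each individual database gets visited roughly once every 5 hours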
On Sat, May 1, 2010 at 1:11 PM, Greg Smith wrote:
> Robert Haas wrote:
>>
>> I don't have a stake in the ground on what the right settings are, but
>> I think it's fair to say that if you vacuum OR analyze much less
>> frequently than what we recommend by default, it might break.
>>
>
> I think th
On Sat, May 1, 2010 at 1:17 PM, Scott Marlowe wrote:
> On Sat, May 1, 2010 at 1:08 PM, Robert Haas wrote:
>> On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe wrote:
>>> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus wrote:
Which is the opposite of my experience; currently we have several
>
On Sat, May 1, 2010 at 1:08 PM, Robert Haas wrote:
> On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe wrote:
>> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus wrote:
>>> Which is the opposite of my experience; currently we have several
>>> clients who have issues which required more-frequent anal
On Sat, May 1, 2010 at 12:13 PM, Scott Marlowe wrote:
> On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus wrote:
>> Which is the opposite of my experience; currently we have several
>> clients who have issues which required more-frequent analyzes on
>> specific tables. Before 8.4, vacuuming more fre
Greg Smith writes:
> If anything, I'd expect people to want to increase how often it runs,
> for tables where much less than 20% dead is a problem. The most common
> situation I've seen where that's the case is when you have a hotspot of
> heavily updated rows in a large table, and this may ma
Robert Haas wrote:
I don't have a stake in the ground on what the right settings are, but
I think it's fair to say that if you vacuum OR analyze much less
frequently than what we recommend by default, it might break.
I think the default settings are essentially minimum recommended
frequenci
On Fri, Apr 30, 2010 at 4:50 PM, Josh Berkus wrote:
> Which is the opposite of my experience; currently we have several
> clients who have issues which required more-frequent analyzes on
> specific tables. Before 8.4, vacuuming more frequently, especially on
> large tables, was very costly; vacu
On Wed, Apr 28, 2010 at 8:20 AM, Thomas Kellerer wrote:
> Rick, 22.04.2010 22:42:
>>
>> So, in a large table, the scale_factor is the dominant term. In a
>> small table, the threshold is the dominant term. But both are taken into
>> account.
>>
>> The default values are set for small tables; it is
On Fri, Apr 30, 2010 at 6:50 PM, Josh Berkus wrote:
> Which is the opposite of my experience; currently we have several
> clients who have issues which required more-frequent analyzes on
> specific tables.
That's all fine, but probably not too relevant to the original
complaint - the OP backed of
Josh Berkus wrote:
> #autovacuum_vacuum_scale_factor = 0.2
>
> This is set because in my experience, 20% bloat is about the level at
> which bloat starts affecting performance; thus, we want to vacuum at
> that level but not sooner. This does mean that very large tables which
> never have mor
> My guess is that the reason we run ANALYZE more frequently than vacuum
> (with the default settings) is that ANALYZE is pretty cheap. In many
> cases, if the statistical distribution of the data hasn't changed
> much, then it's not really necessary, but it doesn't cost much either.
> And for c
akp geek, 28.04.2010 16:37:
We have 8.4; which of the AUTOVACUUM PARAMETERS can be set to handle an individual
table?
All documented here:
http://www.postgresql.org/docs/current/static/sql-createtable.html
Check out the manual:
http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#AUTOVACUUM
Cheers,
Ken
On Wed, Apr 28, 2010 at 10:37:35AM -0400, akp geek wrote:
> Hi -
>don't want to side track the discussion. We have 8.4, which of
> AUTOVACUUM PARAMETERS can be set to handle
Hi -
don't want to sidetrack the discussion. We have 8.4; which of the
AUTOVACUUM PARAMETERS can be set to handle an individual table? I ran into
bloat with a small table only. Now the issue is being resolved.
Regards
On Wed, Apr 28, 2010 at 10:20 AM, Thomas Kellerer wrote:
> Rick, 22.04.2010
Rick, 22.04.2010 22:42:
So, in a large table, the scale_factor is the dominant term. In a
small table, the threshold is the dominant term. But both are taken into
account.
The default values are set for small tables; it is not being run for
large tables.
With 8.4 you can adjust the autovacuum
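Written out, the vacuum trigger being discussed is (defaults shown, table sizes illustrative):

    -- autovacuum vacuums a table once its dead rows exceed:
    --   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
    -- with the defaults 50 and 0.2:
    --   a 1,000-row table qualifies after ~250 dead rows,
    --   a 100,000,000-row table only after ~20,000,050 dead rows,
    -- which is why the scale factor dominates for large tables and the
    -- threshold for small ones.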
Robert Haas wrote:
> Rick wrote:
>> Since vacuum just recovers space, that doesn't seem to be nearly
>> as critical for performance?
>
> That doesn't really match my experience. Without regular
> vacuuming, tables and indices end up being larger than they ought
> to be and contain large amount
On Thu, Apr 22, 2010 at 4:42 PM, Rick wrote:
> On Apr 22, 2:55 pm, robertmh...@gmail.com (Robert Haas) wrote:
>> On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
>> > I have a DB with small and large tables that can go up to 15G.
>> > For performance benefits, it appears that analyze has much less c
Rick wrote:
> So, in a large table, the scale_factor is the dominant term. In a
> small
> table, the threshold is the dominant term. But both are taken into
> account.
Correct.
> The default values are set for small tables; it is not being run for
> large tables.
So decrease the scale factor an
On Apr 22, 2:55 pm, robertmh...@gmail.com (Robert Haas) wrote:
> On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
> > I have a DB with small and large tables that can go up to 15G.
> > For performance benefits, it appears that analyze has much less cost
> > than vacuum, but the same benefits?
>
> Err
On Wed, Apr 21, 2010 at 11:06 AM, Rick wrote:
> I have a DB with small and large tables that can go up to 15G.
> For performance benefits, it appears that analyze has much less cost
> than vacuum, but the same benefits?
Err, no. ANALYZE gathers statistics for the query planner; VACUUM
clears out
I have a DB with small and large tables that can go up to 15G.
For performance benefits, it appears that analyze has much less cost
than vacuum, but the same benefits?
I can’t find any clear recommendations for frequencies and am
considering these parameters:
Autovacuum_vacuum_threshold = 5
Au
Sent: Monday, March 01, 2010 6:51 AM
To: Plugge, Joe R.
Cc: Scott Marlowe; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Autovacuum Tuning advice
storing all fields as varchar surely doesn't make:
- indices small,
- the thing fly,
- tables small.
...
storing all fields as varchar surely doesn't make:
- indices small,
- the thing fly,
- tables small.
...
public | log_events_y2010m02 | 356 GB | 610 GB
-----Original Message-----
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Monday, March 01, 2010 12:58 AM
To: Plugge, Joe R.
Cc: pgsql-performanc
Joe wrote:
> I have a very busy system that takes about 9 million inserts per day and each
> record gets updated at least once after the insert (all for the same table);
> there are other tables that get hit but not as severely. As suspected I am
> having a problem with table bloat.
On Sun, Feb 28, 2010 at 8:09 PM, Plugge, Joe R. wrote:
> I have a very busy system that takes about 9 million inserts per day and each
> record gets updated at least once after the insert (all for the same
> table); there are other tables that get hit but not as severely. As
> suspected I
I have a very busy system that takes about 9 million inserts per day and each
record gets updated at least once after the insert (all for the same
table); there are other tables that get hit but not as severely. As suspected
I am having a problem with table bloat. Any advice on how to be m
On Wed, Jul 29, 2009 at 12:47 PM, Doug Hunley wrote:
> When reviewing the vacuum logs, I notice that on any given day
> autovacuum only seems to touch four of the tables in one of our
> schemas (not counting toast tables). However, if I look at the
> pgstatspack output for the same day, I see that
When reviewing the vacuum logs, I notice that on any given day
autovacuum only seems to touch four of the tables in one of our
schemas (not counting toast tables). However, if I look at the
pgstatspack output for the same day, I see that there are plenty of
other tables receiving a high number of i
Tom Lane [...@sss.pgh.pa.us] wrote:
Oh, and don't forget the more-complete pg_locks state. We'll want all
the columns of pg_locks, not just the ones you showed before.
auto vacuum of ts_user_sessions_map has been running for > 17 hours.
This table has 2,204,488 rows. I hope that I've captured e
Brian Cox writes:
> OK. You mentioned strace. It's got a lot of options; any in particular
> that would be useful if this happens again?
Oh, and don't forget the more-complete pg_locks state. We'll want all
the columns of pg_locks, not just the ones you showed before.
r
Brian Cox writes:
> OK. You mentioned strace. It's got a lot of options; any in particular
> that would be useful if this happens again?
I'd just do "strace -p processID" and watch it for a little while.
If it's not hung, you'll see the process issuing kernel calls at
some rate or other.
If it
Tom Lane [...@sss.pgh.pa.us] wrote:
No, no, and no. What would be best is to find out what actually
happened. The evidence is gone now, but if you see it again please
take a closer look.
OK. You mentioned strace. It's got a lot of options; any in particular
that would be useful if this happens
Brian Cox writes:
> Do you think it would be better to manually
> vacuum these tables? If so, would it be best to disable autovacuum of
> them? And while I'm at it, if you disable autovacuum of the master table
> will that disable it for the actual partitions?
No, no, and no. What would be be
Tom Lane [...@sss.pgh.pa.us] wrote:
They might have been blocked behind some other process that was sitting
in an open transaction for some reason. The other likely cause is badly
chosen autovacuum delay, but I think that was already covered.
Well, after I noticed this running for a while, I shu
Brian Cox writes:
> Tom Lane [...@sss.pgh.pa.us] wrote:
>> Are those processes actually doing anything, or just waiting? strace
>> or local equivalent would be the most conclusive check.
> These must not have been hung, because they finally completed (after
> 10-15 hrs - some time between 11pm
Tom Lane [...@sss.pgh.pa.us] wrote:
Are those processes actually doing anything, or just waiting? strace
or local equivalent would be the most conclusive check.
These must not have been hung, because they finally completed (after
10-15 hrs - some time between 11pm and 8am). Question is why doe
Brian Cox writes:
> autovacuum has been running on 2 tables for > 5 hours. These tables are
> not huge (see below). For the past ~1 hour, I've shut off all other
> activity on this database. The other table being vacuumed has more rows
> (1897810). Anyone have any ideas about why this is taking
Alvaro Herrera [alvhe...@commandprompt.com] wrote:
What's vacuum_cost_delay?
#vacuum_cost_delay = 0          # 0-1000 milliseconds
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20    # 0-10000 credits
Brian Cox wrote:
> autovacuum has been running on 2 tables for > 5 hours. These tables are
> not huge (see below). For the past ~1 hour, I've shut off all other
> activity on this database. The other table being vacuumed has more rows
> (1897810). Anyone have any ideas about why this is takin
autovacuum has been running on 2 tables for > 5 hours. These tables are
not huge (see below). For the past ~1 hour, I've shut off all other
activity on this database. The other table being vacuumed has more rows
(1897810). Anyone have any ideas about why this is taking so long?
Thanks,
Brian
Thanks for the help. The applied solution follows. We will be taking a
number of maintenance steps to manage these very high update tables
which I will summarize later as I suspect we are not the only ones with
this challenge.
http://www.postgresql.org/docs/current/interactive/routine-vacuum
On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:
> This makes sense. What queries can I run to see how close to the limit
> we are? We need to determine if we should stop the process which
> updates and inserts into this table until after the critical time this
> afternoon wh
This makes sense. What queries can I run to see how close to the limit
we are? We need to determine if we should stop the process which
updates and inserts into this table until after the critical time this
afternoon when we can perform the required maintenance on this table.
hubert depesz l
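A minimal sketch of the kind of check being asked for, comparing transaction-ID age with autovacuum_freeze_max_age (200 million by default):

    SHOW autovacuum_freeze_max_age;

    -- per database:
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM   pg_database
    ORDER  BY 2 DESC;

    -- oldest tables in the current database:
    SELECT relname, age(relfrozenxid) AS xid_age
    FROM   pg_class
    WHERE  relkind = 'r'
    ORDER  BY 2 DESC
    LIMIT  10;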
On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:
>
> Does anyone know what will cause this behavior for autovacuum?
You're probably approaching the wraparound limit in some database.
If you think you can't afford the overhead when users are accessing
the system, when are you vacu
On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:
> Does anyone know what will cause this behavior for autovacuum?
http://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html
-> autovacuum_freeze_max_age
depesz
--
Linkedin: http://www.linkedin.com/in/depesz
We are attempting to turn off autovacuum but it keeps coming back. We
can't afford the performance hit from vacuum while end users are
accessing our system.
Postgresql Version: 8.3.3
OS: Linux 2.6.18-53.el5PAE #1 SMP
Running PostgreSQL setting:
sspg=# show autovacuum;
autovacuum
-
On Wed, 2008-08-13 at 21:30 -0400, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> It seems like we'll want to do it somehow. Perhaps the cleanest way is
> >> to incorporate toast-table settings in the reloptions of the parent
> >> table. Otherwise dump/relo