> Well, your throughput on this machine is horrible. It looks like with
> 8.1 all your time is sys + cpu on your CPUs, while with 8.3 you've
> got more idle and more I/O wait, which tells me that 8.3 is smarter
> about vacuuming, so it's spending less time working the CPUs and more
> time waiting.
Well, I can confirm the problem is caused by indices here as well - I
reindex twice a month because of that (and because of general
performance problems caused by large indexes). Maybe it is time to
focus on index reusability - since HOT is already there, and
autovacuum does the job too.
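For readers following along, the twice-monthly reindex mentioned above can be done per table. A minimal sketch, using the `cpe` table named elsewhere in this thread:

```sql
-- REINDEX rebuilds all indexes on the table from scratch, discarding
-- accumulated bloat. Note that it takes an exclusive lock on the
-- table, so schedule it in a low-traffic window.
REINDEX TABLE cpe;

-- VACUUM ANALYZE afterwards refreshes planner statistics.
VACUUM ANALYZE cpe;
```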
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
On Thu, Jan 8, 2009 at 10:43 AM, Dan Armbrust
wrote:
> On PostgreSQL 8.1, while a long vacuum is running, the output of
> vmstat 10 looks like this (sorry, can't format this very well in this
> e-mail client):
>
> r b swpd free buff cache si so bi bo
> in cs us
On PostgreSQL 8.1, while a long vacuum is running, the output of
vmstat 10 looks like this (sorry, can't format this very well in this
e-mail client):
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 2112 53732 4388 116340400 13524 13
On Wed, Jan 7, 2009 at 10:26 AM, Dan Armbrust
wrote:
> I'm no closer to a solution, but here are some additional data points
> - all taken on Fedora Core 6.
So, what does vmstat 10 say when you're running the "long" vacuum on
each system?
I'm no closer to a solution, but here are some additional data points
- all taken on Fedora Core 6.
Postgres 8.1 built from source. Autovacuum disabled.
Create Empty Database.
Run our load on the system for 2 hours to populate and exercise the database.
Run Vacuum. Takes more than a minute.
R
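The vacuum timing in those steps can be reproduced from psql. A minimal sketch, using standard psql features:

```sql
-- \timing reports client-side elapsed time for each statement;
-- VERBOSE prints the per-table removable/nonremovable row counts
-- quoted elsewhere in this thread.
\timing
VACUUM VERBOSE;
```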
On Tue, Jan 6, 2009 at 3:36 PM, Tom Lane wrote:
> "Dan Armbrust" writes:
>> INFO: "cpe": found 415925 removable, 50003 nonremovable row versions
>> in 10849 pages
>
>> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
>> that is bloating and/or corrupting the indexes like this?
"Dan Armbrust" writes:
> INFO: "cpe": found 415925 removable, 50003 nonremovable row versions
> in 10849 pages
> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
> that is bloating and/or corrupting the indexes like this?
You're focusing on the indexes when the problem is dea
On Tue, Jan 6, 2009 at 2:07 PM, Dan Armbrust
wrote:
>
> Actually, the customer reported problem is that when they enable
> autovacuum, the performance basically tanks because vacuum runs so
> slow they can't bear to have it run frequently.
Actually this is kinda backwards. What's happening is th
>
> Obviously the choice of operating system has no impact on the contents of
> your index.
>
> A better question might be, what did your application or maintenance
> procedures do different in the different tests?
>
>
> --
> Alan
Our problem for a long time has been assuming the "obvious". But w
> On Tue, Jan 6, 2009 at 1:39 PM, Dan Armbrust
> wrote:
>> Here is an interesting new datapoint.
>>
>> Modern Ubuntu distro - PostgreSQL 8.1. SATA drive. No Raid. Cannot
>> reproduce slow vacuum performance - vacuums take less than a second
>> for the whole database.
>>
>> Reinstall OS - Fedora
On Tuesday 06 January 2009, "Dan Armbrust"
wrote:
> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
> that is bloating and/or corrupting the indexes like this?
Obviously the choice of operating system has no impact on the contents of
your index.
A better question might be,
On Tue, Jan 6, 2009 at 3:01 PM, Alvaro Herrera
wrote:
> Dan Armbrust wrote:
>
>> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
>> that is bloating and/or corrupting the indexes like this?
>
> Postgres 8.1 was slow to vacuum btree indexes. My guess is that your
> indexes a
On Tue, Jan 6, 2009 at 1:39 PM, Dan Armbrust
wrote:
> Here is an interesting new datapoint.
>
> Modern Ubuntu distro - PostgreSQL 8.1. SATA drive. No Raid. Cannot
> reproduce slow vacuum performance - vacuums take less than a second
> for the whole database.
>
> Reinstall OS - Fedora Core 6 - P
Dan Armbrust wrote:
> What on earth could be going on between PostgreSQL 8.1 and Fedora 6
> that is bloating and/or corrupting the indexes like this?
Postgres 8.1 was slow to vacuum btree indexes. My guess is that your
indexes are so bloated that it takes a lot of time to scan them.
I think
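A quick way to check whether the indexes really have bloated to the point where scanning them dominates vacuum time. A minimal sketch, assuming the size functions available since 8.1 (the table name is the one from this thread):

```sql
-- List each index on "cpe" with its on-disk size; an index several
-- times larger than a freshly rebuilt one suggests bloat that 8.1's
-- vacuum will scan in full on every run.
SELECT c2.relname AS index_name,
       pg_size_pretty(pg_relation_size(c2.oid)) AS index_size
FROM pg_class c
JOIN pg_index i  ON i.indrelid = c.oid
JOIN pg_class c2 ON c2.oid = i.indexrelid
WHERE c.relname = 'cpe';
```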
Here is an interesting new datapoint.
Modern Ubuntu distro - PostgreSQL 8.1. SATA drive. No Raid. Cannot
reproduce slow vacuum performance - vacuums take less than a second
for the whole database.
Reinstall OS - Fedora Core 6 - PostgreSQL 8.1. Push data through
PostgreSQL for a couple hours (
On Tue, Dec 30, 2008 at 10:14 AM, Dan Armbrust
wrote:
>
> On paper, their hardware is plenty fast for their workload. Out of
> hundreds of sites, all running the same software putting load on the
> database, this is only the second time where we have seen this odd
> behaviour of very slow vacuums
>
>> Their workaround had been to run a daily autovacuum at the lowest load
>> time of day, to cause the least disruption.
>
> What is a "daily autovacuum"? It sounds like some tables just need
> vacuuming more often. If they find that the system is not responsive
> during that, it tells us that
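One set of knobs worth checking when autovacuum either never keeps up or makes the system unresponsive while it runs. An illustrative postgresql.conf fragment - the parameter names are the standard 8.1-era autovacuum/cost settings, but the values are only examples to tune from:

```
# Run the autovacuum daemon regularly rather than once a day...
autovacuum = on
autovacuum_naptime = 60            # seconds between checks

# ...but throttle each pass so it doesn't starve foreground queries.
autovacuum_vacuum_cost_delay = 20  # ms to sleep after each cost batch
autovacuum_vacuum_cost_limit = 200 # work units between sleeps
```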
On Tue, Dec 30, 2008 at 10:37:04AM -0600, Dan Armbrust wrote:
> The way that they reported the problem to us was that if they enable
> autovacuum, when ever it runs (about 4 times an hour) it would stop
> processing the things it needed to process, due to table lock
> contention for several minute
On Tue, Dec 30, 2008 at 9:32 AM, Dan Armbrust
wrote:
> Haven't looked at that yet on this particular system. Last time, on
> different hardware when this occurred the vmstat 'wa' column showed
> very large values while vacuum was running. I don't recall what the
> bi/bo columns indicated.
defin
On Tue, Dec 30, 2008 at 9:47 AM, Scott Marlowe wrote:
> Keep in mind, hdparm hits the drive directly, not through the
> filesystem. I use bonnie++ or iozone to test I/O.
Also dd and vmstat together.
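The dd half of that combination can be sketched like this (the file name and sizes are arbitrary, for illustration only; run `vmstat 1` in a second terminal while it writes, and point the test file at the database volume rather than /tmp):

```shell
# Sequential-write smoke test: write ~10 MB in 8 kB blocks (Postgres'
# page size) and let dd report throughput on stderr when it finishes.
TESTFILE=./ddtest.bin
dd if=/dev/zero of="$TESTFILE" bs=8k count=1280
sync
rm -f "$TESTFILE"
```

For a number that reflects the disk rather than the page cache, make the file comfortably larger than RAM (or add `oflag=direct` where the filesystem supports it).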
>> INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
>> in 7 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> There were 280173 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 5.35s/0.99u sec elapsed 724.38 sec.
>
> How many idle transactions are th
>> INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
>> in 7 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> There were 280173 unused item pointers.
>> 0 pages are entirely empty.
>> CPU 5.35s/0.99u sec elapsed 724.38 sec.
>>
>> Then, running vacuum again imme
On Tue, Dec 30, 2008 at 9:10 AM, Dan Armbrust
wrote:
> To follow up on an old thread that I started - I had a customer who
> had a system where manual vacuum runs were taking a very long time to
> run. I was seeing output like this:
>
> INFO: "cpe": found 95498 removable, 18757 nonremovable row
To follow up on an old thread that I started - I had a customer who
had a system where manual vacuum runs were taking a very long time to
run. I was seeing output like this:
INFO: "cpe": found 95498 removable, 18757 nonremovable row versions
in 7 pages
DETAIL: 0 dead row versions cannot be