Phoenix Kiula wrote:
> We spent some time doing some massive cleaning of the data in
> this table and brought it down to around 630 million rows. Overall
> size of the table including indexes is still about 120GB.
Deleting rows that you don't need is good, and once a vacuum has a
chance to run
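A plain VACUUM makes the space from deleted rows reusable but usually does not shrink the file on disk, which is why the table can stay around 120GB after a big delete. A quick way to watch the effect is to check sizes before and after (a sketch; `links` is an assumed table name, not from the original thread):

```sql
-- Plain VACUUM marks dead rows reusable but does not return
-- disk space to the OS; the table stops growing instead.
VACUUM ANALYZE links;

-- Check how much space the table and its indexes occupy afterwards.
SELECT pg_size_pretty(pg_total_relation_size('links')) AS table_plus_indexes,
       pg_size_pretty(pg_relation_size('links'))       AS heap_only;
```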
Thank you for the very specific idea of pg_stat_user_tables.
This is what I see (the output is also included in the email below, but
this is easier to read) --
https://gist.github.com/anonymous/53f748a8c6c454b804b3
The output here (might become a jumbled mess)--
=# SELECT * from pg_stat_user_tables where
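The query above is cut off in the archive. A common form of that kind of check looks like this (a hypothetical reconstruction, not the poster's actual WHERE clause; `links` is an assumed table name):

```sql
-- Dead-tuple counts and last-vacuum timestamps for one table,
-- from the cumulative statistics view pg_stat_user_tables.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze
FROM   pg_stat_user_tables
WHERE  relname = 'links';
```

A large and growing `n_dead_tup` with a stale `last_autovacuum` is the usual sign that vacuuming is not keeping up on that table.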
On Sun, Aug 3, 2014 at 3:20 AM, Phoenix Kiula wrote:
> Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
>
> One of my large tables (101 GB on disk, about 1.1 billion rows) used
> to take too long to vacuum. Not sure if it's an index corruption
> issue. But I tried VACUUM FULL ANALYZE
On 08/03/2014 08:55 PM, Jeff Janes wrote:
Does RAID 1 mean you have only two disks in your array? If so, that is
woefully inadequate for your apparent workload. The amount of RAM
doesn't inspire confidence, either.
Phoenix, I agree that this is probably the core of the problem you're
having. a 1
On Saturday, August 2, 2014, Phoenix Kiula wrote:
> Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
>
> One of my large tables (101 GB on disk, about 1.1 billion rows) used
> to take too long to vacuum.
Too long for what? Rome wasn't built in a day; it might not get vacuumed
in
On 08/02/2014 07:37 PM, Phoenix Kiula wrote:
> In your original post you said it was stopping on pg_class so now I am
> confused.
No need to be confused. The vacuum thing is a bit tricky for laymen
like myself. The "pg_class" seemed to be associated with this table.
Anyway, even before the upgrade, the vacuum was stopping at this table
and t
On 08/02/2014 07:02 PM, Phoenix Kiula wrote:
Thanks John.
So what are the right settings? Anyway, right now PostgreSQL is
servicing only one main connection, which is the REINDEX. All other
stuff is switched off; no one else is connecting to the DB.
My issue with this table was that the vacuum process would stop at this
table and take hours. So
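When vacuum seems to "stop" at one big table, a common cause on 9.0 is autovacuum's cost-based delay throttling, which deliberately sleeps between page batches. It can be relaxed per table with storage parameters (a sketch under the assumption that the problem table is named `links`):

```sql
-- Disable cost-delay throttling for this one table so autovacuum
-- runs at full speed on it (illustrative values).
ALTER TABLE links SET (autovacuum_vacuum_cost_delay = 0);

-- Trigger autovacuum after ~2% of rows are dead instead of the
-- default 20%, so each run has far less work to do.
ALTER TABLE links SET (autovacuum_vacuum_scale_factor = 0.02);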
On 8/2/2014 6:20 PM, Phoenix Kiula wrote:
PS: CentOS 6 64-bit, 4 GB of RAM, RAID 1 Raptor disks. postgresql.conf
and TOP output during the running of the REINDEX are below:
POSTGRESQL.CONF:
max_connections = 180
superuser_reserved_connections = 5
shared_buffers
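The config preview is cut off at shared_buffers. For reference, commonly suggested starting points for a dedicated 4 GB machine of that era look roughly like this (illustrative values only, not the poster's actual configuration):

```
shared_buffers = 1GB           # often ~25% of RAM on a dedicated box
effective_cache_size = 3GB     # estimate of what the OS will cache
maintenance_work_mem = 256MB   # speeds up VACUUM and index builds
checkpoint_segments = 16       # 9.0-era setting; spreads out checkpoint I/O
```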
On 08/02/2014 06:20 PM, Phoenix Kiula wrote:
Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
One of my large tables (101 GB on disk, about 1.1 billion rows) used
to take too long to vacuum. Not sure if it's an index corruption
issue. But I tried VACUUM FULL ANALYZE as recommended in another
thread yesterday, which took 5 hour
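Worth noting: on 9.0, VACUUM FULL rewrites the entire table while holding an exclusive lock, so five-plus hours on a 101 GB table is not surprising. For routine maintenance a plain VACUUM with more memory is usually the better choice (a sketch; `links` is an assumed table name):

```sql
-- More memory for the dead-tuple list means fewer passes over the
-- indexes on a very large table (session-level setting).
SET maintenance_work_mem = '512MB';

-- Plain VACUUM does not lock out readers/writers the way
-- VACUUM FULL does; VERBOSE reports per-index progress.
VACUUM VERBOSE ANALYZE links;
```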