Hello,
we are experiencing some performance degradation on a database where
the main table is approaching 100 million records. Along with the
query slowness I notice these symptoms:
- size bloat of partial indexes
- very bad planning estimates
I'd appreciate any hint to get a better
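No version or schema details are given here, but the two symptoms can usually be attacked separately. A minimal sketch, assuming a hypothetical partial index idx_orders_pending on orders(id) WHERE status = 'pending' and a skewed status column (all names are made up for illustration):

```sql
-- How big is the suspect index on disk?
SELECT pg_size_pretty(pg_relation_size('idx_orders_pending'));

-- Rebuild the bloated partial index without blocking writes:
-- build a fresh copy, drop the old one, take over its name.
CREATE INDEX CONCURRENTLY idx_orders_pending_new
    ON orders (id) WHERE status = 'pending';
DROP INDEX idx_orders_pending;
ALTER INDEX idx_orders_pending_new RENAME TO idx_orders_pending;

-- For the bad estimates: collect more detailed statistics on the
-- skewed column, then re-analyze.
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
ANALYZE orders;
```

CREATE INDEX CONCURRENTLY takes longer than a plain build but avoids locking out writers, which matters on a table this size.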
Hi All,
(pg 8.3.7 on RHEL, kernel 2.6.18-92.el5)
I ran the query below (copied from
http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html ) on a
production DB we have and I am looking at some pretty nasty looking
numbers for tables in the pg_catalog schema. I have tried a reindex
and vacuum but n
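pg_attribute and its siblings commonly bloat under heavy temporary-table churn. A sketch of checking and compacting the usual suspects on 8.3; note that VACUUM FULL plus REINDEX on a catalog table takes an exclusive lock, so this belongs in a maintenance window:

```sql
-- Physical size of the catalog tables that bloat most often.
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relname IN ('pg_attribute', 'pg_class', 'pg_type', 'pg_depend')
ORDER BY pg_relation_size(oid) DESC;

-- Compact one of them (exclusive lock on 8.3-era VACUUM FULL).
VACUUM FULL pg_catalog.pg_attribute;
REINDEX TABLE pg_catalog.pg_attribute;
```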
All,
I have a reporter who wants to talk to a data warehousing user of
PostgreSQL (> 1TB preferred), on the record, about 9.0. Please e-mail
me if you are available and when good times to chat would be (and time
zone). Thanks!
--
-- Josh Berkus
On Mon, Sep 20, 2010 at 1:25 PM, mark wrote:
> Hi All,
>
> (pg 8.3.7 on RHEL 2.6.18-92.el5 )
>
> I ran the query below (copied from
> http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html ) on a
> production DB we have and I am looking at some pretty nasty looking
> numbers for tables in the pg_catalog schema.
I'll throw in my 2 cents worth:
1) Performance using RAID 1 for reads sucks. You would expect throughput to
double in this configuration, but it doesn't. That said, performance for
RAID 1 is not noticeably worse than Linux MD. My testing showed the 3Ware
controller to be about 20% faster than Linux MD.
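For anyone wanting to reproduce this kind of comparison, here is a crude sequential-read sketch using dd. This is not the methodology behind the numbers above, the file name and size are arbitrary, and without dropping the page cache it mostly measures cached reads; use fio or bonnie++ for serious numbers.

```shell
#!/bin/sh
# Crude sequential-read throughput check.
TESTFILE=/tmp/readtest.bin

# Write a 64 MiB test file.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null

# To measure the disk rather than the page cache, first run (as root):
#   echo 3 > /proc/sys/vm/drop_caches

# Read it back and print dd's throughput summary line.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

# Clean up.
rm -f "$TESTFILE"
```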
On Mon, Sep 20, 2010 at 2:54 PM, George Sexton wrote:
> I'll throw in my 2 cents worth:
>
> 1) Performance using RAID 1 for reads sucks. You would expect throughput to
> double in this configuration, but it doesn't. That said, performance for
> RAID 1 is not noticeably worse than Linux MD. My test
The autovacuum daemon currently uses the number of inserted and
updated tuples to determine if it should run VACUUM ANALYZE on a
table. Why doesn’t it consider deleted tuples as well?
For example, I have a table which gets initially loaded with several
million records. A batch process grabs the r
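A sketch of how to watch the counters involved and, on 8.4 or later (earlier releases used the pg_autovacuum catalog instead of storage parameters), make autoanalyze fire sooner for one table. The table name batch_queue is hypothetical:

```sql
-- Activity counters that autovacuum's decisions are based on.
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del,
       n_dead_tup, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'batch_queue';

-- Lower the per-table analyze trigger so statistics stay fresh
-- after large batch deletes (8.4+ syntax).
ALTER TABLE batch_queue
    SET (autovacuum_analyze_scale_factor = 0.02,
         autovacuum_analyze_threshold = 1000);
```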
Joe Miller wrote:
> I can set up a cron job to run the ANALYZE manually, but it seems
> like the autovacuum daemon should be smart enough to figure this
> out on its own. Deletes can have as big an impact on the stats as
> inserts and updates.
But until the deleted rows are vacuumed from the
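For completeness, the cron workaround mentioned above is a one-liner. Database and table names here (proddb, batch_queue) are hypothetical; run it as a role allowed to analyze the table:

```
# m  h  dom mon dow  command
15 *  *   *   *      psql -d proddb -c 'ANALYZE batch_queue;'
```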
Joe Miller writes:
> The autovacuum daemon currently uses the number of inserted and
> updated tuples to determine if it should run VACUUM ANALYZE on a
> table. Why doesn't it consider deleted tuples as well?
I think you misread the code.
Now there *is* a problem, pre-9.0, if your update pattern
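For reference, the documented trigger: autovacuum runs ANALYZE on a table once the number of tuples changed since the last ANALYZE, which per the stats collector counts inserts, updates, and deletes, exceeds

```
analyze threshold = autovacuum_analyze_threshold
                  + autovacuum_analyze_scale_factor * reltuples
```

so deletes do feed into the analyze decision; it is the estimates between those runs that can go stale.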
While everybody else was talking about a new software release or
something today, I was busy finally nailing down something elusive that
pops up on this list regularly. A few weeks ago we just had a thread
named "Performance on new 64bit server compared to my 32bit desktop"
discussing how memory