"=?iso-8859-1?Q?Vegard_B=F8nes?=" <[EMAIL PROTECTED]> writes:
> Running VACUUM VERBOSE pg_largeobject took quite some time. Here's the
> output:
> INFO: vacuuming "pg_catalog.pg_largeobject"
> INFO: index "pg_largeobject_loid_pn_index" now contains 11060658 row
> versions in 230587 pages
> DETAIL:
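On 8.1 a plain VACUUM cannot shrink an index that is already bloated,
and 230587 index pages for ~11 million rows suggests exactly that. A
minimal maintenance sketch, assuming superuser access and a quiet
window (REINDEX takes an exclusive lock); this is not something
confirmed in the thread, just the usual remedy:

-- Sketch: rebuild the bloated pg_largeobject index, then vacuum
-- the catalog so dead large-object rows keep getting reclaimed.
REINDEX TABLE pg_catalog.pg_largeobject;
VACUUM ANALYZE pg_catalog.pg_largeobject;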
> "=?iso-8859-1?Q?Vegard_B=F8nes?=" <[EMAIL PROTECTED]> writes:
>> I have a problem with large objects in postgresql 8.1: The performance
>> of loading large objects into a database goes way down after a few
>> days of operation.
>
>> I have a cron job kicking in twice a day, which generates and lo
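A common cause of this pattern, assuming the cut-off message matches
the usual story, is that each load adds and replaces rows in
pg_largeobject, and without frequent vacuuming the dead rows pile up
until every later load has to wade through them. A sketch of a
targeted step to schedule alongside the cron job (the contrib
vacuumlo tool can additionally delete orphaned large objects first):

-- Sketch: vacuum just the catalog that stores large-object data;
-- usually much cheaper than a database-wide VACUUM.
VACUUM ANALYZE pg_catalog.pg_largeobject;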
Thanks Gregory,
I was on IRC yesterday and a few people indicated the same thing...
Searching for the last reading is a very important function for our
database. I wrote the function below; it searches all child tables for
the max. It is not an optimization, because it doesn't omit tables by
looking at t
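Since the function body got cut off, here is a hypothetical sketch of
the shape such a function usually takes; readings, its ts column, and
latest_reading are invented names. Each child is probed with ORDER BY
... DESC LIMIT 1 so its own index can answer, instead of running max()
over the parent, which on 8.x scans every child in full:

-- Hypothetical sketch (all names invented): ask each child table
-- for its newest row via its index, then keep the greatest value.
CREATE OR REPLACE FUNCTION latest_reading() RETURNS timestamp AS $$
DECLARE
    child  text;
    ts_max timestamp;
    result timestamp;
BEGIN
    FOR child IN
        SELECT c.relname
        FROM pg_inherits i
        JOIN pg_class c ON c.oid = i.inhrelid
        WHERE i.inhparent = 'readings'::regclass
    LOOP
        EXECUTE 'SELECT ts FROM ' || quote_ident(child)
             || ' ORDER BY ts DESC LIMIT 1' INTO ts_max;
        IF result IS NULL OR ts_max > result THEN
            result := ts_max;
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;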
Hi Craig,
Thank you for your answer.
Here are my test table and index definitions:
-- document id to category id
create table doc_to_cat (
doc_id integer not null,
cat_id integer not null
) with (oids=false);
-- Total 1m rows. 500k unique document ids. 20k unique category ids.
-- Each doc_
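The index definitions themselves were cut off above; a plausible pair
for this lookup pattern (my assumption, not the poster's actual DDL)
would be:

-- Hypothetical indexes (the originals were truncated): one for
-- doc -> category lookups, one for category -> doc lookups.
create index doc_to_cat_doc_idx on doc_to_cat (doc_id);
create index doc_to_cat_cat_idx on doc_to_cat (cat_id);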
Kevin Kempter wrote:
Hi All;
I'm looking for tips/ideas for performance-tuning some specific queries.
These are generally large tables on a highly active OLTP system
(100,000 - 200,000+ queries per day).
First off, any thoughts on tuning inserts into large tables? I have a
large table with an insert like this:
On Wed, 26.11.2008 at 21:21:04 -0700, Kevin Kempter wrote the following:
> Next we have a select count(*) that is also one of the top offenders:
>
> select count(*) from public.tab3 where user_id=31
> and state='A'
> and amount>0;
>
> QUERY PLAN
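One standard way to cheapen a count like this, sketched here rather
than taken from the thread, is a partial index that bakes in the
constant predicates, so only the matching rows for a user are visited:

-- Sketch: partial index matching the WHERE clause; rows with
-- state <> 'A' or amount <= 0 never enter the index at all.
CREATE INDEX tab3_user_active_idx
    ON public.tab3 (user_id)
    WHERE state = 'A' AND amount > 0;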
First off, any thoughts on tuning inserts into large tables? I have a
large table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
QUERY PLAN
-------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0)
(1 row)
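That plan is just the trivial one-row Result node: each statement
inserts a single row, so per-statement and per-commit overhead
dominates. A sketch of the usual remedy, assuming the application can
batch rows (multi-row VALUES needs 8.2 or later; for large bulk loads,
COPY is faster still):

-- Sketch: batch many rows per transaction so each row doesn't pay
-- its own commit, and send several rows per statement (8.2+).
begin;
insert into public.bigtab1 (text_col1, text_col2, id) values
    ('a', 'b', 1),
    ('c', 'd', 2);
commit;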