We have a large database which recently grew dramatically after a change
to our insert program began allowing all entries through.
PWFPM_DEV=# select relname,relfilenode,reltuples from pg_class where relname
= 'forecastelement';
     relname     | relfilenode |  reltuples
-----------------+-------------+-------------
The index is
Indexes:
"forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
-----Original Message-----
From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
Sent: Monday, April 12, 2004 10:39 AM
To: Postgres Performance
Subject: [PERFORM] Deleting certain duplicates
We ha
Bill, if you had a lot of updates and deletions and wanted to optimize your
table, could you just issue the CLUSTER command?
Will CLUSTER rewrite the table without the obsolete rows that a vacuum
flags, or do you need to issue a VACUUM first?
Dan.
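For reference, a hedged sketch of the 7.4-era invocation: CLUSTER rewrites
the table in index order into a fresh file, dropping tuples that no
transaction can still see, so no prior VACUUM is needed; it does take an
exclusive lock and needs enough free disk for the copy.

-- PostgreSQL 7.4 syntax: CLUSTER indexname ON tablename
CLUSTER forecastelement_rwv_idx ON forecastelement;
-- Refresh planner statistics for the rewritten table.
ANALYZE forecastelement;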
-----Original Message-----
From: Bill Moran
link of pgsql_tmp to another partition to successfully
complete.
Dan.
-----Original Message-----
From: Bill Moran [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 4:14 PM
To: Shea,Dan [CIS]
Cc: Postgres Performance
Subject: Re: [PERFORM] [ SOLVED ] select count(*) very slow on an already
vacuumed table
To: Shea,Dan [CIS]
Cc: Postgres Performance
Subject: Re: [PERFORM] Deleting certain duplicates
Shea,Dan [CIS] wrote:
>The index is
>Indexes:
>"forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
>
>-----Original Message-----
>From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
This vacuum is running a marathon. Why will it not end and show me the
free space map INFO? We have deleted a lot of data and I would like to be
confident that the space from those deletions will be reused, rather than
the table growing new files.
PWFPM_DEV=# select now(); vacuum verbose forecastelement;
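One hedged note: in 7.4 the free space map INFO lines print only at the end
of a database-wide VACUUM, not a single-table one, and whether freed space
is reusable depends on the FSM being large enough. A quick check (these are
the 7.4-era parameter names; they live in postgresql.conf and need a
restart to change):

SHOW max_fsm_pages;      -- pages with free space the server can remember
SHOW max_fsm_relations;  -- relations the free space map can track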
No, but data is constantly being inserted by userid scores. It is postgres
running the vacuum.
Dan.
-----Original Message-----
From: Christopher Kings-Lynne [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 20, 2004 12:02 AM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
ed 1101.26 sec.
-----Original Message-----
From: Christopher Kings-Lynne [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 20, 2004 9:26 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
> No, but data is constantly being inserted by userid scores. It is
> postgres running the vacuum.
(15 to 30 every
3 to 20 minutes).
Is the vacuum causing this?
-----Original Message-----
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Friday, April 23, 2004 2:48 PM
To: Shea,Dan [CIS]; 'Christopher Kings-Lynne'
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
Manfred is indicating that the reason it is taking so long is the number
of dead tuples in my index and the vacuum_mem setting.
The last delete that I did before starting this vacuum removed 219,177,133
rows.
Dan.
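A rough worked example of Manfred's point, assuming vacuum holds about
6 bytes per dead-tuple pointer and, purely for illustration, a vacuum_mem
of 192 MB (the thread does not give the actual setting):

  219,177,133 dead rows * 6 bytes  = ~1.2 GB of tuple pointers to remember
  192 MB / 6 bytes                 = ~33.5 million pointers per batch
  219,177,133 / 33,554,432         = ~7 separate scans of the 39 GB index

Each time vacuum_mem fills up, the indexes must be scanned again, which is
where the marathon goes.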
>Dan,
>> Josh, how long should a vacuum take on an 87 GB table with a 39 GB index?
From: Manfred Koizar [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 24, 2004 1:57 PM
To: Shea,Dan [CIS]
Cc: 'Josh Berkus'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
On Sat, 24 Apr 2004 10:45:40 -0400, "Shea,Dan [CIS]" <[EMAIL PROTECTED]>
wrote:
>[...] 87 GB table with a 39 GB index?
From: Manfred Koizar [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 24, 2004 8:29 PM
To: Shea,Dan [CIS]
Cc: 'Josh Berkus'; [EMAIL PROTECTED]
Subject: Re: [PERFORM] Why will vacuum not end?
On Sat, 24 Apr 2004 15:58:08 -0400, "Shea,Dan [CIS]" <[EMAIL PROTECTED]>
wrote:
>There we
The pg_resetxlog was run as root. It caused ownership problems of
pg_control and xlog files.
Now we have no access to the data through psql. The data is still
there under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But
there is no reference to 36 of our tables in pg_class. Also the
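When catalog rows vanish but the files survive, the directory name under
base/ is the database OID (17347 here) and each file inside is named after
a table's relfilenode, so the catalog can be checked against an ls listing.
A hedged sketch:

-- List the relfilenodes the catalog still knows about; any file under
-- base/17347 whose name is missing from this output has lost its
-- pg_class row.
SELECT relname, relfilenode FROM pg_class ORDER BY relfilenode;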
:36 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] after using pg_resetxlog, db lost
"Shea,Dan [CIS]" <[EMAIL PROTECTED]> writes:
> The pg_resetxlog was run as root. It caused ownership problems of
> pg_control and xlog files.
> Now we have no access to the data through psql.
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 23, 2004 11:41 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] after using pg_resetxlog, db lost
"Shea,Dan [CIS]" <[EMAIL PROTECTED]> writes:
> Tom, I see from past emails that you reference using /dev/zero
Tom, thank you for your help.
I increased 000E to 81920 and the database is working now.
We are using RHAS 3.0 and it does have /dev/zero.
Dan.
-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 24, 2004 12:34 PM
To: Shea,Dan [CIS]
Cc: [EMAIL PROTECTED]
What is involved? Rather, what kind of help do you require?
Dan.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Josh Berkus
Sent: Tuesday, September 28, 2004 1:54 PM
To: [EMAIL PROTECTED]
Subject: [PERFORM] Interest in perf testing?
Folks,
I'm beginning
Our database has slowed right down. We are not getting any performance
from our biggest table, "forecastelement".
The table has 93,218,671 records in it and climbing.
The index is on 4 columns; originally it was on 3. I added another to see
if it would improve performance. It did not.
Should there be fewer columns in the index?
actual time=176.133..276.494
rows=10 loops=1)
Index Cond: (valid_time = '2004-01-23 00:00:00'::timestamp without
time zone)
Total runtime: 276.721 ms
(4 rows)
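One hedged reading of these plans: a btree on (region_id, wx_element,
valid_time) is only fully useful when the leading columns are constrained,
so a query filtering on valid_time alone may want its own index. The index
name below is made up for illustration:

-- Hypothetical single-column index for valid_time-only queries.
CREATE INDEX forecastelement_vt_idx ON forecastelement (valid_time);
ANALYZE forecastelement;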
-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 3:01 PM
To: Shea,Dan [CIS]; [EMAIL PROTECTED]
Subject: Re: [PERFORM] database performance and query performance
question
Dan,
> Should there be fewer columns in the index?
> How
Index Cond: ((valid_time >= '2004-01-12 00:00:00'::timestamp without time
zone) AND (valid_time <= '2003-01-12 00:00:00'::timestamp without time
zone))
Total runtime: 49.589 ms
(3 rows)
-----Original Message-----
From: Hannu Krosing [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 22, 2004 3:54 PM
To:
'2004-01-13 00:00:00'::timestamp without time
zone))
Total runtime: 472627.148 ms
(3 rows)
-----Original Message-----
From: Shea,Dan [CIS]
Sent: Thursday, January 22, 2004 4:10 PM
To: 'Hannu Krosing'; Shea,Dan [CIS]
Cc: '[EMAIL PROTECTED]'; [EMAIL PROTECTED]
Subject: RE:
I have had a cluster failure on a table. It most likely was due to space.
I do not have the error message anymore, but it was indicating that it
was most likely a space problem. The partition was filled to 99%. The
table is about 56 GB, and what I believe to be the new table it was
writing to filled the rest of the partition.
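CLUSTER writes a complete new copy of the table (and rebuilds its indexes)
before removing the old files, so the partition needs roughly the table's
size again in free space. A hedged 7.4-era size estimate from the catalog
(relpages is only as fresh as the last VACUUM or ANALYZE):

SELECT relname,
       relpages::bigint * 8192 / (1024 * 1024) AS approx_mb
  FROM pg_class
 WHERE relname = 'forecastelement';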
"Shea,Dan [CIS]" <[EMAIL PROTECTED]> writes:
>> The problem is that it did not clean itself up properly.
>Hm. It should have done so. What were the exact filenames and sizes of
>the not-deleted files?
Files 361716097 through 361716097.39 are 1073741824 bytes each.
361716097.40
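A hedged way to confirm those segments are orphans before removing anything
by hand: if no catalog row claims that relfilenode (checked after the failed
CLUSTER's backend is long gone, and with a backup taken first), the 1 GB
segments are leftovers from the failed rewrite.

-- Zero rows back means nothing in the catalog owns the files named
-- 361716097, 361716097.1, 361716097.2, ...
SELECT relname FROM pg_class WHERE relfilenode = 361716097;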