Hi Tomas,
Tomas wrote:
We've decided to remove unneeded 'old' data, which means removing about
99.999% of rows from tables A, C and D (about 2 GB of data). At the
beginning, table B (containing data aggregated from A, C and D) was emptied
(dropped and recreated) and filled with current data. Then, …
Hi Brandon,
Thanks for your suggestion. I'll think about that one. Part of the
problem is also figuring out what the remaining set of
attributes and attribute values is, which slows it down
considerably too. There are many, many combinations of attribute
values that can be clicked …
<[EMAIL PROTECTED]> wrote:
> I have a slow SQL query:
> SELECT * FROM mytable WHERE id IN (1,3,5,7, ...3k values here...);
> mytable is about 10k rows.
>
> If I don't use the "IN" clause, it costs 0.11 seconds; otherwise it
> costs 2.x seconds.
> I guess pg uses a linear search to deal with the IN clause; is there …
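A workaround sometimes suggested for very long literal IN lists (a hedged sketch, not from the thread; "mytable" and "id" are the poster's names, "wanted_ids" is invented): stage the values in a temporary table and join, so the planner can use a hash or index join instead of evaluating a 3000-way OR chain for every row.

    -- sketch: load the 3k ids into a temp table, then join against it
    CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
    INSERT INTO wanted_ids VALUES (1);
    INSERT INTO wanted_ids VALUES (3);
    -- ... load the remaining ids the same way, or via COPY
    ANALYZE wanted_ids;  -- give the planner row estimates for the temp table

    SELECT m.*
    FROM mytable m
    JOIN wanted_ids w ON w.id = m.id;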
Tomas Vondra wrote:
> Huh, I didn't suggest to dump/reload. I suggested CLUSTER. You need to
> apply it only to tables where you have lots of dead tuples, which IIRC
> are A, C and D.
Sorry, I should have read more carefully. Will clustering a table according
to one index solve problems with all the indexes on the table?
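To the question above: CLUSTER rewrites the whole table in the order of the chosen index and rebuilds all of the table's indexes in the process, so every index comes out fresh, not just the clustering one. A minimal sketch, assuming a primary-key index named a_pkey on table A (the index name is an assumption; the syntax shown is the pre-8.3 form that 8.0 uses):

    CLUSTER a_pkey ON a;  -- rewrite table A in a_pkey order, rebuilding all its indexes
    ANALYZE a;            -- CLUSTER does not refresh planner statistics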
> I guess you're right. I forgot to mention there are 12 composite indexes
> on the largest table B (from which nothing was deleted), having about
> 14,000,000 rows and 1 GB of data. I'll try to dump/reload the database ...

Aaargh, the problem is probably not caused by the largest table, as it
was dropped and filled …
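If only B's indexes are bloated, a lighter alternative to a full dump/reload (not mentioned in this snippet, so treat it as a suggestion) is to rebuild just the indexes in place:

    REINDEX TABLE b;  -- rebuilds all 12 indexes on B; takes an exclusive lock
    ANALYZE b;        -- refresh planner statistics afterwards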
Tomas Vondra wrote:
> Probably the indexes are bloated after the vacuum full. I think the
> best way to get rid of the "fat" is to recreate both tables and indexes
> anew. For this the best tool would be to CLUSTER the tables on some
> index, probably the primary key. This will be much faster than
> VACUUMing the tables …
Added to TODO:
* Experiment with multi-threaded backend for better resource utilization
  This would allow a single query to make use of multiple CPUs or
  multiple I/O channels simultaneously.
Tomas Vondra <[EMAIL PROTECTED]> writes:
> 1) drop, create and fill table B (aggregated data from A, C, D)
> 2) copy 'old' data from A, C and D to A_old, C_old and D_old
> 3) delete old data from A, C, D
> 4) dump data from A_old, C_old and D_old
> 5) truncate tables A, C, D
> 6) vacuum full analyze
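For readers following along, a minimal sketch of steps 2, 3 and 6 for one table might look like this (the schema isn't shown in the thread, so the "created" cutoff column is an assumption; step 4 would be an external pg_dump of the *_old tables):

    BEGIN;
    CREATE TABLE a_old AS
        SELECT * FROM a WHERE created < '2006-01-01';  -- step 2: copy 'old' rows aside
    DELETE FROM a WHERE created < '2006-01-01';        -- step 3: remove them from A
    COMMIT;
    VACUUM FULL ANALYZE a;                             -- step 6: compact and re-analyze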
Tomas Vondra wrote:
Hi,
> Then
> these data were deleted from A, C and D, and tables A_old, C_old and D_old
> were dumped, truncated, and all the tables were vacuumed (with the FULL
> ANALYZE options). So the procedure was this:
>
> 1) drop, create and fill table B (aggregated data from A, C, D)
> 2) copy …
Hello,
we have some performance problems with postgres 8.0.4, more precisely
with vacuuming a 'large' database with a lot of deleted rows.
We had a 3.2 GB database, consisting mainly of 4 large tables, two of
them (say tables A and B) having about 14,000,000 rows and 1 GB of
size each, and two …
On 4/9/06, Chethana, Rao (IE10) <[EMAIL PROTECTED]> wrote:
> Hello!
>
> Kindly go through the following:
>
> I wanted to know whether the command line arguments (function
> arguments) -- $1 $2 $3 -- can be used as in the following, like …
On Sun, 2006-04-09 at 12:47 +0200, Doron Baranes wrote:
> Hi,
>
> I am running postgres 7.4.6 on a PineApp with 512 MB RAM.
>
> I did a database vacuum analyze and rebuilt my indexes.
If you have previously done a lot of deletes or updates
without regular vacuums, you may have to do a
VACUUM …
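On a release as old as 7.4, a one-time recovery from heavy bloat is usually a VACUUM FULL plus a REINDEX, since VACUUM FULL compacts the heap but tends to leave the indexes bloated. A hedged sketch with an invented table name:

    VACUUM FULL VERBOSE big_table;  -- one-time compaction; takes an exclusive lock
    REINDEX TABLE big_table;        -- rebuild the indexes bloated by dead tuples
    ANALYZE big_table;              -- refresh planner statistics
    -- from then on, plain VACUUM ANALYZE on a regular schedule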
It'd be helpful if you posted the EXPLAIN output of the slow-running queries on the respective tables.
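For example (EXPLAIN ANALYZE runs the statement and reports the actual plan with timings; the query below is just a placeholder):

    EXPLAIN ANALYZE
    SELECT * FROM mytable WHERE id IN (1, 3, 5, 7);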
cool.
L.
On 4/9/06, Doron Baranes <[EMAIL PROTECTED]> wrote:
Hi,
I am new to postgres and I'm having performance issues.
I am running postgres 7.4.6 on a PineApp with 512 MB RAM.
I did a database …
Hi,
I am new to postgres and I'm having performance issues.
I am running postgres 7.4.6 on a PineApp with 512 MB RAM.
I did a database vacuum analyze and rebuilt my indexes.
When I perform queries on tables of 2M-10M rows it takes several minutes, and
I see in sar and top that th…
Hello!
Kindly go through the following:
I wanted to know whether the command line arguments (function
arguments) -- $1 $2 $3 -- can be used as in
the following, like:
CREATE TYPE TT AS (something, something, … etc. …);
CREATE OR REPLAC…
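The snippet is cut off, but as a hedged sketch of what such a function could look like: in PL/pgSQL, $1, $2, ... refer to the function's arguments by position, and a composite type can be filled field by field. The type fields and the function name below are invented; the $$ dollar quoting requires 8.0+ (on 7.4 the body would be a single-quoted string).

    CREATE TYPE tt AS (a integer, b text);

    CREATE OR REPLACE FUNCTION make_tt(integer, text) RETURNS tt AS $$
    DECLARE
        result tt;
    BEGIN
        result.a := $1;  -- $1 = first function argument
        result.b := $2;  -- $2 = second function argument
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    -- usage: SELECT make_tt(42, 'hello');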