Richard,
> (I think Tip 3 is already fixed in 7.3, or I misunderstand what Josh is
> saying)
Yeah? Certainly looks like it. Apparently I can't keep track.
I'd swear that this issue reared its ugly head again shortly before the 7.4
release.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
On Wednesday 21 January 2004 19:06, Josh Berkus wrote:
> Arnau,
>
> > As the number of rows grows, the time needed to execute this query
> > takes longer. What should I do to improve the performance of this query?
>
> Tip #1) add an index to the timestamp column
> Tip #2) make sure that you VACUUM and ANALYZE regularly
Arnau,
> As the number of rows grows, the time needed to execute this query takes
> longer. What should I do to improve the performance of this query?
Tip #1) add an index to the timestamp column
Tip #2) make sure that you VACUUM and ANALYZE regularly
Tip #3) You will get better performance i
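Tips #1 and #2 translate directly into SQL. A minimal sketch against Arnau's
STATISTICS table, assuming the timestamp column is called timestamp_in (the
real column name is cut off in the preview above):

-- Tip #1: index the timestamp column the query filters and sorts on
-- (timestamp_in is a placeholder; substitute the real column name).
CREATE INDEX statistics_ts_idx ON statistics (timestamp_in);

-- Tip #2: run this regularly, e.g. from cron, so dead rows are reclaimed
-- and the planner has fresh statistics.
VACUUM ANALYZE statistics;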
On Wed, Jan 21, 2004 at 09:18:18 -0800,
Ron St-Pierre <[EMAIL PROTECTED]> wrote:
>
> My question is in regards to steps 2 and 3 above. Is there some way that
> I can combine both steps into one to save some time?
SELECT SS.* FROM
(SELECT DISTINCT ON (nonU
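The query above is cut off by the archive. One way to write the whole thing
as a single statement, with some_table and non_unique_id as placeholder names
(payDate comes from the original post); this is a sketch of the general
DISTINCT ON shape, not necessarily the exact query that was posted:

SELECT final.*
FROM (SELECT uniq.*
      FROM (SELECT DISTINCT ON (non_unique_id) *
            FROM some_table
            ORDER BY non_unique_id, payDate DESC) AS uniq   -- newest row per key
      ORDER BY payDate DESC
      LIMIT 200) AS final                                   -- the 200 most recent of those
ORDER BY final.payDate ASC;                                 -- ascending, ready for processing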
Hi all,
I'm quite a newbie in SQL and I have a performance problem. I have the
following table (with some extra fields) and no indexes on it:
CREATE TABLE STATISTICS
(
STATISTIC_ID NUMERIC(10) NOT NULL DEFAULT
nextval('STATISTIC_ID_SEQ')
I need to get 200 sets of the most recent data from a table for further
processing, ordered by payDate. My
current solution is to use subselects to:
1 - get a list of unique data
2 - get the 200 most recent records (first 200 rows, sorted descending)
3 - sort them in ascending order
SELECT SSS.*
On Wed, 21 Jan 2004, Harald Fuchs wrote:
> In article <[EMAIL PROTECTED]>,
> Richard Huxton <[EMAIL PROTECTED]> writes:
>
> > On Tuesday 20 January 2004 16:42, Tom Lane wrote:
> >> Harald Fuchs <[EMAIL PROTECTED]> writes:
> >> > Why? If the underlying table has a primary key, finding correspondin
On Wed, 21 Jan 2004, Jeroen Baekelandt wrote:
> jms_messages again. It takes 80 seconds!?! While before, with 1000
> records, it took only a fraction of a second.
run: VACUUM FULL ANALYZE;
--
/Dennis Björklund
Hi,
For the same pgbench (TPC-B) database, I have noted that the database size
is twice as large in Postgres 7.3.3 & 7.4 as with Oracle 9i, and I don't
know why.
At first, I thought it was because one column, named filler and typed
char(88), char(84) or char(22) depending on the table, was not initialized
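For reference, one way to see which relations account for the space on the
PostgreSQL side (a sketch; relpages is only as fresh as the last VACUUM or
ANALYZE, and a page is normally 8 kB):

-- Largest tables, indexes and TOAST tables by on-disk pages.
SELECT relname, relkind, relpages, relpages * 8192 AS approx_bytes
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
ORDER BY relpages DESC
LIMIT 20;

Comparing the heap, index and TOAST figures usually shows whether the extra
space is in the tables themselves (per-row overhead, padding) or in indexes.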
Hi,
I must have missed something, because I get really slow performance from
PG 7.3.4-RH. This is my situation: I have a table (jms_messages), which
contains 500,000 records. select count(*) from jms_messages is
understandably quite slow. Then I delete all of the records and add 1000
new ones. I run select count(*) from jms_messages again. It takes 80
seconds!?! While before, with 1000 records, it took only a fraction of a
second.
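As Dennis points out above, the deleted rows are still on disk as dead
tuples, and count(*) has to scan past all of them. A minimal sketch of the
cleanup (table name taken from the post):

-- A mass DELETE leaves the old rows behind as dead tuples, so the
-- sequential scan behind count(*) still wades through ~500,000 of them.
-- VACUUM FULL rewrites the table and gives the space back;
-- ANALYZE refreshes the planner statistics for the now-small table.
VACUUM FULL ANALYZE jms_messages;

Plain VACUUM run regularly makes dead space reusable without the exclusive
lock that VACUUM FULL takes.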