Postgres is somewhat speed-challenged on aggregate functions.
The most-repeated work-around would be something like:

SELECT u.user_id,
       (SELECT activity_date
        FROM user_activity
        WHERE user_activity.user_id = u.user_id
          AND user_activity_type_id = 7
        ORDER BY activity_date DESC
        LIMIT 1)
FROM users u;
On Fri, Oct 28, 2005 at 03:40:40PM -0700, Roger Hand wrote:
> You're first joining against the entire user table, then filtering out
> the users you don't need.
That's just wrong, sorry -- the planner is perfectly able to push the WHERE
down before the join.
I'd guess the problem is the age()
A little bit more on my last post that I forgot to mention. The two
queries run at the same speed and have the same plan only if I have an
index on the user_activity.user_id column. Otherwise they run at
different speeds. The query you gave me actually runs slower without
the index. All this i
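The index in question would be something like this (the index name is
invented):

CREATE INDEX user_activity_user_id_idx
    ON user_activity (user_id);

A composite index on (user_id, activity_date) should serve the
ORDER BY activity_date DESC LIMIT 1 form even better, since the newest
row for a given user can then be found by a short backward index scan.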
These two queries execute at exactly the same speed. When I run
EXPLAIN on them both they return the *exact* same query plan as well.
I find this strange... but it is also kind of what I expected from
reading up on various things. I am under the impression that
PostgreSQL will break up your q
On October 28, 2005 2:54 PM
Collin Peters wrote:
> I have two tables, one is called 'users' the other is 'user_activity'.
...
> I am trying to write a simple query that returns the last time each
> user logged into the system. This is how the query looks at the
> moment:
>
> SELECT u.user_id, MAX
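From the rest of the thread, the truncated query presumably had roughly
this shape (the join and column names are assumed):

SELECT u.user_id, MAX(ua.activity_date)
FROM users u
JOIN user_activity ua ON ua.user_id = u.user_id
GROUP BY u.user_id;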
Rodrigo,
You could use LISTEN + NOTIFY with triggers: in an AFTER INSERT
statement-level trigger you notify a listener, and the client can then
query the table immediately.
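A minimal sketch, assuming Rodrigo's Log(id integer, msg text) table and
an invented channel name log_inserted (NOTIFY carries no payload in this
era, so the listener simply re-queries when woken):

CREATE OR REPLACE FUNCTION notify_log_insert() RETURNS trigger AS $$
BEGIN
    -- wake up every session that has executed LISTEN log_inserted
    NOTIFY log_inserted;
    RETURN NULL;  -- return value is ignored for statement-level AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_after_insert
    AFTER INSERT ON log
    FOR EACH STATEMENT
    EXECUTE PROCEDURE notify_log_insert();

-- in log_tail's session:
LISTEN log_inserted;
-- on each notification, fetch everything past the last id already printed:
-- SELECT id, msg FROM log WHERE id > <last id seen> ORDER BY id;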
Best Regards,
Otto
----- Original Message -----
From: Rodrigo Madera
To: pgsql-performance@postgresql.org
Sent: Fr
I have two tables, one is called 'users' the other is 'user_activity'.
The 'users' table simply contains the users in the system; there are
about 30,000 rows. The 'user_activity' table stores the activities
the user has taken. This table has about 430,000 rows and also
(notably) has a column which
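Judging by the identifiers used elsewhere in the thread, the schema is
presumably something like this (the column types, and the meaning of
activity type 7, are assumptions):

CREATE TABLE users (
    user_id integer PRIMARY KEY
    -- ... about 30,000 rows
);

CREATE TABLE user_activity (
    user_id integer REFERENCES users (user_id),
    user_activity_type_id integer,  -- 7 appears to mean "login" above
    activity_date timestamp
    -- ... about 430,000 rows
);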
I have a table that holds entries as in a fictitious table Log(id integer, msg text).
Let's say then that I have a program, log_tail, whose sole purpose is to print newly added data elements.
What is the best solution in terms of performance?
Thank you for your time,
Rodrigo
We've also experienced problems with VACUUM running for a long time.
A VACUUM on our pg_largeobject table, for example, can take over 24
hours to complete (pg_largeobject in our database has over 45 million
rows). With our other tables, we've been able to partition them
(using inheritance) to keep
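A minimal sketch of that inheritance trick (the table, column, and date
range are invented); each child can then be vacuumed, or simply dropped,
on its own:

CREATE TABLE activity_log (
    logged_at timestamp,
    msg text
);

-- one child per month; new rows are inserted into the current child
CREATE TABLE activity_log_2005_10 (
    CHECK (logged_at >= '2005-10-01' AND logged_at < '2005-11-01')
) INHERITS (activity_log);

-- maintenance now works one small table at a time
VACUUM ANALYZE activity_log_2005_10;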
Hi all,
I wonder what is the main driving factor for vacuum's duration: the size
of the table, or the number of dead tuples it has to clean ?
We have a few big tables which are also heavily updated, and I couldn't
figure out a way to properly vacuum them. Vacuuming any of those took
very long amo
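Both factors matter: a plain VACUUM in this era scans every page of the
table, so table size sets a floor on the run time, while the number of
dead tuples drives the extra work of cleaning the indexes. VACUUM
VERBOSE reports both (the table name here is invented):

VACUUM VERBOSE my_big_table;
-- the output lists pages scanned and dead row versions removed,
-- per table and per index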
Reasons not to buy from Sun or Compaq - why get Opteron 252 when a 240
will do just fine for a fraction of the cost, which of course they
don't stock, white box all the way baby ;). My box from Sun or Compaq
or IBM is 2x the whitebox cost because you can't buy apples to apples.
We have a bitchin'