Colin McGuigan <[EMAIL PROTECTED]> writes:
> I know I can do it by adjusting cost parameters, but I was really
> curious as to why adding a "LIMIT 5000" onto a SELECT from a table with
> only 530 rows in it would affect matters at all.
The LIMIT prevents the sub-select from being flattened into the outer query.
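A minimal sketch of the effect being described (the `staff` table and `userid` column come from the thread; the exact surrounding query is assumed): a plain sub-select is pulled up into the outer query, while one carrying a LIMIT must be planned on its own and so acts as an optimization fence.

```sql
-- Without LIMIT: the planner flattens the sub-select and plans
-- the whole thing as one query.
SELECT *
FROM (SELECT * FROM staff) s
WHERE s.userid = 123456;

-- With LIMIT: the sub-select cannot be flattened, so it is planned
-- separately and scanned as an opaque unit.
SELECT *
FROM (SELECT * FROM staff LIMIT 5000) s
WHERE s.userid = 123456;
```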
Tom Lane wrote:
The right way to do it is to adjust the planner cost parameters.
The standard values of those are set on the assumption of
tables-much-bigger-than-memory, a situation in which the planner's
preferred plan probably would be the best. What you are testing here
is most likely a situation …
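The cost parameters Tom mentions can be tried per-session before touching postgresql.conf. The parameter names are real PostgreSQL settings; the values below are purely illustrative for a mostly-cached dataset, not recommendations:

```sql
-- Random page reads are nearly as cheap as sequential ones
-- when the working set is cached:
SET random_page_cost = 1.1;

-- Tell the planner roughly how much memory the OS has available
-- for caching:
SET effective_cache_size = '768MB';

-- Then re-run the problem query under EXPLAIN ANALYZE and compare
-- the chosen plan against the previous one.
```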
"henk de wit" <[EMAIL PROTECTED]> writes:
> Naively I would say that a planner
> would have to be smart enough to see this by itself?
We got rid of direct tests for redundant WHERE clauses a long time ago
(in 7.4, according to some quick tests I just made). They took a lot
of cycles and almost never …
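A hypothetical example of the kind of redundant clause such a test would have detected (table name borrowed from the thread; the column is assumed):

```sql
-- The second condition is implied by the first; since 7.4 the
-- planner no longer spends cycles trying to prove such implications,
-- so both conditions are simply evaluated as filters.
SELECT *
FROM banners_links
WHERE id > 100
  AND id > 50;
```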
In investigating a slow query, I distilled the code below from a larger
query:

SELECT
    *
FROM
    /* SUBQUERY banners */ (
        SELECT
            *
        FROM
            /* SUBQUERY banners_links */ (
                SELECT …

* Jeroen Kleijer <[EMAIL PROTECTED]> [070421 23:10]:
>
> Hi all,
>
> I'm a bit new to PostgreSQL and database design in general so forgive me
> for asking stupid questions. ;-)
>
> I've setup a PostgreSQL database on a Linux machine (2 processor, 1GB
> mem) and while the database itself resides …
On Saturday 21 April 2007 22:17:42 Jeroen Kleijer wrote:
> I've tried several things but doing a query like:
> select distinct volume from project_access_times
I'm new too but an "order by volume" could help!
In any case, maybe a different table design with a separate table for the
"distinct volumes" could help.
Hi all,
I'm a bit new to PostgreSQL and database design in general so forgive me
for asking stupid questions. ;-)
I've setup a PostgreSQL database on a Linux machine (2 processor, 1GB
mem) and while the database itself resides on a NetApp filer, via NFS,
this doesn't seem to impact the performance …
Colin McGuigan <[EMAIL PROTECTED]> writes:
>          ->  Subquery Scan s  (cost=0.00..21.93 rows=1 width=8)
>                Filter: ((userid = 123456) AND (locationid IS NULL))
>                ->  Limit  (cost=0.00..15.30 rows=530 width=102)
>                      ->  Seq Scan on staff  (cost=0.00..15.30 rows=530 width=102) …
On Sat, 21 Apr 2007, cluster wrote:
> I have investigated a bit now and found the following:
>
> When I perform the update the *first* time, the triggers are actually
> not evaluated. But from the second update they are.
Are these in one transaction? If so, then right now after the first
update, …
I have two tables, staff (530 rows) and location (2.5 million rows). I
do a query that joins the two together, as so:
SELECT s.ProprietorId, l.LocationId, s.RoleId
FROM Location l
INNER JOIN (
SELECT *
FROM Staff
) s ON l.ProprietorId = s.ProprietorId
WHERE s.UserId = …
Hello everyone,
This is my first post here; I am in need of some help...
Well, I am running PostgreSQL 8.2.4, on a single node, on this machine:
Dell PowerEdge 1950
Intel Xeon 2.33 (2 CPUs)
4GB RAM
Sata HD 160GB
Debian Distribution
Some relevant parameters:
In Linux:
-> SHMMAX: 32MB
In postgresql.conf: …
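For context, a 32MB SHMMAX caps shared_buffers far below what a 4GB machine can use, so raising the kernel limit is usually the first step. A config sketch (the 1GB value is illustrative, not a recommendation; requires root):

```
# Raise the shared memory limit so shared_buffers can be sized sensibly:
sysctl -w kernel.shmmax=1073741824

# Persist the setting across reboots:
echo "kernel.shmmax = 1073741824" >> /etc/sysctl.conf
```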
I have investigated a bit now and found the following:
When I perform the update the *first* time, the triggers are actually
not evaluated. But from the second update they are.
Also notice that the number of rows changes. Shouldn't that number of
rows always be 2, as question_id is the primary key?
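One way to observe exactly when a trigger fires is to have it emit a NOTICE per row. This is a hypothetical trigger function and table name (only `question_id` appears in the thread), using the `EXECUTE PROCEDURE` syntax of the 8.x era:

```sql
CREATE OR REPLACE FUNCTION log_update() RETURNS trigger AS $$
BEGIN
    -- Emitted once per row the trigger actually fires for:
    RAISE NOTICE 'trigger fired for question_id %', NEW.question_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER questions_log
AFTER UPDATE ON questions
FOR EACH ROW EXECUTE PROCEDURE log_update();
```

Running the same UPDATE twice and counting the NOTICE lines would confirm whether the triggers really are skipped on the first pass.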