On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
> After a week testing I think I can answer the question above: does it work
> like it's supposed to under PostgreSQL?
>
> YES
>
> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,
> http://www.newegg.com/Product/Product.aspx?Item=N82
Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
Postgres settings:
8.4.4
--with-blocksize=4
I saw about 10% increase in performance compared to 8KB blocksizes.
That's very interesting -- we need more testing in that department...
Definitely - that 10% num
Thanks.
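For reference, the compiled-in block size can be checked from SQL; changing it requires rebuilding the server from source (the `--with-blocksize=4` configure flag mentioned above):

```sql
-- block_size is fixed when the server is built; a running instance
-- reports the value it was compiled with:
SHOW block_size;
-- 8192 on a default build; 4096 for a --with-blocksize=4 build
```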
So am I right in assuming that the aggregate sub-query ( against work_active )
results will not assist with constraint exclusion in the sub-query against
work_unit (if we introduce range partitions on this table)?
Mr
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Mark Rostron writes:
> So am I right in assuming that the aggregate sub-query ( against work_active
> ) results will not assist with constraint exclusion in the sub-query against
> work_unit (if we introduce range partitions on this table)?
Dunno. You didn't actually show what you were hoping
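For context, constraint exclusion in 8.3/8.4 can only prune partitions when the predicate compares the partition key against values known at plan time; a value that only materializes when an aggregate sub-query executes does not qualify. A minimal sketch (table and column names here are assumed, not from the original post):

```sql
-- Hypothetical range partitions on work_unit:
CREATE TABLE work_unit_2010_07 (
    CHECK (unit_date >= DATE '2010-07-01' AND unit_date < DATE '2010-08-01')
) INHERITS (work_unit);

SET constraint_exclusion = on;

-- A literal predicate lets the planner skip non-matching partitions:
SELECT * FROM work_unit WHERE unit_date = DATE '2010-07-15';

-- But a value computed at run time by a sub-query is opaque to the
-- planner, so every partition is still scanned:
SELECT * FROM work_unit
WHERE unit_date = (SELECT max(unit_date) FROM work_active);
```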
Hi,
I'm running PostgreSQL 8.3 and I have a query with a couple of NOT IN
subqueries:
DELETE FROM foo WHERE type = 'o' AND b NOT IN (SELECT cqc.b FROM bar
cqc) AND b NOT IN (SELECT car.b FROM foo car WHERE car.type != 'o');
The plan produced for this is:
Maciek Sakrejda wrote:
> DELETE FROM foo WHERE type = 'o' AND b NOT IN (SELECT cqc.b FROM
> bar cqc) AND b NOT IN (SELECT car.b FROM foo car WHERE car.type !=
> 'o');
Can "b" be null in any of these tables? If not, then you can
rewrite your query to use NOT EXISTS and have the same semantics.
> Can "b" be null in any of these tables? If not, then you can
> rewrite your query to use NOT EXISTS and have the same semantics.
> That will often be much faster.
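Applied to the query above, the NOT EXISTS rewrite would look something like this (a sketch; it is only equivalent because, as confirmed downthread, b is never NULL):

```sql
DELETE FROM foo
WHERE type = 'o'
  AND NOT EXISTS (SELECT 1 FROM bar cqc WHERE cqc.b = foo.b)
  AND NOT EXISTS (SELECT 1 FROM foo car
                  WHERE car.b = foo.b AND car.type != 'o');
```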
Thanks, Kevin.
No NULLs. It looks like it's a good deal slower than the LOJ version,
but a good deal faster than the original. Since
Maciek Sakrejda wrote:
> No NULLs. It looks like it's a good deal slower than the LOJ
> version, but a good deal faster than the original.
On 8.4 and later the NOT EXISTS I suggested is a bit faster than
your fast version, since Tom did some very nice work in this area,
implementing semi join
Hi,
On Mon, Aug 02, 2010 at 12:12:51PM -0700, Maciek Sakrejda wrote:
> I'm running PostgreSQL 8.3 and I have a query with a couple of NOT IN
> subqueries:
With 8.3 you will have to use manual antijoins (i.e. LEFT JOIN
... WHERE ... IS NULL). If you use 8.4, NOT EXISTS() will do that
automatically in many cases.
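On 8.3, the hand-written anti-join version of the same DELETE might look like this (a sketch; the LEFT JOIN with an IS NULL filter keeps only rows that have no match on either side):

```sql
DELETE FROM foo
WHERE type = 'o'
  AND b IN (
    SELECT f.b
    FROM foo f
    LEFT JOIN bar cqc ON cqc.b = f.b
    LEFT JOIN foo car ON car.b = f.b AND car.type != 'o'
    WHERE f.type = 'o'
      AND cqc.b IS NULL    -- no match in bar
      AND car.b IS NULL);  -- no non-'o' match in foo
```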
All fields involved are declared NOT NULL, but thanks for the heads up.
---
Maciek Sakrejda | System Architect | Truviso
1065 E. Hillsdale Blvd., Suite 215
Foster City, CA 94404
(650) 242-3500 Main
www.truviso.com
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
With Oracle, I've found an anti-union (MINUS in Oracle, EXCEPT in PGSQL) to
be often a bit better than an anti-join, which is in turn faster than NOT
IN. Depends of course on row distribution and index layouts, and a bunch of
other details.
Depending on what you're returning, it can pay to make su
On Mon, Aug 02, 2010 at 01:06:00PM -0700, Maciek Sakrejda wrote:
> All fields involved are declared NOT NULL, but thanks for the heads up.
AFAIR the planner doesn't use that atm.
Andres
>> All fields involved are declared NOT NULL, but thanks for the heads up.
>AFAIR the planner doesn't use that atm.
I was referring to not having to care about the strange NULL semantics
(as per your original comment), since I have no NULLs. Given that, I
think the NOT EXISTS could be a good solution.
Dave Crooke wrote:
> With Oracle, I've found an anti-union (MINUS in Oracle, EXCEPT in
> PGSQL) to be often a bit better than an anti-join, which is in
> turn faster than NOT IN. Depends of course on row distribution and
> index layouts, and a bunch of other details.
I found that assertion int
I already had effective_cache_size set to 500MB.
I experimented with lowering random_page_cost to 3 then 2. It made no
difference in the choice of plan that I could see. In the explain analyze
output the estimated costs of nested loop were in fact lowered, but so were
the costs of the hash join.
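Experiments like these are easiest per-session, so nothing has to be changed globally (a sketch; the final SELECT stands in for the actual query under investigation):

```sql
-- Planner knobs can be changed for the current session only:
SET effective_cache_size = '500MB';
SET random_page_cost = 2.0;

-- Re-run the query under the new settings to compare estimated
-- and actual costs:
EXPLAIN ANALYZE SELECT ...;
```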
"Kevin Grittner" wrote:
> DELETE FROM foo
> where foo.b in (
> select b from foo WHERE type = 'o'
> except SELECT b FROM bar
> except SELECT b FROM foo where type <> 'o');
Oops. Maybe before I get excited I should try it with a query which
is actually logically equivalent. :-(
Kevin Grittner wrote:
> Maybe before I get excited I should try it with a query which is
> actually logically equivalent.
Fixed version:
DELETE FROM foo
where type = 'o' and foo.b in (
select b from foo WHERE type = 'o'
except SELECT b FROM bar
except SELECT b FROM foo where type <> 'o');
> Definitely - that 10% number was on the old-first hardware (the core 2
> E6600). After reading my post and the 185MBps with 18500 reads/s number
> I was a bit suspicious whether I did the tests on the new hardware with
> 4K, because 185MBps / 18500 reads/s is ~10KB / read, so I thought that's
> a
>> Maybe before I get excited I should try it with a query which is
>> actually logically equivalent.
Yes, the joys of manual rewrites...
> Fixed version:
>
> DELETE FROM foo
> where type = 'o' and foo.b in (
> select b from foo WHERE type = 'o'
> except SELECT b FROM bar
> except SELECT b FROM foo where type <> 'o');
Josh Berkus wrote:
That doesn't make much sense unless there's some special advantage to a
4K blocksize with the hardware itself.
Given that pgbench is always doing tiny updates to blocks, I wouldn't be
surprised if switching to smaller blocks helps it in a lot of situations
if one went looking.
Peter Hussey writes:
> My questions are still
> 1) Does the planner have any component of cost calculations based on the
> size of work_mem,
Sure.
> and if so why do those calculations seem to have so
> little effect here?
Since you haven't provided sufficient information to let someone else
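work_mem does feed into the cost estimates: it decides, for example, whether the planner expects a hash table or sort to fit in memory. A quick way to see the effect (a sketch; the table and column names are hypothetical):

```sql
-- Lowering work_mem raises the estimated cost of hash joins and
-- in-memory sorts, and can flip the chosen plan:
SET work_mem = '1MB';
EXPLAIN SELECT * FROM work_unit ORDER BY b;

SET work_mem = '256MB';
EXPLAIN SELECT * FROM work_unit ORDER BY b;  -- compare the two plans
```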