On 17/12/16 23:04, Tom Lane wrote:
> so that you don't need to use
> SELECT DISTINCT? The sort/unique steps needed to do DISTINCT are
> eating a large part of the runtime,
Does a hash join result in a set of buckets that are then read out
in order? It might, unless the sort method takes advantage
Gabliver Faluker writes:
> It runs for ~5 seconds.
I'm a little skeptical that a 12-way join producing 340K rows
and executing in 5 seconds should be considered "bad performance".
It looks like it'd help some if you increased work_mem enough to let
both sorts happen in-memory rather than externally.
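As a sketch of that suggestion (session-level only; the 64MB value is illustrative, not from the thread):

```sql
-- Raise work_mem for this session so each sort/hash node can use more
-- memory before spilling to disk (64MB is an illustrative value).
SET work_mem = '64MB';

-- Re-run the query under EXPLAIN ANALYZE and check that the sort nodes
-- now report "Sort Method: quicksort" instead of "external merge".
EXPLAIN ANALYZE SELECT DISTINCT ... ;
```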
Hey All,
I am not a PG expert. I like PG, but I am puzzled as to what I should do.
I have a 4-core, 5 GB VM running a 500 MB database (it should fit into RAM easily), and
I face slow queries.
Here is a view that I have:
SELECT o.id,
cc.name AS "from",
o.phone,
c.name AS "to",
parcel_info.v
Hi,
I have a tree-structure managed with ltree and gist index.
Simplified schema is
CREATE TABLE crt (
idcrt INT NOT NULL,
...
pathname LTREE
)
idcrt is the primary key, and there is a GiST index ix_crt_pathname on pathname.
CREATE TABLE doc (
iddoc INT NOT NULL, ...)
iddoc
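For what it's worth, a GiST index on an ltree column is normally exercised through the ancestor/descendant operators; a minimal sketch against the schema above (the path value is hypothetical):

```sql
-- Find all crt rows in the subtree under 'root.section'
-- (<@ means "is a descendant of"; served by the GiST index on pathname).
SELECT idcrt, pathname
FROM crt
WHERE pathname <@ 'root.section'::ltree;
```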
On Fri, Oct 09, 2009 at 08:31:54PM +0800, Xia Qingran wrote:
> On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani wrote:
> > Hi Xia,
> >
> > Try this patch:
> >
> > http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
> >
> > It's a hack, but it works for us. I think you're probably spending
> > most of your query time planning, and this patch helps speed things up
> > 10x over here.
On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani wrote:
> Hi Xia,
>
> Try this patch:
>
> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
>
> It's a hack, but it works for us. I think you're probably spending
> most of your query time planning, and this patch helps speed things up
> 10x over here.
On Mon, Oct 5, 2009 at 1:24 PM, Omar Kilani wrote:
>
>
> I'm not really sure what the alternatives are -- it never really makes
> sense to get the selectivity for thousands of items in the IN clause.
> I've never seen a different plan for the same query against a DB with
> that patch vs without -
Robert,
On Mon, Oct 5, 2009 at 11:01 PM, Robert Haas wrote:
> On Sun, Oct 4, 2009 at 9:58 PM, Omar Kilani wrote:
>> Hi Xia,
>>
>> Try this patch:
>>
>> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
>>
>> It's a hack, but it works for us. I think you're probably spending
>> most of your query time planning, and this patch helps speed things up
>> 10x over here.
On Sun, Oct 4, 2009 at 9:58 PM, Omar Kilani wrote:
> Hi Xia,
>
> Try this patch:
>
> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
>
> It's a hack, but it works for us. I think you're probably spending
> most of your query time planning, and this patch helps speed things up
> 10x over here.
Hi Xia,
Try this patch:
http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
It's a hack, but it works for us. I think you're probably spending
most of your query time planning, and this patch helps speed things up
10x over here.
Regards,
Omar
On Sun, Sep 27, 2009 at 5:13 PM, Xia Qingran wrote:
Xia Qingran wrote:
On Sun, Sep 27, 2009 at 1:03 AM, Tom Lane wrote:
Xia Qingran writes:
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498, ... ,1,0);
The above SELECT always spends 1200ms.
Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.
On Sat, Sep 26, 2009 at 10:59 PM, Craig James
wrote:
>
> If your user_id is always in a narrow range like this, or even in any range
> that is a small fraction of the total, then add a range condition, like
> this:
>
> select * from event where user_id <= 500 and user_id >= 0 and user_id in
> (...
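Written out in full (with the long IN list abbreviated), Craig's suggestion looks like:

```sql
-- The range condition lets the planner use an index range scan on user_id;
-- the IN list then filters the rows within that range.
SELECT *
FROM event
WHERE user_id >= 0
  AND user_id <= 500
  AND user_id IN (500, 499, 498 /* , ... the rest of the list */ );
```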
On Sun, Sep 27, 2009 at 1:03 AM, Tom Lane wrote:
> Xia Qingran writes:
>> I have a big performance problem in my SQL select query:
>> select * from event where user_id in
>> (500,499,498, ... ,1,0);
>> The above SELECT always spends 1200ms.
>
> Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.
On 26-Sep-2009, at 10:16 PM, Claus Guttesen wrote:
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474
Xia Qingran wrote:
Hi,
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,4
if you reuse that set a lot, how about storing it in a table , and doing the
join on db side ? if it is large, it sometimes makes sense to create temp
table just for single query (I use that sort of stuff for comparing with few
M records).
But temp tables in that case have to be short lived, as the
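A minimal sketch of that temp-table approach (table and column names illustrative):

```sql
-- Load the ids once, then join; after ANALYZE the planner has real
-- statistics instead of estimating selectivity for thousands of IN items.
CREATE TEMP TABLE wanted_users (user_id int PRIMARY KEY);
INSERT INTO wanted_users SELECT generate_series(0, 500);
ANALYZE wanted_users;

SELECT e.*
FROM event e
JOIN wanted_users w USING (user_id);
```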
Xia Qingran writes:
> I have a big performance problem in my SQL select query:
> select * from event where user_id in
> (500,499,498, ... ,1,0);
> The above SELECT always spends 1200ms.
Your EXPLAIN ANALYZE shows that the actual runtime is only about 240ms.
So either the planning time is about 10
Hi,
I have a big performance problem in my SQL select query:
select * from event where user_id in
(500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,462,46
> I have a big performance problem in my SQL select query:
>
>
> select * from event where user_id in
> (500,499,498,497,496,495,494,493,492,491,490,489,488,487,486,485,484,483,482,481,480,479,478,477,476,475,474,473,472,471,470,469,468,467,466,465,464,463,4
On Mon, Nov 17, 2008 at 6:14 PM, Dimi Paun <[EMAIL PROTECTED]> wrote:
>
> On Mon, 2008-11-17 at 10:40 -0700, Scott Marlowe wrote:
>> I'm guessing a fair bit of that time is pgadminIII prettifying the
>> output for you, etc. I.e. it's not all transfer time. Hard to say
>> without hooking some kind of profiler in pgadminIII.
On Mon, 2008-11-17 at 10:40 -0700, Scott Marlowe wrote:
> I'm guessing a fair bit of that time is pgadminIII prettifying the
> output for you, etc. I.e. it's not all transfer time. Hard to say
> without hooking some kind of profiler in pgadminIII. Is psql running
> local and pgadminIII remotely
On Nov 17, 2008, at 12:40 PM, Scott Marlowe wrote:
On Mon, Nov 17, 2008 at 10:31 AM, Dimi Paun <[EMAIL PROTECTED]> wrote:
On Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:
Ahhh. Keep in mind that if you just run the query, pgadminIII will
tell you how long it took to run AND return all the data across the
On Mon, Nov 17, 2008 at 10:31 AM, Dimi Paun <[EMAIL PROTECTED]> wrote:
>
> On Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:
>> Ahhh. Keep in mind that if you just run the query, pgadminIII will
>> tell you how long it took to run AND return all the data across the
>> network, so it will definitely take longer then.
On Mon, 2008-11-17 at 10:16 -0700, Scott Marlowe wrote:
> Ahhh. Keep in mind that if you just run the query, pgadminIII will
> tell you how long it took to run AND return all the data across the
> network, so it will definitely take longer then. But most of that's
> network io wait so it's not a
On Mon, Nov 17, 2008 at 10:07 AM, Dimi Paun <[EMAIL PROTECTED]> wrote:
>
> On Mon, 2008-11-17 at 09:53 -0700, Scott Marlowe wrote:
>>
>> Are you saying it's excessive you need the compound query? Cause
>> that's running in 91microseconds as pointed out by Alan.
>
> Of course, my bad. I read that as 91ms.
On Mon, 2008-11-17 at 09:53 -0700, Scott Marlowe wrote:
>
> Are you saying it's excessive you need the compound query? Cause
> that's running in 91microseconds as pointed out by Alan.
Of course, my bad. I read that as 91ms.
Confusion came from the fact that pgadminIII reports the query
taking
On Mon, Nov 17, 2008 at 8:53 AM, Dimi Paun <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> I have a simple table that keeps track of a user's access history.
> It has a few fields, but the important ones are:
> - ownerId: the user's ID, an int8
> - accessTS: the timestamp of the record
>
> The table right now is small, only 1942 records.
On Monday 17 November 2008, Dimi Paun <[EMAIL PROTECTED]> wrote:
>> It takes 0.091s (!):
> perpedes_db=# EXPLAIN ANALYZE SELECT * FROM triphistory WHERE ownerId =
> 10015 ORDER BY accessTS DESC LIMIT 5; QUERY PLAN
> -
>
Hi folks,
I have a simple table that keeps track of a user's access history.
It has a few fields, but the important ones are:
- ownerId: the user's ID, an int8
- accessTS: the timestamp of the record
The table right now is small, only 1942 records.
The user I test with (10015) has only 89 entries.
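For a query shaped like the one above (WHERE ownerId = ? ORDER BY accessTS DESC LIMIT 5), a composite index usually lets the plan read only the top five rows; a sketch (DESC index columns need 8.3+, otherwise a plain (ownerId, accessTS) index scanned backwards serves the same purpose):

```sql
-- One index serving both the equality filter and the descending sort:
CREATE INDEX triphistory_owner_ts
    ON triphistory (ownerId, accessTS DESC);

EXPLAIN ANALYZE
SELECT * FROM triphistory
WHERE ownerId = 10015
ORDER BY accessTS DESC
LIMIT 5;
```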
Jignesh,
> Don't get me wrong. As Luke mentioned it took a while to get the
> potential of PostgreSQL on Solaris and people like me start doing other
> complex workarounds in Solaris like "forcedirectio", etc. (Yeah I did a
> test, if you force fsync as wal_sync_method while on Solaris, then
>
Hi Bruce,
I saw even on this alias also that people assumed that the default
wal_sync_method was fsync on Solaris.
I would select fsync or fdatasync as the default on Solaris. (I prefer
fsync as it is already highlighted as the default in postgresql.)
Another thing to improve the defaults on Solaris
On 4/12/06, Josh Berkus wrote:
> People,
>
> > Lately I find people are not so receptive to VxFS, and Sun is promoting
> > ZFS, and we don't have a reasonable near term option for Raw IO in
> > Postgres, so we need to work to find a reasonable path for Solaris users
> > IMO. The long delays in ZFS production haven't helped us there
Bruce Momjian wrote On 04/13/06 01:39 AM,:
>
> Yes, if someone wants to give us a clear answer on which wal_sync method
> is best on all versions of Solaris, we can easily make that change.
>
We're doing tests to see how various parameters in postgresql.conf
affect performance on Solaris and wi
Jignesh K. Shah wrote:
>
> Bruce,
>
> Hard to answer that... People like me who know and love PostgreSQL and
> Solaris finds this as an opportunity to make their favorite database
> work best on their favorite operating system.
>
> Many times PostgreSQL has many things based on assumption that it will
> run on Linux and it is left to Solaris to emulate that behavior.
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> Many times PostgreSQL has many things based on assumption that it will
> run on Linux and it is left to Solaris to emulate that behavior.
Au contraire --- PG tries its best to be OS-agnostic. I've personally
resisted people trying to optimize it b
Bruce,
Hard to answer that... People like me who know and love PostgreSQL and
Solaris finds this as an opportunity to make their favorite database
work best on their favorite operating system.
Many times PostgreSQL has many things based on assumption that it will
run on Linux and it is left to Solaris to emulate that behavior.
People,
> Lately I find people are not so receptive to VxFS, and Sun is promoting
> ZFS, and we don't have a reasonable near term option for Raw IO in
> Postgres, so we need to work to find a reasonable path for Solaris users
> IMO. The long delays in ZFS production haven't helped us there, as th
Bruce,
On 4/12/06 12:56 PM, "Bruce Momjian" wrote:
> It is hard to imagine why people spend so much time modifying Sun
> machines to run with acceptable performance when non-Sun operating systems
> work fine without such hurdles.
There are a lot of Solaris customers that we support and that we'd l
Luke Lonergan wrote:
> Alvaro,
>
> On 4/5/06 2:48 PM, "Alvaro Herrera" <[EMAIL PROTECTED]> wrote:
>
> > This essentially means stopping all bgwriter activity, thereby deferring
> > all I/O until checkpoint. Was this considered? With
> > checkpoint_segments to 128, it wouldn't surprise me that there wasn't
> > any checkpoint executed
> > Chris,
> > Just to make sure the x4100 config is similar to your Linux system, can
> > you verify the default setting for disk write cache and make sure they
> > are both enabled or disabled. Here's how to check in Solaris.
> > As root, run "format -e" -> pick a disk -> cache -> write_cache
> >Ok, so I did a few runs for each of the sync methods, keeping all the
> >rest constant and got this:
> >
> >open_datasync 0.7
> >fdatasync 4.6
> >fsync 4.5
> >fsync_writethrough not supported
> >open_sync 0.6
> >
> >in arbitrary units - higher is faster.
> >
Chris Mair wrote:
Ok, so I did a few runs for each of the sync methods, keeping all the
rest constant and got this:
open_datasync 0.7
fdatasync 4.6
fsync 4.5
fsync_writethrough not supported
open_sync 0.6
in arbitrary units - higher is faster.
Quite impressive
Mark, Chris,
> Yeah - looks good! (is the default open_datasync still?). Might be worth
> trying out the fdatasync method too (ISTR this being quite good... again
> on Solaris 8, so things might have changed)!
I was just talking to a member of the Solaris-UFS team who recommended that we
test fdatasync.
Chris,
> Remounting the fs where $PGDATA lives with "forcedirectio"
> (together with logging, that is default) did not help
> (if not harm...) performance.
Not all of PG. JUST pg_xlog. forcedirectio is only a good idea for the xlog.
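A sketch of what "forcedirectio just for pg_xlog" could look like as a Solaris /etc/vfstab entry, assuming a dedicated slice for the WAL directory (all device names hypothetical):

```
#device             device to fsck       mount point       FS   fsck  mount  options
/dev/dsk/c0t1d0s0   /dev/rdsk/c0t1d0s0   /pgdata/pg_xlog   ufs  2     yes    logging,forcedirectio
```

This keeps $PGDATA itself on the normally buffered filesystem while the WAL writes bypass the page cache.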
> Quickly playing around with wal_buffers on Linux and Mac OS
Chris,
On 4/5/06 2:31 PM, "Chris Mair" <[EMAIL PROTECTED]> wrote:
> Doing what http://blogs.sun.com/roller/page/jkshah suggests:
> wal_sync_method = fsync (unchanged)
> wal_buffers = 128 (was 8)
> checkpoint_segments = 128 (was 3)
> bgwriter_all_percent = 0 (was 0.333)
> bgwriter_all_maxpages = 0 (was 5)
> > Yeah - looks good! (is the default open_datasync still?). Might be worth
> > trying out the fdatasync method too (ISTR this being quite good... again
> > on Solaris 8, so things might have changed)!
>
> I was just talking to a member of the Solaris-UFS team who recommended that
> we
> test
Hi,
thanks for all replys.
I've done a few tests.
Remounting the fs where $PGDATA lives with "forcedirectio"
(together with logging, that is default) did not help
(if not harm...) performance.
Doing what http://blogs.sun.com/roller/page/jkshah suggests:
wal_sync_method = fsync (unchanged)
wal_buffers = 128 (was 8)
Chris Mair wrote:
(but note the other mail about wal_sync_method = fsync)
Yeah - looks good! (is the default open_datasync still?). Might be worth
trying out the fdatasync method too (ISTR this being quite good... again
on Solaris 8, so things might have changed)!
Cheers
Mark
> > I've done a few tests.
> >
> > Remounting the fs where $PGDATA lives with "forcedirectio"
> > (together with logging, that is default) did not help
> > (if not harm...) performance.
> >
> >
>
> Sure - forcedirectio on the entire $PGDATA is a definite loss, you only
> want it on $PGDATA/pg_xlog
Alvaro,
On 4/5/06 2:48 PM, "Alvaro Herrera" <[EMAIL PROTECTED]> wrote:
> This essentially means stopping all bgwriter activity, thereby deferring
> all I/O until checkpoint. Was this considered? With
> checkpoint_segments to 128, it wouldn't surprise me that there wasn't
> any checkpoint executed
appears this didn't make it to the list... resending to the list
directly...
---
> > > Doing what http://blogs.sun.com/roller/page/jkshah suggests:
> > > wal_sync_method = fsync (unchanged)
> > > wal_buffers = 128 (was 8)
> > > checkpoint_segments = 128 (was 3)
> > > bgwriter_all_percent = 0 (was 0.333)
Chris Mair wrote:
Hi,
thanks for all replys.
I've done a few tests.
Remounting the fs where $PGDATA lives with "forcedirectio"
(together with logging, that is default) did not help
(if not harm...) performance.
Sure - forcedirectio on the entire $PGDATA is a definite loss, you only
want it on $PGDATA/pg_xlog
> > > Doing what http://blogs.sun.com/roller/page/jkshah suggests:
> > > wal_sync_method = fsync (unchanged)
> > > wal_buffers = 128 (was 8)
> > > checkpoint_segments = 128 (was 3)
> > > bgwriter_all_percent = 0 (was 0.333)
> > > bgwriter_all_maxpages = 0 (was 5)
> > > and leaving everything
Luke Lonergan wrote:
> Chris,
>
> On 4/5/06 2:31 PM, "Chris Mair" <[EMAIL PROTECTED]> wrote:
>
> > Doing what http://blogs.sun.com/roller/page/jkshah suggests:
> > wal_sync_method = fsync (unchanged)
> > wal_buffers = 128 (was 8)
> > checkpoint_segments = 128 (was 3)
> > bgwriter_all_percent = 0 (was 0.333)
Mark,
> I suspect that making a *separate* filesystem for the pg_xlog directory
> and mounting that logging + forcedirectio would be a nice way to also
> get performance while keeping the advantages of logging + file
> buffercache for the *rest* of the postgres components.
> Cheers
Yes, we tested
Chris Mair wrote:
Hi,
I've got a somewhat puzzling performance problem here.
I'm trying to do a few tests with PostgreSQL 8.1.3 under Solaris
(an OS I'm sort of a newbie in).
The machine is a X4100 and the OS is Solaris 10 1/06 fresh install
according to manual. It's got two SAS disks in RAID 1, 4GB of RAM.
Re: [PERFORM] bad performance on Solaris 10
Jignesh’s blog has some of the good stuff in it:
http://blogs.sun.com/roller/page/jkshah
- Luke
On 4/3/06 5:49 PM, "Josh Berkus" wrote:
Chris,
> Eons ago PCs had those "turbo" switches (it was never totally clear
> why they put them there in the first place, anyway).
Chris,
> Eons ago PCs had those "turbo" switches (it was never totally clear
> why they put them there in the first place, anyway). I've this bad
> feeling there's a secret "turbo" switch I can't spot hidden somewhere
> in Solaris :/
Yes. Check out Jignesh's configuration advice.
Hi,
I've got a somewhat puzzling performance problem here.
I'm trying to do a few tests with PostgreSQL 8.1.3 under Solaris
(an OS I'm sort of a newbie in).
The machine is a X4100 and the OS is Solaris 10 1/06 fresh install
according to manual. It's got two SAS disks in RAID 1, 4GB of RAM.
Now
Hi all,
running 7.4.x I still have a problem
with this SELECT, and I have found no solution apart from raising
cpu_tuple_cost to 0.7, so I'm reposting it in the hope of discovering a glitch in
the planner.
# explain analyze select * from v_sc_user_request where
Vitaly Belman <[EMAIL PROTECTED]> writes:
> What am I to do?
Reduce random_page_cost and/or increase effective_cache_size.
regards, tom lane
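Both knobs can be tried per-session before touching postgresql.conf (values illustrative, not from the thread):

```sql
-- Tell the planner random I/O is cheaper (data mostly cached) and that the
-- OS cache is large; both push it toward index scans.
-- Note: on releases before 8.2, effective_cache_size is a number of 8kB
-- pages rather than a unit string.
SET random_page_cost = 2.0;
SET effective_cache_size = '1GB';

EXPLAIN ANALYZE SELECT ... ;  -- re-check the plan with the new costs
```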
Here's the query:
---
SELECT * FROM bv_reviews r, bv_votes v
WHERE r.vote_id = v.vote_id
AND v.book_id = 113
---
bv_votes has around 7000 rows with the
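Beyond the cost-parameter tuning suggested in the thread, the usual structural fix for a join like this (an assumption on my part, not advice from the thread) is to index the filter column and the join key:

```sql
-- Let the planner start from v.book_id = 113 and probe reviews by vote_id.
CREATE INDEX bv_votes_book_id   ON bv_votes (book_id);
CREATE INDEX bv_reviews_vote_id ON bv_reviews (vote_id);
```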
On Wednesday 16 July 2003 at 16:38, Tom Lane wrote:
> Albert Cervera Areny <[EMAIL PROTECTED]> writes:
> > I have a performance problem using postgresql when the connection is
> > made via ODBC with a Windows machine using the latest ODBC drivers
Albert Cervera Areny <[EMAIL PROTECTED]> writes:
> I have a performance problem using postgresql when the connection is made
> via ODBC with a Windows machine using the latest ODBC drivers (Windows) and
> PostgreSQL 7.3.3 (Linux).
Do you have logging turned on in the ODBC driver? I recall he
Hi,
I have a performance problem using postgresql when the connection is made
via ODBC with a Windows machine using the latest ODBC drivers (Windows) and
PostgreSQL 7.3.3 (Linux).
The queries made by my Visual Basic program are very very simple