Re: [PERFORM] help: function failing

2014-10-07 Thread Andrew Dunstan
On 10/07/2014 04:41 PM, Sergey Konoplev wrote: On Thu, Oct 2, 2014 at 4:00 PM, George Neuner wrote: --- code CREATE OR REPLACE FUNCTION gen_random() RETURNS double precision AS $BODY$ DECLARE num float8 := 0; den float8 := 281474976710655; -- 0xF

Re: [PERFORM] help: function failing

2014-10-07 Thread Sergey Konoplev
On Thu, Oct 2, 2014 at 4:00 PM, George Neuner wrote: > --- code > CREATE OR REPLACE FUNCTION gen_random() > RETURNS double precision AS > $BODY$ > DECLARE >num float8 := 0; >den float8 := 281474976710655; -- 0x >bytes bytea[6]; > BEGIN >

[PERFORM] help: function failing

2014-10-07 Thread George Neuner
Hi, I am using Postgresql 9.3.5 on Ubuntu and I have a sudden, unexplained failure in a function that has been working for a long time. --- code CREATE OR REPLACE FUNCTION gen_random() RETURNS double precision AS $BODY$ DECLARE num float8 := 0; den floa
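The function body is truncated above. A plausible reconstruction, for illustration only (it assumes pgcrypto's gen_random_bytes(); the denominator 281474976710655 is 2^48 - 1, i.e. six bytes of 0xFF), might look like:

```sql
-- Hypothetical sketch: turn 6 random bytes into a double precision
-- value in [0, 1] by dividing by 2^48 - 1.
CREATE OR REPLACE FUNCTION gen_random()
RETURNS double precision AS
$BODY$
DECLARE
  num float8 := 0;
  den float8 := 281474976710655;  -- 2^48 - 1 (0xFFFFFFFFFFFF)
  bytes bytea;
  i int;
BEGIN
  bytes := gen_random_bytes(6);   -- requires the pgcrypto extension
  FOR i IN 0..5 LOOP
    num := num * 256 + get_byte(bytes, i);
  END LOOP;
  RETURN num / den;
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
```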

[PERFORM] Help on migrating data from MSSQL2008R2 to PostgreSQL 9.3

2014-04-22 Thread Wureka JI
To whom it may concern, My question and problem is posted on below site: http://stackoverflow.com/questions/23147724/postgresql-9-3-on-ubuntu-server-12-04-v-s-ms-sql-server-2008-r2-on-windows-7-ul Would you please help me to solve my problem. Thank you for your help. Alex, regard

Re: [PERFORM] Help me understand why my subselect is an order of magnitude faster than my nested joins

2014-03-18 Thread Vincent
On 03-03-14 19:24, Eli Naeher wrote: Hello, I have two versions of essentially the same query; one using nested joins, the other using subselects. The version using the subselect is roughly an order of magnitude faster (~70ms on my box and data vs ~900ms for the nested joins). Of course the o

[PERFORM] Help me understand why my subselect is an order of magnitude faster than my nested joins

2014-03-05 Thread Eli Naeher
Hello, I have two versions of essentially the same query; one using nested joins, the other using subselects. The version using the subselect is roughly an order of magnitude faster (~70ms on my box and data vs ~900ms for the nested joins). Of course the obvious answer here is just to use the fast

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-04 Thread Damon Snyder
Hi Claudio, Thanks for the help! Damon On Mon, Mar 3, 2014 at 8:20 PM, Claudio Freire wrote: > On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder > wrote: > > > >> Um... I think your problem is a misuse of CTE. Your CTE is building an > > intermediate of several thousands of rows only to select a d

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-03 Thread Claudio Freire
On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder wrote: > >> Um... I think your problem is a misuse of CTE. Your CTE is building an > intermediate of several thousands of rows only to select a dozen > afterwards. You may want to consider a view or subquery, though I'm > not sure pg will be able to op

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-03 Thread Damon Snyder
Hi Claudio, See my comments inline below. > Um... I think your problem is a misuse of CTE. Your CTE is building an intermediate of several thousands of rows only to select a dozen afterwards. You may want to consider a view or subquery, though I'm not sure pg will be able to optimize much given yo

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-03 Thread Claudio Freire
Um... I think your problem is a misuse of CTE. Your CTE is building an intermediate of several thousands of rows only to select a dozen afterwards. You may want to consider a view or subquery, though I'm not sure pg will be able to optimize much given your use of window functions, which forces a ma
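The CTE-versus-subquery point can be sketched like this (table and column names are made up; in PostgreSQL releases of that era a CTE is an optimization fence, so it is materialized in full before the outer filter applies):

```sql
-- The CTE computes row_number() over every row, then discards most of them:
WITH ranked AS (
  SELECT o.id, row_number() OVER (ORDER BY o.score DESC) AS rn
  FROM object o
)
SELECT id FROM ranked WHERE rn <= 12;

-- A plain subquery lets the planner push the limit down instead:
SELECT id
FROM (SELECT o.id FROM object o ORDER BY o.score DESC LIMIT 12) t;
```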

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-03 Thread Damon Snyder
Hi Claudio, Thanks for responding. Here is the explain (http://explain.depesz.com/s/W3W) for the ordering by meta container starting on line 192 ( https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L192 ). Here is the explain (http://explain.depesz.com/s/d1O) for the ordering

Re: [PERFORM] Help with optimizing a query over hierarchical data

2014-03-01 Thread Claudio Freire
On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder wrote: > The primary query that I'm trying to optimize executes in about 1600ms on my > laptop and about 800ms on production-like hardware (more for the score > version). My target is to get the data fetch down below 100ms if possible. Could you post

[PERFORM] Help with optimizing a query over hierarchical data

2014-03-01 Thread Damon Snyder
Hi Everyone, We have a data set and access pattern that is causing us some performance issues. The data set is hierarchical with about 2 million rows at the lowest level (object), followed by 500k at the next level (container) and approximately 10 at the highest level (category). The way the data

[PERFORM] Help with cursor query that is intermittently slow

2013-12-19 Thread Drew Jetter
Hi, I have a long query that returns an extremely large result set. In my application, I would like to report the results as they come in, so I am creating a cursor and fetching 1000 rows at a time. After I declare the cursor (declare C cursor for), I call "fetch 1000 from C" over and over. Usu
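The cursor pattern described is roughly (query name and table are placeholders):

```sql
BEGIN;
DECLARE c CURSOR FOR
  SELECT * FROM big_table;   -- the long-running query goes here
FETCH 1000 FROM c;           -- repeat until no rows are returned
CLOSE c;
COMMIT;
```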

Re: [PERFORM] help on slow query using postgres 8.4

2012-11-20 Thread Kevin Grittner
Maria L. Wilson wrote: > Can someone shed some light on the following query. > any help would certainly be appreciated! The query text and EXPLAIN ANALYZE output are a good start, but a lot of other information is needed to really understand the issue. http://wiki.postgresql.org/wiki/SlowQue

[PERFORM] help on slow query using postgres 8.4

2012-11-19 Thread Maria L. Wilson
Can someone shed some light on the following query. any help would certainly be appreciated! thanks - * Maria Wilson Nasa/Langley Research Center Hampton, Virginia m.l.wil...@nasa.gov * explain analyze select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp, a.AIPGUID, a.

Re: [PERFORM] help with too slow query

2012-11-06 Thread Willem Leenen
@Victor, Thanks for the information, seems valid to me. Regards, Willem Leenen > Date: Tue, 6 Nov 2012 16:52:01 +0200 > Subject: Re: [PERFORM] help with too slow query > From: vyego...@gmail.com > To: willem_lee...@hotmail.com > CC: p.jime...@ismsolar.com; pg

Re: [PERFORM] help with too slow query

2012-11-06 Thread Виктор Егоров
2012/11/6 Willem Leenen : > @Victor, > > Is the reason of the wrong cardinality estimations of the join indeed due to > wrong statistics? I thought that the full table scan was due to the index on > the timefield couldn't be used with this predicate: > > time_stamp > date_trunc('month', current_dat
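For reference, the predicate being discussed is of a form a btree index can in principle serve, since date_trunc over current_date is evaluated once per query rather than per row (a sketch with names from the thread; details assumed):

```sql
-- Hypothetical index on the timestamp column:
CREATE INDEX ism_floatvalues_ts_idx ON ism_floatvalues (time_stamp);

-- Check with EXPLAIN whether the planner chooses it for the range predicate:
EXPLAIN
SELECT *
FROM ism_floatvalues
WHERE time_stamp > date_trunc('month', current_date - interval '1 month');
```

Whether the planner actually prefers the index then depends on its cardinality estimates, which is where the statistics question comes in.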

Re: [PERFORM] help with too slow query

2012-11-06 Thread Willem Leenen
, 6 Nov 2012 14:17:07 +0200 > Subject: Re: [PERFORM] help with too slow query > From: vyego...@gmail.com > To: p.jime...@ismsolar.com > CC: pgsql-performance@postgresql.org > > 2012/11/6 Pedro Jiménez Pérez > > Ok, here we go: > > > > I'm using postgresql vers

Re: [PERFORM] help with too slow query

2012-11-06 Thread Виктор Егоров
2012/11/6 Pedro Jiménez Pérez > Ok, here we go: > > I'm using postgresql version 8.0 > > Here is my query that is too slow: http://explain.depesz.com/s/GbQ Well, I would start with a note, that 8.0 is not supported anymore: http://www.postgresql.org/support/versioning/ Please, consider upgradin

Re: [PERFORM] help with too slow query

2012-11-06 Thread Pedro Jiménez Pérez
Ok, here we go: I'm using postgresql version 8.0 Here is my query that is too slow: http://explain.depesz.com/s/GbQ EXPLAIN analyze select round(CAST(sum(var_value) AS nume

[PERFORM] help with too slow query

2012-11-05 Thread Pedro Jiménez Pérez
Hello, I have this table definition: CREATE TABLE ism_floatvalues (   id_signal bigint NOT NULL, -- Indicates the signal this value belongs to. Foreign key referencing the id_signal column of the ism_signal table.   time_stam

Re: [PERFORM] help with too slow query

2012-11-05 Thread Albe Laurenz
Pedro Jiménez Pérez wrote: > Sent: Friday, November 02, 2012 1:14 PM > To: pgsql-performance@postgresql.org > Subject: [PERFORM] help with too slow query > > Hello, > I have this table definition: > CREATE TABLE ism_floatvalues > ( > id_signal bigint NOT NULL,

Re: [PERFORM] help with too slow query

2012-11-05 Thread Виктор Егоров
2012/11/2 Pedro Jiménez Pérez > I have this table definition: > 1) Could you kindly include also information bout ism_signal and ism_installation tables? 2) Please, follow this guide to provide more input: http://wiki.postgresql.org/wiki/Slow_Query_Questions -- Victor Y. Yegorov

Re: [PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-06 Thread Daniel Farina
On Thu, Sep 6, 2012 at 6:50 PM, charles_xie wrote: > Hi, >Thanks for your advice.i know the basic monitoring skill,because the > postgresql database is used for the factory production , so I hope they can > run normal and exert more perfect performance. so i need to be considered > from the p

Re: [PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-06 Thread charles_xie
Hi, Thanks for your advice. I know the basic monitoring skills. Because the PostgreSQL database is used for factory production, I hope it can run normally and deliver its best performance, so I need to consider it from that point of view, e.g. threading, locks and so on. Daniel Farina

Re: [PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-06 Thread Steven Crandell
Also probably some good info to be mined out of postbix. http://www.zabbix.com/wiki/howto/monitor/db/postbix/monitor_postgres_with_zabbix On Thu, Sep 6, 2012 at 12:44 PM, Josh Berkus wrote: > On 9/4/12 12:12 AM, charles_xie wrote: > > Hi all, > > i have 5 servers that have been installi

Re: [PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-06 Thread Josh Berkus
On 9/4/12 12:12 AM, charles_xie wrote: > Hi all, > i have 5 servers that have been installing postgresql .In order to > know the postgresql working status and monitor them ,moreover i don't want > to use the monitor tools .I want to use the SQL commands to monitoring > postgresql system

Re: [PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-06 Thread Daniel Farina
On Tue, Sep 4, 2012 at 12:12 AM, charles_xie wrote: > Hi all, > i have 5 servers that have been installing postgresql .In order to > know the postgresql working status and monitor them ,moreover i don't want > to use the monitor tools .I want to use the SQL commands to monitoring > pos

[PERFORM] HELP!!!-----Need to Sql commands to monitoring Postgresql

2012-09-05 Thread charles_xie
Hi all, I have 5 servers with PostgreSQL installed. In order to know the PostgreSQL working status and monitor them, and since I don't want to use monitoring tools, I want to use SQL commands to monitor the PostgreSQL system. Please suggest any SQL COMMANDS to work succes
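A few starting points using the built-in statistics views, as the replies suggest (column names as of 9.2; older releases use procpid and current_query in pg_stat_activity):

```sql
-- Per-database activity counters:
SELECT datname, numbackends, xact_commit, xact_rollback, blks_hit, blks_read
FROM pg_stat_database;

-- Currently running queries and how long they have been running:
SELECT pid, state, query, now() - query_start AS runtime
FROM pg_stat_activity;

-- Lock requests that have not yet been granted (possible contention):
SELECT locktype, relation::regclass, mode, pid
FROM pg_locks
WHERE NOT granted;
```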

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-11 Thread Tom Lane
Scott Marlowe writes: > On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson > wrote: > This bit: >> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR >> INVS > has both an explicit and an implicit join. This can constrain join > re-ordering in the planner. Can you change it

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-11 Thread Robert Haas
On Tue, May 10, 2011 at 2:20 PM, Maria L. Wilson wrote: > haven't tested a composite index > > invsensor is 2,003,980 rows and 219MB > granver is 5,138,730 rows and 556MB > the machine has 32G memory > seq_page_cost, random_page_costs & effective_cache_size are set to the > defaults (1,4, and 128M

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-11 Thread Scott Marlowe
On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson wrote: This bit: > left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR > INVS has both an explicit and an implicit join. This can constrain join re-ordering in the planner. Can you change it to explicit joins only and see if
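The suggested rewrite looks roughly like this (the join condition for INVSENSOR is assumed for illustration; the thread does not show it):

```sql
-- Before: explicit LEFT JOIN followed by an implicit comma join,
-- which constrains join reordering in the planner:
--   ... LEFT OUTER JOIN inventory IV ON GV.granule_id = IV.granule_id, invsensor INVS ...

-- After: explicit joins only:
SELECT GV.granule_id
FROM granver GV
LEFT OUTER JOIN inventory IV ON GV.granule_id = IV.granule_id
JOIN invsensor INVS ON INVS.granule_id = GV.granule_id;
```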

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-10 Thread Maria L. Wilson
haven't tested a composite index invsensor is 2,003,980 rows and 219MB granver is 5,138,730 rows and 556MB the machine has 32G memory seq_page_cost, random_page_costs & effective_cache_size are set to the defaults (1,4, and 128MB) - looks like they could be bumped up. Got any recommendations?

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-10 Thread Maria L. Wilson
thanks for taking a look at this and it's never too late!! I've tried bumping up work_mem and did not see any improvements - All the indexes do exist that you asked see below Any other ideas? CREATE INDEX invsnsr_idx1 ON invsensor USING btree (granule_id); CREATE INDEX invsns

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-10 Thread Robert Haas
[ woops, accidentally replied off-list, trying again ] On Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson wrote: > thanks for taking a look at this and it's never too late!! > > I've tried bumping up work_mem and did not see any improvements - > All the indexes do exist that you asked see

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-05-10 Thread Robert Haas
On Tue, Apr 5, 2011 at 3:25 PM, Maria L. Wilson wrote: > Would really appreciate someone taking a look at the query below  Thanks > in advance! > > > this is on a linux box... > Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37 > EST 2009 x86_64 x86_64 x86_64 GNU/Linux

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-07 Thread Maria L. Wilson
yep - we use analyze and check the output. It's version 8.4.5 so no fsm issues. thanks, Maria On 4/6/11 11:33 AM, Kevin Grittner wrote: "Maria L. Wilson" wrote: Autovacuum is not running - but regular vacuums are being done twice daily. Is the ANALYZE keyword used on those VACUUM runs?

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Tomas Vondra
Dne 6.4.2011 17:33, Kevin Grittner napsal(a): > "Maria L. Wilson" wrote: > >> Autovacuum is not running - but regular vacuums are being done >> twice daily. > > Is the ANALYZE keyword used on those VACUUM runs? What version of > PostgreSQL is this. If it's enough to need fsm settings, do you

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Kevin Grittner
"Maria L. Wilson" wrote: > Autovacuum is not running - but regular vacuums are being done > twice daily. Is the ANALYZE keyword used on those VACUUM runs? What version of PostgreSQL is this. If it's enough to need fsm settings, do you run with the VERBOSE option and check the end of the outp

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Maria L. Wilson
thanks for the reply, Tomas. I'll test bumping up work_mem and see how that helps. thanks again, Maria Wilson On 4/6/11 9:16 AM, t...@fuzzy.cz wrote: some additional info. the table inventory is about 4481 MB and also has postgis types. the table gran_ver is about 523 MB the table IN

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Maria L. Wilson
Autovacuum is not running - but regular vacuums are being done twice daily. indexes on inventory: CREATE INDEX inven_idx1 ON inventory USING btree (inv_id); CREATE UNIQUE INDEX inven_idx2 ON inventory USING btree (granule_id); indexes on gran_ver: CREATE UNIQUE INDEX granver_idx1

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread tv
> some additional info. > the table inventory is about 4481 MB and also has postgis types. > the table gran_ver is about 523 MB > the table INVSENSOR is about 217 MB > > the server itself has 32G RAM with the following set in the postgres conf > shared_buffers = 3GB > work_mem = 64MB > maintena

Re: [PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Szymon Guz
On 5 April 2011 21:25, Maria L. Wilson wrote: > Would really appreciate someone taking a look at the query below > Thanks in advance! > > > this is on a linux box... > Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37 > EST 2009 x86_64 x86_64 x86_64 GNU/Linux > > expl

[PERFORM] help speeding up a query in postgres 8.4.5

2011-04-06 Thread Maria L. Wilson
Would really appreciate someone taking a look at the query below Thanks in advance! this is on a linux box... Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37 EST 2009 x86_64 x86_64 x86_64 GNU/Linux explain analyze select MIN(IV.STRTDATE), MAX(IV.ENDDATE) from G

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-22 Thread Red Maple
Hi, I have found the bug in my code that made the update to the same row in the table instead of two different rows. Now I have all cores up and running 100%. Thank you for all your help. On Fri, Mar 18, 2011 at 3:21 PM, Kevin Grittner wrote: > Red Maple wrote: > > > Here is my function. If I

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Kevin Grittner
Red Maple wrote: > Here is my function. If I comment out the update then it would run > all the cores, if not then only one core will run > CREATE OR REPLACE FUNCTION > [...] > select sysuptime > into this_sysuptime > from ap_sysuptime > where ap_id = this_

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Red Maple
Hi, Here is my function. If I comment out the update then it would run all the cores, if not then only one core will run CREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices integer, this_sysuptime integer) RETURNS integer AS $BODY$ DECLARE fake_mac m

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Kevin Grittner
[rearranged - please don't top-post] [also, bringing this back to the list - please keep the list copied] Red Maple wrote: > Kevin Grittner wrote: >> It should be parallel by default. Are you taking out any >> explicit locks? > my clients use psql to remotely run an update function on the

Re: [PERFORM] Help with Query Tuning

2011-03-18 Thread tv
> Thanks , it works now .. :-) > > Here is the output : > > pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like > '%2010%' and > pdc_uima-# content_language='en' and content is not null and > isprocessable = 1 and > pdc_uima-# to_tsvector('english',content) @@ > to_tsquery('eng

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Nicholson, Brad (Toronto, ON, CA)
From: pgsql-performance-ow...@postgresql.org >[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Red Maple >Sent: Friday, March 18, 2011 9:05 AM >To: pgsql-performance@postgresql.org >Subject: [PERFORM] Help: massive parallel update to the same table > >Hi all

Re: [PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Kevin Grittner
Red Maple wrote: > Our system has a postgres database that has a table for statistic > which is updated every hour by about 10K clients. Each client only > make update to its own row in the table. So far I am only seeing > one core out of eight cores on my server being active which tells > me th

[PERFORM] Help: massive parallel update to the same table

2011-03-18 Thread Red Maple
Hi all, Our system has a postgres database that has a table for statistic which is updated every hour by about 10K clients. Each client only make update to its own row in the table. So far I am only seeing one core out of eight cores on my server being active which tells me that the update is bein

Re: [PERFORM] Help with Query Tuning

2011-03-18 Thread Reid Thompson
On 03/18/2011 12:17 AM, Adarsh Sharma wrote: Thanks , it works now ..:-) Here is the output : pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like '%2010%' and pdc_uima-# content_language='en' and content is not null and isprocessable = 1 and pdc_uima-# to_tsvector('engli

Re: [PERFORM] Help with Query Tuning

2011-03-17 Thread Adarsh Sharma
Thanks , it works now .. :-) Here is the output : pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like '%2010%' and pdc_uima-# content_language='en' and content is not null and isprocessable = 1 and pdc_uima-# to_tsvector('english',content) @@ to_tsquery('english','Mujahid

Re: [PERFORM] Help with Query Tuning

2011-03-17 Thread tv
> *Modified Query :- > > *SELECT count(*) from page_content > WHERE publishing_date like '%2010%' and content_language='en' and > content is not null and isprocessable = 1 and > to_tsvectOR('english',content) @@ to_tsquery('english','Mujahid' || > 'jihad' || 'Militant' || 'fedayeen' || 'insurgent'
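Note the pitfall visible in the modified query: SQL's || operator concatenates strings, so 'Mujahid' || 'jihad' || ... builds the single search term 'Mujahidjihadmilitant...'. Inside a tsquery, OR is spelled with '|':

```sql
-- Search for any of the terms, not their concatenation:
SELECT count(*)
FROM page_content
WHERE publishing_date LIKE '%2010%'
  AND content_language = 'en'
  AND content IS NOT NULL
  AND isprocessable = 1
  AND to_tsvector('english', content)
      @@ to_tsquery('english', 'Mujahid | jihad | Militant | fedayeen | insurgent');
```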

Re: [PERFORM] Help with Query Tuning

2011-03-16 Thread Adarsh Sharma
Thanks, I understand it know :- But My one doubt which isn't clear : *Original Query :-* select count(*) from page_content where (content like '%Militant%' OR content like '%jihad%' OR content like '%Mujahid%' OR content like '%fedayeen%' OR content like '%insurgent%' OR content like '%t

Re: [PERFORM] Help with Query Tuning

2011-03-16 Thread Adarsh Sharma
Thanks Marshall, would I need to change the data type of *content *column to tsvector and create a Gist Index on it. Best Regards, Adarsh Kenneth Marshall wrote: On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote: Dear all, I am facing a problem while creating the index to m
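One common setup avoids changing the column type at all by indexing the expression directly (a sketch; GIN is generally preferred over GiST for mostly-static full-text data):

```sql
-- Expression index matching the query's to_tsvector('english', content) call:
CREATE INDEX page_content_fts_idx
    ON page_content
 USING gin (to_tsvector('english', content));
```

The alternative is a stored tsvector column maintained by a trigger, which trades insert/update cost for cheaper queries.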

Re: [PERFORM] Help with Query Tuning

2011-03-16 Thread Reid Thompson
On 03/16/2011 05:13 AM, Adarsh Sharma wrote: Dear all, I am facing a problem while creating the index to make the below query run faster. My table size is near about 1065 MB and 428467 rows. explain analyze select count(*) from page_content where publishing_date like '%2010%' and content_lang

Re: [PERFORM] Help with Query Tuning

2011-03-16 Thread Kenneth Marshall
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote: > Dear all, > > I am facing a problem while creating the index to make the below query run > faster. My table size is near about 1065 MB and 428467 rows. > > explain analyze select count(*) from page_content where publishing_date

[PERFORM] Help with Query Tuning

2011-03-16 Thread Adarsh Sharma
Dear all, I am facing a problem while creating the index to make the below query run faster. My table size is near about 1065 MB and 428467 rows. explain analyze select count(*) from page_content where publishing_date like '%2010%' and content_language='en' and content is not null and i

Re: [PERFORM] Help with bulk read performance

2010-12-17 Thread Nick Matheson
Pierre- I agree with your observation of float <-> text conversion costs, but in this case Dan is talking about storing the raw float data (ie: 4 bytes per float) in a bytea array so there is only the conversion from java float[n] to java byte[4*n] which is not nearly as costly as float <-> t

Re: [PERFORM] Help with bulk read performance

2010-12-16 Thread Pierre C
If the data are stored as a byte array but retrieve into a ResultSet, the unpacking time goes up by an order of magnitude and the observed total throughput is 25 MB/s. If the data are stored in a Postgres float array and unpacked into a byte stream, the observed throughput is 20 MB/s. fl

Re: [PERFORM] Help with bulk read performance

2010-12-16 Thread Krzysztof Nienartowicz
Hello Daniel, We have the same scenario for the native Java arrays, so we are storing bytea and doing conversion at the client side, but for the server side SQL, plJava comes very handy: No sure how you want to create stored procedures to convert internally but this is how we do this: One has

Re: [PERFORM] Help with bulk read performance

2010-12-15 Thread Dan Schaffer
Hi, My name is Dan and I'm a co-worker of Nick Matheson who initially submitted this question (because the mail group had me blacklisted for awhile for some reason). Thank you for all of the suggestions. We were able to improve out bulk read performance from 3 MB/s to 60 MB/s (assuming the da

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Nick Matheson
Hey all- Glad to know you are still interested... ;) Didn't mean to leave you hanging, the holiday and all have put some bumps in the road. Dan my co-worker might be able to post some more detailed information here, but here is a brief summary of what I am aware of: 1. We have not tested a

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Jim Nasby
BTW, have you tried prepared statements? bytea is most likely faster (in part) due to less parsing in the backend. Prepared statements would eliminate that parsing step. On Dec 14, 2010, at 10:07 AM, Nick Matheson wrote: > Hey all- > > Glad to know you are still interested... ;) > > Didn't me
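A server-side prepared statement parses and plans once, then only binds and executes per call (table and column names here are invented for illustration):

```sql
-- Prepare once per session:
PREPARE fetch_blob (int) AS
  SELECT payload FROM blobs WHERE id = $1;

-- Each subsequent call skips the parse step:
EXECUTE fetch_blob(42);
```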

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Andy Colson
On 12/14/2010 9:41 AM, Jim Nasby wrote: On Dec 14, 2010, at 9:27 AM, Andy Colson wrote: Is this the same thing Nick is working on? How'd he get along? http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov So it is. The one I replied to stood out because no one had replied to it

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Jim Nasby
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote: > Is this the same thing Nick is working on? How'd he get along? > > http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov So it is. The one I replied to stood out because no one had replied to it; I didn't see the earlier email. --

Re: [PERFORM] Help with bulk read performance

2010-12-14 Thread Andy Colson
On 11/1/2010 9:15 AM, Dan Schaffer wrote: Hello We have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have observed that such sequential scans run two orders of magnitude slower than observed raw disk reads (5 MB/s versus 100 MB/

Re: [PERFORM] Help with bulk read performance

2010-12-13 Thread Jim Nasby
On Nov 1, 2010, at 9:15 AM, Dan Schaffer wrote: > We have an application that needs to do bulk reads of ENTIRE Postgres tables > very quickly (i.e. select * from table). We have observed that such > sequential scans run two orders of magnitude slower than observed raw disk > reads (5 MB/s versu

[PERFORM] Help with bulk read performance

2010-11-08 Thread Dan Schaffer
Hello We have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have observed that such sequential scans run two orders of magnitude slower than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is due to the storage

Re: [PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-19 Thread Tom Lane
Eric Comeau writes: > Okay - I'm starting to see other stmts other than just commits taking > longer than 5 secs sometimes as well now - stress test has been running > for 3 days now...some commits 17 and 15 secs ouch... If it's not just commits then some of the stranger theories go away. I thi

Re: [PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-19 Thread Eric Comeau
On 10-10-18 11:02 AM, Tom Lane wrote: Mladen Gogala writes: Tom Lane wrote: My guess would be overstressed disk subsystem. A COMMIT doesn't require much except fsync'ing the commit WAL record down to disk ... Doesn't the "commit" statement also release all the locks held by the transaction

Re: [PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-18 Thread Tom Lane
Mladen Gogala writes: > Tom Lane wrote: >> My guess would be overstressed disk subsystem. A COMMIT doesn't require >> much except fsync'ing the commit WAL record down to disk ... > Doesn't the "commit" statement also release all the locks held by the > transaction? Yeah, and there's a nontriv

Re: [PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-18 Thread Mladen Gogala
Tom Lane wrote: My guess would be overstressed disk subsystem. A COMMIT doesn't require much except fsync'ing the commit WAL record down to disk ... Doesn't the "commit" statement also release all the locks held by the transaction? -- Mladen Gogala Sr. Oracle DBA 1500 Broadway New York, N

Re: [PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-18 Thread Tom Lane
Eric Comeau writes: > 2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms > statement: EXECUTE [PREPARE: COMMIT] > 2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms > statement: EXECUTE [PREPARE: COMMIT] > 2010-10-16 06:06:24 EDT [20740]: [13-1] LOG: duratio

[PERFORM] Help with duration of statement: EXECUTE [PREPARE: COMMIT]

2010-10-17 Thread Eric Comeau
We currently have log_min_duration_statement = 5000 and are seeing statements like the following logged 2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms statement: EXECUTE [PREPARE: COMMIT] 2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms statement: EXE

Re: [PERFORM] help with understanding EXPLAIN and boosting performance

2010-10-15 Thread Tom Lane
Brandon Casci writes: > I have an application hosted on Heroku. They use postgres. It's more or less > abstracted away, but I can get some performance data from New Relic. For the > most part, performance is ok, but now and then some queries take a few > seconds, and spike up to 15 or even 16 seco

[PERFORM] help with understanding EXPLAIN and boosting performance

2010-10-15 Thread Brandon Casci
Hello I have an application hosted on Heroku. They use postgres. It's more or less abstracted away, but I can get some performance data from New Relic. For the most part, performance is ok, but now and then some queries take a few seconds, and spike up to 15 or even 16 seconds! Ouch! This is the

[PERFORM] HELP: How to tame the 8.3.x JDBC driver with a biq guery result set

2010-04-20 Thread Dave Crooke
Hey folks I am trying to do a full table scan on a large table from Java, using a straightforward "select * from foo". I've run into these problems: 1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, i

Re: [PERFORM] Help me put 2 Gigs of RAM to use

2009-12-10 Thread Mark Stosberg
Thanks for the response, Matthew. > On Thu, 10 Dec 2009, Mark Stosberg wrote: > > What I'm noticing is that the while the FreeBSD server has 4 Gigs of > > memory, there are rarely every more than 2 in use-- the memory use > > graphs as being rather constant. My goal is to make good use of those 2

Re: [PERFORM] Help: how to speed up query after db server reboot

2009-09-03 Thread Tom Lane
Magnus Hagander writes: > On Wed, Sep 2, 2009 at 00:01, Wei Yan wrote: >> Looks like after postgres db server reboot, first query is very slow >> (10+mins). After the system cache built, query is pretty fast. >> Now the question is how to speed up the first query slow issue? > Schedule a run of a

Re: [PERFORM] Help: how to speed up query after db server reboot

2009-09-03 Thread Magnus Hagander
On Wed, Sep 2, 2009 at 00:01, Wei Yan wrote: > Hi: > > Looks like after postgres db server reboot, first query is very slow > (10+mins). After the system cache built, query is pretty fast. > Now the question is how to speed up the first query slow issue? > > Any pointers? Schedule a run of a coupl

[PERFORM] Help: how to speed up query after db server reboot

2009-09-02 Thread Wei Yan
Hi: Looks like after postgres db server reboot, first query is very slow (10+mins). After the system cache built, query is pretty fast. Now the question is how to speed up the first query slow issue? Any pointers? Thanks wei

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote: > Bryan Murphy writes: >> What I did was change seq_page_cost back to 1.0 and then changed >> random_page_cost to 0.5 > > [ squint... ]  It makes no physical sense for random_page_cost to be > less than seq_page_cost.  Please set them the same. >

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Tom Lane
Bryan Murphy writes: > What I did was change seq_page_cost back to 1.0 and then changed > random_page_cost to 0.5 [ squint... ] It makes no physical sense for random_page_cost to be less than seq_page_cost. Please set them the same. regards, tom lane -- Sent via pgsql
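
Tom's objection is physical: `random_page_cost` models a nonsequential disk page fetch, which can never be cheaper than a sequential one, so setting it below `seq_page_cost` makes index scans look artificially cheap and distorts every plan choice. A deliberately simplified toy model of the planner's arithmetic (the real planner's estimates are far more involved; the page and row counts below are made up) shows the inversion:

```python
def seq_scan_cost(pages, rows, seq_page_cost=1.0, cpu_tuple_cost=0.01):
    # Simplified sequential-scan estimate: every page read in order,
    # plus a small per-tuple CPU charge.
    return pages * seq_page_cost + rows * cpu_tuple_cost

def index_scan_cost(pages_fetched, rows, random_page_cost=4.0, cpu_tuple_cost=0.01):
    # Simplified index-scan estimate: each heap page fetched at random.
    return pages_fetched * random_page_cost + rows * cpu_tuple_cost

# A query that touches half the table's pages via the index:
table_pages, table_rows = 10_000, 1_000_000
hit_pages, hit_rows = 5_000, 50_000

seq = seq_scan_cost(table_pages, table_rows)                       # 20000.0
sane = index_scan_cost(hit_pages, hit_rows, random_page_cost=4.0)  # 20500.0
inverted = index_scan_cost(hit_pages, hit_rows, random_page_cost=0.5)  # 3000.0

print(seq < sane)       # True: with sane costs, the seq scan wins
print(inverted < seq)   # True: random_page_cost=0.5 flips the choice
```

With the default 4.0 the sequential scan is (correctly) slightly cheaper; with random_page_cost below seq_page_cost the index scan appears several times cheaper than it plausibly is, which is exactly the kind of distortion Tom is warning against.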

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote: > I think what you should be doing is messing with the cost parameters > ... and not in the direction you tried before.  I gather from >        effective_cache_size = 12GB > that you have plenty of RAM on this machine.  If the tables involved > are

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Tom Lane
Bryan Murphy writes: > I tried that already, but I decided to try again in case I messed up > something last time. Here's what I ran. As you can see, it still > chooses to do a sequential scan. Am I changing the stats for those > columns correctly? I think what you should be doing is messing w

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote: > Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in > order to get 1800 unique ones.  Only it's wrong; you're only pulling about > 4000. > > Try increasing some stats still further: itemexperiencelog.visitorid and > visitors
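
The suggestion being followed up here is to raise the per-column statistics target so ANALYZE samples enough rows to estimate the duplicate-heavy `visitorid` distribution correctly. A sketch of what that looks like (the `visitors` column name is truncated in the archive, so only the column named in the thread is shown; the target value is illustrative):

```sql
-- Raise the sampling target for the badly estimated column, then re-analyze
-- so the planner sees the new statistics:
ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 1000;
ANALYZE itemexperiencelog;
```

Higher targets make ANALYZE slower and the statistics larger, so this is usually applied per-column rather than via a global `default_statistics_target` bump.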

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Josh Berkus
Bryan, One thing to keep in mind, due to a lapse in our judgement at the time, this itemexperiencelog table serves as both a current state table, and a log table. Therefore, it potentially has multiple redundant entries, but we typically only look at the most recent entry to figure out the stat

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote: > On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote: >> Looking through our configuration one more time, I see that at some >> point I set random_page_cost to 2.0, but I don't see any other changes >> to query planner settings from their defa

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Robert Haas
On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote: > Looking through our configuration one more time, I see that at some > point I set random_page_cost to 2.0, but I don't see any other changes > to query planner settings from their default values. You don't by any chance have enable_ set to "

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus wrote: > For some reason, your first post didn't make it to the list, which is why > nobody responded. Weird... I've been having problems with gmail and google reader all week. >>> I've got a query on our production system that isn't choosing a good >

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank wrote: > There is one thing I don`t understand: > >              ->  Nested Loop  (cost=0.00..180564.28 rows=1806 > width=37) (actual time=0.192..60.214 rows=3174 loops=1) >                    ->  Index Scan using visitors_userid_index2 on > visitors v

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread marcin mank
There is one thing I don`t understand: -> Nested Loop (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1) -> Index Scan using visitors_userid_index2 on visitors v (cost=0.00..2580.97 rows=1300 width=33) (actual time=0.052..2

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Josh Berkus
Bryan, I hate to nag, but could anybody help me with this? We have a few related queries that are causing noticeable service delays in our production system. I've tried a number of different things, but I'm running out of ideas and don't know what to do next. For some reason, your first post

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
I hate to nag, but could anybody help me with this? We have a few related queries that are causing noticeable service delays in our production system. I've tried a number of different things, but I'm running out of ideas and don't know what to do next. Thanks, Bryan On Mon, Mar 23, 2009 at 2:03
