On 10/07/2014 04:41 PM, Sergey Konoplev wrote:
On Thu, Oct 2, 2014 at 4:00 PM, George Neuner wrote:
--- code
CREATE OR REPLACE FUNCTION gen_random()
RETURNS double precision AS
$BODY$
DECLARE
num float8 := 0;
den float8 := 281474976710655; -- 0xFFFFFFFFFFFF (2^48 - 1)
On Thu, Oct 2, 2014 at 4:00 PM, George Neuner wrote:
> --- code
> CREATE OR REPLACE FUNCTION gen_random()
> RETURNS double precision AS
> $BODY$
> DECLARE
>num float8 := 0;
>den float8 := 281474976710655; -- 0xFFFFFFFFFFFF (2^48 - 1)
>bytes bytea[6];
> BEGIN
>
Hi,
I am using Postgresql 9.3.5 on Ubuntu and I have a sudden, unexplained
failure in a function that has been working for a long time.
--- code
CREATE OR REPLACE FUNCTION gen_random()
RETURNS double precision AS
$BODY$
DECLARE
num float8 := 0;
den float8 := 281474976710655; -- 0xFFFFFFFFFFFF (2^48 - 1)
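For reference, a minimal sketch of how such a function might be completed, assuming the intent is to assemble a 48-bit random integer from six random bytes (pgcrypto's gen_random_bytes()) and scale it by 2^48 - 1; the loop and variable handling here are illustrative, not the original code from the thread:
CREATE OR REPLACE FUNCTION gen_random()
RETURNS double precision AS
$BODY$
DECLARE
num float8 := 0;
den float8 := 281474976710655; -- 0xFFFFFFFFFFFF = 2^48 - 1
bytes bytea;
BEGIN
bytes := gen_random_bytes(6); -- requires the pgcrypto extension
FOR i IN 0..5 LOOP
num := num * 256 + get_byte(bytes, i); -- accumulate a 48-bit integer
END LOOP;
RETURN num / den; -- scaled into [0, 1]
END;
$BODY$
LANGUAGE plpgsql VOLATILE;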
To whom it may concern,
My question and problem are posted at the site below:
http://stackoverflow.com/questions/23147724/postgresql-9-3-on-ubuntu-server-12-04-v-s-ms-sql-server-2008-r2-on-windows-7-ul
Would you please help me solve my problem?
Thank you for your help.
Regards,
Alex
On 03-03-14 19:24, Eli Naeher wrote:
Hello,
I have two versions of essentially the same query; one using nested
joins, the other using subselects. The version using the subselect is
roughly an order of magnitude faster (~70ms on my box and data vs
~900ms for the nested joins). Of course the o
Hello,
I have two versions of essentially the same query; one using nested joins,
the other using subselects. The version using the subselect is roughly an
order of magnitude faster (~70ms on my box and data vs ~900ms for the
nested joins). Of course the obvious answer here is just to use the fast
Hi Claudio,
Thanks for the help!
Damon
On Mon, Mar 3, 2014 at 8:20 PM, Claudio Freire wrote:
> On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder
> wrote:
> >
> >> Um... I think your problem is a misuse of CTE. Your CTE is building an
> > intermediate of several thousands of rows only to select a d
On Mon, Mar 3, 2014 at 10:12 PM, Damon Snyder wrote:
>
>> Um... I think your problem is a misuse of CTE. Your CTE is building an
> intermediate of several thousands of rows only to select a dozen
> afterwards. You may want to consider a view or subquery, though I'm
> not sure pg will be able to op
Hi Claudio,
See my comments inline below.
> Um... I think your problem is a misuse of CTE. Your CTE is building an
intermediate of several thousands of rows only to select a dozen
afterwards. You may want to consider a view or subquery, though I'm
not sure pg will be able to optimize much given yo
Um... I think your problem is a misuse of CTE. Your CTE is building an
intermediate of several thousands of rows only to select a dozen
afterwards. You may want to consider a view or subquery, though I'm
not sure pg will be able to optimize much given your use of window
functions, which forces a ma
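To illustrate that point (the table, column, and filter values below are hypothetical, not the poster's schema), the difference is roughly between materializing the whole ranked set inside the CTE and letting the planner use an index and stop early in a plain subquery, when the ranking is only used to take the top N:
-- CTE form: the entire ranked set is produced before the outer filter
WITH ranked AS (
SELECT o.*, row_number() OVER (ORDER BY o.score DESC) AS rn
FROM objects o
WHERE o.container_id = 42
)
SELECT * FROM ranked WHERE rn <= 12;
-- ORDER BY/LIMIT form: can use an index on (container_id, score) and stop early
SELECT *
FROM objects o
WHERE o.container_id = 42
ORDER BY o.score DESC
LIMIT 12;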
Hi Claudio,
Thanks for responding. Here is the explain (http://explain.depesz.com/s/W3W)
for the ordering by meta container starting on line 192 (
https://gist.github.com/drsnyder/9277054#file-object-ordering-setup-sql-L192
).
Here is the explain (http://explain.depesz.com/s/d1O) for the ordering
On Fri, Feb 28, 2014 at 5:01 PM, Damon Snyder wrote:
> The primary query that I'm trying to optimize executes in about 1600ms on my
> laptop and about 800ms on production-like hardware (more for the score
> version). My target is to get the data fetch down below 100ms if possible.
Could you post
Hi Everyone,
We have a data set and access pattern that is causing us some performance
issues. The data set is hierarchical with about 2 million rows at the
lowest level (object), followed by 500k at the next level (container) and
approximately 10 at the highest level (category).
The way the data
Hi,
I have a long query that returns an extremely large result set. In my
application, I would like to report the results as they come in, so I am
creating a cursor and fetching 1000 rows at a time. After I declare the cursor
(declare C cursor for), I call "fetch 1000 from C" over and over. Usu
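That pattern in SQL, for reference (the cursor name and the query it wraps are placeholders):
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM c; -- repeat until fewer than 1000 rows come back
FETCH 1000 FROM c;
CLOSE c;
COMMIT;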
Maria L. Wilson wrote:
> Can someone shed some light on the following query.
> any help would certainly be appreciated!
The query text and EXPLAIN ANALYZE output are a good start, but a lot
of other information is needed to really understand the issue.
http://wiki.postgresql.org/wiki/SlowQue
Can someone shed some light on the following query.
any help would certainly be appreciated!
thanks -
Maria Wilson
Nasa/Langley Research Center
Hampton, Virginia
m.l.wil...@nasa.gov
explain analyze
select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp,
a.AIPGUID, a.
@Victor,
Thanks for the information, seems valid to me.
Regards,
Willem Leenen
> Date: Tue, 6 Nov 2012 16:52:01 +0200
> Subject: Re: [PERFORM] help with too slow query
> From: vyego...@gmail.com
> To: willem_lee...@hotmail.com
> CC: p.jime...@ismsolar.com; pg
2012/11/6 Willem Leenen :
> @Victor,
>
> Are the wrong cardinality estimates for the join really due to bad
> statistics? I thought the full table scan happened because the index on
> the time field couldn't be used with this predicate:
>
> time_stamp > date_trunc('month', current_dat
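For what it's worth, a predicate of that shape is still indexable, since the right-hand side is evaluated once per query; a sketch using the column names from the thread (the index name is made up):
CREATE INDEX ism_floatvalues_time_idx ON ism_floatvalues (time_stamp);
EXPLAIN
SELECT * FROM ism_floatvalues
WHERE time_stamp > date_trunc('month', current_date);
-- date_trunc('month', current_date) is a stable expression, so the planner can
-- compare it against a btree index on time_stamp; a full table scan here usually
-- points at row-count estimates rather than an unusable index.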
> Date: Tue, 6 Nov 2012 14:17:07 +0200
> Subject: Re: [PERFORM] help with too slow query
> From: vyego...@gmail.com
> To: p.jime...@ismsolar.com
> CC: pgsql-performance@postgresql.org
>
> 2012/11/6 Pedro Jiménez Pérez
> > Ok, here we go:
> >
> > I'm using postgresql version 8.0
2012/11/6 Pedro Jiménez Pérez
> Ok, here we go:
>
> I'm using postgresql version 8.0
>
> Here is my query that is too slow: http://explain.depesz.com/s/GbQ
Well, I would start with a note, that 8.0 is not supported anymore:
http://www.postgresql.org/support/versioning/
Please, consider upgrading.
Ok, here we go:
I'm using postgresql version 8.0
Here is my query that is too slow: http://explain.depesz.com/s/GbQ
***
EXPLAIN analyze
select round(CAST(sum(var_value) AS nume
Hello,
I have this table definition:
CREATE TABLE ism_floatvalues
(
id_signal bigint NOT NULL, -- Indicates the signal this value belongs
to. Foreign key referencing the id_signal column of the ism_signal
table.
time_stamp
Pedro Jiménez Pérez wrote:
> Sent: Friday, November 02, 2012 1:14 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] help with too slow query
>
> Hello,
> I have this table definition:
> CREATE TABLE ism_floatvalues
> (
> id_signal bigint NOT NULL,
2012/11/2 Pedro Jiménez Pérez
> I have this table definition:
>
1) Could you kindly also include information about the ism_signal and
ism_installation tables?
2) Please, follow this guide to provide more input:
http://wiki.postgresql.org/wiki/Slow_Query_Questions
--
Victor Y. Yegorov
On Thu, Sep 6, 2012 at 6:50 PM, charles_xie wrote:
> Hi,
> Thanks for your advice. I know the basic monitoring skills. Because the
> PostgreSQL database is used for factory production, I hope it can run
> normally and deliver the best possible performance, so I need to look at it
> from that point of view, e.g. threading, locks, and so on.
Hi,
Thanks for your advice. I know the basic monitoring skills. Because the
PostgreSQL database is used for factory production, I hope it can run
normally and deliver the best possible performance, so I need to look at it
from that point of view, e.g. threading, locks, and so on.
Daniel Farina
Also probably some good info to be mined out of postbix.
http://www.zabbix.com/wiki/howto/monitor/db/postbix/monitor_postgres_with_zabbix
On Thu, Sep 6, 2012 at 12:44 PM, Josh Berkus wrote:
> On 9/4/12 12:12 AM, charles_xie wrote:
> > Hi all,
> > i have 5 servers that have been installi
On 9/4/12 12:12 AM, charles_xie wrote:
> Hi all,
> i have 5 servers that have been installing postgresql .In order to
> know the postgresql working status and monitor them ,moreover i don't want
> to use the monitor tools .I want to use the SQL commands to monitoring
> postgresql system
On Tue, Sep 4, 2012 at 12:12 AM, charles_xie wrote:
> Hi all,
> i have 5 servers that have been installing postgresql .In order to
> know the postgresql working status and monitor them ,moreover i don't want
> to use the monitor tools .I want to use the SQL commands to monitoring
> pos
Hi all,
I have 5 servers with PostgreSQL installed. In order to know the
PostgreSQL working status and monitor them, and because I don't want
to use monitoring tools, I want to use SQL commands to monitor the
PostgreSQL system. Please suggest any SQL commands that work successfully.
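A few starting points that use only SQL against the statistics views (a sketch, not a complete monitoring setup; the column names assume a reasonably recent release, since older ones use procpid/current_query in pg_stat_activity):
-- current activity and long-running statements
SELECT pid, state, now() - query_start AS runtime, query
FROM pg_stat_activity
ORDER BY runtime DESC NULLS LAST;
-- lock waits: anything not granted is blocked
SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks
ORDER BY granted, pid;
-- per-database commit/rollback counts and buffer hit ratio inputs
SELECT datname, xact_commit, xact_rollback, blks_read, blks_hit
FROM pg_stat_database;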
Scott Marlowe writes:
> On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson
> wrote:
> This bit:
>> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR
>> INVS
> has both an explicit and an implicit join. This can constrain join
> re-ordering in the planner. Can you change it
On Tue, May 10, 2011 at 2:20 PM, Maria L. Wilson
wrote:
> haven't tested a composite index
>
> invsensor is 2,003,980 rows and 219MB
> granver is 5,138,730 rows and 556MB
> the machine has 32G memory
> seq_page_cost, random_page_cost & effective_cache_size are set to the
> defaults (1, 4, and 128M
On Tue, Apr 5, 2011 at 1:25 PM, Maria L. Wilson
wrote:
This bit:
> left outer join INVENTORY IV on GV.GRANULE_ID = IV.GRANULE_ID, INVSENSOR
> INVS
has both an explicit and an implicit join. This can constrain join
re-ordering in the planner. Can you change it to explicit joins only
and see if
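An explicit-joins-only version might look like the sketch below; the SELECT list and the GRAN_VER alias come from elsewhere in the thread, and the join condition on INVS is assumed for illustration, since the original WHERE clause is cut off in the excerpt:
SELECT MIN(IV.STRTDATE), MAX(IV.ENDDATE)
FROM GRAN_VER GV
LEFT OUTER JOIN INVENTORY IV ON GV.GRANULE_ID = IV.GRANULE_ID
JOIN INVSENSOR INVS ON INVS.GRANULE_ID = IV.GRANULE_ID;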
haven't tested a composite index
invsensor is 2,003,980 rows and 219MB
granver is 5,138,730 rows and 556MB
the machine has 32G memory
seq_page_cost, random_page_cost & effective_cache_size are set to the
defaults (1, 4, and 128MB) - looks like they could be bumped up.
Got any recommendations?
thanks for taking a look at this and it's never too late!!
I've tried bumping up work_mem and did not see any improvements -
All the indexes do exist that you asked see below
Any other ideas?
CREATE INDEX invsnsr_idx1
ON invsensor
USING btree
(granule_id);
CREATE INDEX invsns
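If a composite index is worth testing, the general form is below; the second column is a placeholder, since the excerpt doesn't show which other column the query filters or joins on:
CREATE INDEX invsnsr_idx_composite
ON invsensor
USING btree
(granule_id, sensor_id); -- sensor_id is hypothetical; use the actual second filter column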
[ woops, accidentally replied off-list, trying again ]
On Tue, May 10, 2011 at 1:47 PM, Maria L. Wilson
wrote:
> thanks for taking a look at this and it's never too late!!
>
> I've tried bumping up work_mem and did not see any improvements -
> All the indexes do exist that you asked see
On Tue, Apr 5, 2011 at 3:25 PM, Maria L. Wilson
wrote:
> Would really appreciate someone taking a look at the query below Thanks
> in advance!
>
>
> this is on a linux box...
> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37
> EST 2009 x86_64 x86_64 x86_64 GNU/Linux
yep - we use analyze and check the output. It's version 8.4.5 so no fsm
issues.
thanks, Maria
On 4/6/11 11:33 AM, Kevin Grittner wrote:
"Maria L. Wilson" wrote:
Autovacuum is not running - but regular vacuums are being done
twice daily.
Is the ANALYZE keyword used on those VACUUM runs?
On 6.4.2011 17:33, Kevin Grittner wrote:
> "Maria L. Wilson" wrote:
>
>> Autovacuum is not running - but regular vacuums are being done
>> twice daily.
>
> Is the ANALYZE keyword used on those VACUUM runs? What version of
> PostgreSQL is this? If it's enough to need fsm settings, do you
"Maria L. Wilson" wrote:
> Autovacuum is not running - but regular vacuums are being done
> twice daily.
Is the ANALYZE keyword used on those VACUUM runs? What version of
PostgreSQL is this? If it's enough to need fsm settings, do you run
with the VERBOSE option and check the end of the outp
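The combination being asked about, as a manual maintenance run (the table name is just an example taken from the thread):
VACUUM VERBOSE ANALYZE inventory; -- legacy keyword order, works on 8.x
-- VACUUM (VERBOSE, ANALYZE) inventory; -- parenthesized form on 9.0 and later
-- On old releases the end of the VERBOSE output reports free space map usage,
-- which is what the fsm question above refers to.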
thanks for the reply, Tomas. I'll test bumping up work_mem and see how
that helps.
thanks again, Maria Wilson
On 4/6/11 9:16 AM, t...@fuzzy.cz wrote:
some additional info.
the table inventory is about 4481 MB and also has postgis types.
the table gran_ver is about 523 MB
the table IN
Autovacuum is not running - but regular vacuums are being done twice daily.
indexes on inventory:
CREATE INDEX inven_idx1
ON inventory
USING btree
(inv_id);
CREATE UNIQUE INDEX inven_idx2
ON inventory
USING btree
(granule_id);
indexes on gran_ver:
CREATE UNIQUE INDEX granver_idx1
> some additional info.
> the table inventory is about 4481 MB and also has postgis types.
> the table gran_ver is about 523 MB
> the table INVSENSOR is about 217 MB
>
> the server itself has 32G RAM with the following set in the postgres conf
> shared_buffers = 3GB
> work_mem = 64MB
> maintena
On 5 April 2011 21:25, Maria L. Wilson wrote:
> Would really appreciate someone taking a look at the query below
> Thanks in advance!
>
>
> this is on a linux box...
> Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9 03:27:37
> EST 2009 x86_64 x86_64 x86_64 GNU/Linux
>
> expl
Would really appreciate someone taking a look at the query below
Thanks in advance!
this is on a linux box...
Linux dsrvr201.larc.nasa.gov 2.6.18-164.9.1.el5 #1 SMP Wed Dec 9
03:27:37 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
explain analyze
select MIN(IV.STRTDATE), MAX(IV.ENDDATE)
from G
Hi,
I have found the bug in my code that made the updates go to the same row in
the table instead of two different rows. Now I have all cores up and running
at 100%.
Thank you for all your help.
On Fri, Mar 18, 2011 at 3:21 PM, Kevin Grittner wrote:
> Red Maple wrote:
>
> > Here is my function. If I
Red Maple wrote:
> Here is my function. If I comment out the update then it would run
> all the cores, if not then only one core will run
> CREATE OR REPLACE FUNCTION
> [...]
> select sysuptime
> into this_sysuptime
> from ap_sysuptime
> where ap_id = this_
Hi,
Here is my function. If I comment out the update, it runs on all the
cores; if not, only one core runs.
CREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices
integer, this_sysuptime integer)
RETURNS integer AS
$BODY$
DECLARE
fake_mac m
[rearranged - please don't top-post]
[also, bringing this back to the list - please keep the list copied]
Red Maple wrote:
> Kevin Grittner wrote:
>> It should be parallel by default. Are you taking out any
>> explicit locks?
> my clients use psql to remotely run an update function on the
> Thanks , it works now .. :-)
>
> Here is the output :
>
> pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
> '%2010%' and
> pdc_uima-# content_language='en' and content is not null and
> isprocessable = 1 and
> pdc_uima-# to_tsvector('english',content) @@
> to_tsquery('eng
>From: pgsql-performance-ow...@postgresql.org
>[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Red Maple
>Sent: Friday, March 18, 2011 9:05 AM
>To: pgsql-performance@postgresql.org
>Subject: [PERFORM] Help: massive parallel update to the same table
>
>Hi all
Red Maple wrote:
> Our system has a postgres database that has a table for statistics
> which is updated every hour by about 10K clients. Each client only
> makes updates to its own row in the table. So far I am only seeing
> one core out of eight cores on my server being active, which tells
> me th
Hi all,
Our system has a postgres database that has a table for statistics which is
updated every hour by about 10K clients. Each client only makes updates to its
own row in the table. So far I am only seeing one core out of eight cores on
my server being active, which tells me that the update is bein
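A sketch of why a bug in that area serializes everything: row-level locks queue concurrent writers on the same row, so if every client's UPDATE resolves to the same row(s), only one backend makes progress at a time. The statements below reuse the ap_sysuptime/ap_id names from the function quoted elsewhere in the thread, but the actual bug is not shown there:
-- intended: each client touches only its own row, so clients update in parallel
UPDATE ap_sysuptime SET sysuptime = 123456 WHERE ap_id = 42;
-- buggy shape: the predicate matches every row, so each client's UPDATE
-- waits on the previous transaction's row locks and only one core stays busy
UPDATE ap_sysuptime SET sysuptime = 123456 WHERE ap_id = ap_id;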
On 03/18/2011 12:17 AM, Adarsh Sharma wrote:
Thanks , it works now ..:-)
Here is the output :
pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
'%2010%' and
pdc_uima-# content_language='en' and content is not null and isprocessable = 1
and
pdc_uima-# to_tsvector('engli
Thanks , it works now .. :-)
Here is the output :
pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
'%2010%' and
pdc_uima-# content_language='en' and content is not null and
isprocessable = 1 and
pdc_uima-# to_tsvector('english',content) @@
to_tsquery('english','Mujahid
> Modified Query :-
>
> SELECT count(*) from page_content
> WHERE publishing_date like '%2010%' and content_language='en' and
> content is not null and isprocessable = 1 and
> to_tsvectOR('english',content) @@ to_tsquery('english','Mujahid' ||
> 'jihad' || 'Militant' || 'fedayeen' || 'insurgent'
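One thing worth noting about that modified query: '||' only concatenates the strings into a single search term, so if the intent is to match any of the words, the tsquery OR operator is what's needed. A sketch with the same columns and terms, only the operator changed:
SELECT count(*) FROM page_content
WHERE publishing_date LIKE '%2010%'
AND content_language = 'en'
AND content IS NOT NULL
AND isprocessable = 1
AND to_tsvector('english', content) @@
to_tsquery('english', 'Mujahid | jihad | Militant | fedayeen | insurgent');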
Thanks, I understand it now :-)
But one doubt of mine still isn't clear:
Original Query :-
select count(*) from page_content where (content like '%Militant%'
OR content like '%jihad%' OR content like '%Mujahid%' OR
content like '%fedayeen%' OR content like '%insurgent%' OR content
like '%t
Thanks Marshall, would I need to change the data type of the content
column to tsvector and create a GiST index on it?
Best Regards,
Adarsh
Kenneth Marshall wrote:
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:
Dear all,
I am facing a problem while creating the index to m
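One common approach (whether or not it is exactly what Kenneth suggested) is to keep content as text and build an expression index that full-text queries like the ones above can use; GIN is usually a better fit than GiST for mostly-static text:
CREATE INDEX page_content_fts_idx
ON page_content
USING gin (to_tsvector('english', content));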
On 03/16/2011 05:13 AM, Adarsh Sharma wrote:
Dear all,
I am facing a problem while creating the index to make the below query run
faster. My table size is near about 1065 MB and 428467 rows.
explain analyze select count(*) from page_content where publishing_date like
'%2010%' and content_lang
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:
> Dear all,
>
> I am facing a problem while creating the index to make the below query run
> faster. My table size is near about 1065 MB and 428467 rows.
>
> explain analyze select count(*) from page_content where publishing_date
Dear all,
I am facing a problem while creating the index to make the below query
run faster. My table size is near about 1065 MB and 428467 rows.
explain analyze select count(*) from page_content where
publishing_date like '%2010%' and content_language='en' and content is
not null and i
Pierre-
I agree with your observation of float <-> text conversion costs, but in
this case Dan is talking about storing the raw float data (ie: 4 bytes
per float) in a bytea array so there is only the conversion from java
float[n] to java byte[4*n] which is not nearly as costly as float <->
t
If the data are stored as a byte array but retrieve into a ResultSet,
the unpacking time goes up by an order of magnitude and the
observed total throughput is 25 MB/s. If the data are stored in a
Postgres float array and unpacked into a byte stream, the
observed throughput is 20 MB/s.
fl
Hello Daniel,
We have the same scenario for the native Java arrays, so we are storing bytea
and doing conversion at the client side, but for the server side SQL, plJava
comes very handy:
No sure how you want to create stored procedures to convert internally but this
is how we do this:
One has
Hi,
My name is Dan and I'm a co-worker of Nick Matheson, who initially submitted this question (because the mail group had me blacklisted
for a while for some reason).
Thank you for all of the suggestions. We were able to improve our bulk read performance from 3 MB/s to 60 MB/s (assuming the da
Hey all-
Glad to know you are still interested... ;)
Didn't mean to leave you hanging, the holiday and all have put some
bumps in the road.
Dan my co-worker might be able to post some more detailed information
here, but here is a brief summary of what I am aware of:
1. We have not tested a
BTW, have you tried prepared statements? bytea is most likely faster (in part)
due to less parsing in the backend. Prepared statements would eliminate that
parsing step.
On Dec 14, 2010, at 10:07 AM, Nick Matheson wrote:
> Hey all-
>
> Glad to know you are still interested... ;)
>
> Didn't me
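At the SQL level, the prepared-statement suggestion above looks like the sketch below (the statement name, table, and parameter are placeholders; a JDBC PreparedStatement achieves the same effect from the client side):
PREPARE get_chunk (int) AS
SELECT data FROM bulk_table WHERE chunk_id = $1;
EXECUTE get_chunk(42); -- parsed once; later executions skip that step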
On 12/14/2010 9:41 AM, Jim Nasby wrote:
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:
Is this the same thing Nick is working on? How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
So it is. The one I replied to stood out because no one had replied to it
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:
> Is this the same thing Nick is working on? How'd he get along?
>
> http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
So it is. The one I replied to stood out because no one had replied to it; I
didn't see the earlier email.
--
On 11/1/2010 9:15 AM, Dan Schaffer wrote:
Hello
We have an application that needs to do bulk reads of ENTIRE Postgres
tables very quickly (i.e. select * from table). We have observed that
such sequential scans run two orders of magnitude slower than observed
raw disk reads (5 MB/s versus 100 MB/
On Nov 1, 2010, at 9:15 AM, Dan Schaffer wrote:
> We have an application that needs to do bulk reads of ENTIRE Postgres tables
> very quickly (i.e. select * from table). We have observed that such
> sequential scans run two orders of magnitude slower than observed raw disk
> reads (5 MB/s versu
Hello
We have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have
observed that such sequential scans run two orders of magnitude slower than observed raw disk reads (5 MB/s versus 100 MB/s). Part
of this is due to the storage
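One avenue sometimes used for this kind of whole-table export (not necessarily what was tried here) is COPY, which streams the table without per-row SELECT overhead; the binary format also skips text conversion. The table name is a placeholder:
COPY mytable TO STDOUT (FORMAT binary); -- option syntax on 9.0 and later
COPY mytable TO STDOUT BINARY; -- older spelling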
Eric Comeau writes:
> Okay - I'm starting to see other stmts other than just commits taking
> longer than 5 secs sometimes as well now - stress test has been running
> for 3 days now...some commits 17 and 15 secs ouch...
If it's not just commits then some of the stranger theories go away.
I thi
On 10-10-18 11:02 AM, Tom Lane wrote:
Mladen Gogala writes:
Tom Lane wrote:
My guess would be overstressed disk subsystem. A COMMIT doesn't require
much except fsync'ing the commit WAL record down to disk ...
Doesn't the "commit" statement also release all the locks held by the
transaction
Mladen Gogala writes:
> Tom Lane wrote:
>> My guess would be overstressed disk subsystem. A COMMIT doesn't require
>> much except fsync'ing the commit WAL record down to disk ...
> Doesn't the "commit" statement also release all the locks held by the
> transaction?
Yeah, and there's a nontriv
Tom Lane wrote:
My guess would be overstressed disk subsystem. A COMMIT doesn't require
much except fsync'ing the commit WAL record down to disk ...
Doesn't the "commit" statement also release all the locks held by the
transaction?
--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, N
Eric Comeau writes:
> 2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms
> statement: EXECUTE [PREPARE: COMMIT]
> 2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms
> statement: EXECUTE [PREPARE: COMMIT]
> 2010-10-16 06:06:24 EDT [20740]: [13-1] LOG: duratio
We currently have
log_min_duration_statement = 5000
and are seeing statements like the following logged
2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms
statement: EXECUTE [PREPARE: COMMIT]
2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms
statement: EXE
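One way to narrow that down from the logging side is to turn on checkpoint and lock-wait logging and see whether the slow COMMIT entries line up with either; these are plain postgresql.conf settings, shown as a sketch rather than a diagnosis:
log_min_duration_statement = 5000 # already in place per the post above
log_checkpoints = on # do slow COMMITs coincide with checkpoints?
log_lock_waits = on # or with lock waits longer than deadlock_timeout?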
Brandon Casci writes:
> I have an application hosted on Heroku. They use postgres. It's more or less
> abstracted away, but I can get some performance data from New Relic. For the
> most part, performance is ok, but now and then some queries take a few
> seconds, and spike up to 15 or even 16 seco
Hello
I have an application hosted on Heroku. They use postgres. It's more or less
abstracted away, but I can get some performance data from New Relic. For the
most part, performance is ok, but now and then some queries take a few
seconds, and spike up to 15 or even 16 seconds! Ouch!
This is the
Hey folks
I am trying to do a full table scan on a large table from Java, using a
straightforward "select * from foo". I've run into these problems:
1. By default, the PG JDBC driver attempts to suck the entire result set
into RAM, resulting in *java.lang.OutOfMemoryError* ... this is not cool, i
Thanks for the response, Matthew.
> On Thu, 10 Dec 2009, Mark Stosberg wrote:
> > What I'm noticing is that the while the FreeBSD server has 4 Gigs of
> > memory, there are rarely every more than 2 in use-- the memory use
> > graphs as being rather constant. My goal is to make good use of those 2
Magnus Hagander writes:
> On Wed, Sep 2, 2009 at 00:01, Wei Yan wrote:
>> Looks like after postgres db server reboot, first query is very slow
>> (10+mins). After the system cache built, query is pretty fast.
>> Now the question is how to speed up the first query slow issue?
> Schedule a run of a
On Wed, Sep 2, 2009 at 00:01, Wei Yan wrote:
> Hi:
>
> Looks like after postgres db server reboot, first query is very slow
> (10+mins). After the system cache built, query is pretty fast.
> Now the question is how to speed up the first query slow issue?
>
> Any pointers?
Schedule a run of a coupl
Hi:
Looks like after postgres db server reboot, first query is very slow
(10+mins). After the system cache built, query is pretty fast.
Now the question is how to speed up the first query slow issue?
Any pointers?
Thanks
wei
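A low-tech sketch of that idea, run right after startup so the hot tables are already cached before real users hit them (the table names are placeholders; on 9.4 and later the pg_prewarm extension does this more directly):
SELECT count(*) FROM orders; -- forces a read of the whole heap
SELECT count(*) FROM order_items;
-- CREATE EXTENSION pg_prewarm;
-- SELECT pg_prewarm('orders');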
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote:
> Bryan Murphy writes:
>> What I did was change seq_page_cost back to 1.0 and then changed
>> random_page_cost to 0.5
>
> [ squint... ] It makes no physical sense for random_page_cost to be
> less than seq_page_cost. Please set them the same.
>
Bryan Murphy writes:
> What I did was change seq_page_cost back to 1.0 and then changed
> random_page_cost to 0.5
[ squint... ] It makes no physical sense for random_page_cost to be
less than seq_page_cost. Please set them the same.
regards, tom lane
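A sketch of that adjustment for a mostly-cached database (the values are illustrative and can go in postgresql.conf instead of per-session SET):
SET seq_page_cost = 1.0;
SET random_page_cost = 1.0; -- never lower than seq_page_cost
SET effective_cache_size = '12GB';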
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote:
> I think what you should be doing is messing with the cost parameters
> ... and not in the direction you tried before. I gather from
> effective_cache_size = 12GB
> that you have plenty of RAM on this machine. If the tables involved
> are
Bryan Murphy writes:
> I tried that already, but I decided to try again in case I messed up
> something last time. Here's what I ran. As you can see, it still
> chooses to do a sequential scan. Am I changing the stats for those
> columns correctly?
I think what you should be doing is messing w
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote:
> Oh, I see. It thinks that it'll need to pull 260,000 redundant rows in
> order to get 1800 unique ones. Only it's wrong; you're only pulling about
> 4000.
>
> Try increasing some stats still further: itemexperiencelog.visitorid and
> visitors
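A sketch of "increasing some stats still further" on the named columns, followed by a re-analyze (the target value is arbitrary, and the visitors column is assumed since the excerpt is cut off):
ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 1000;
ALTER TABLE visitors ALTER COLUMN visitorid SET STATISTICS 1000; -- column assumed
ANALYZE itemexperiencelog;
ANALYZE visitors;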
Bryan,
One thing to keep in mind, due to a lapse in our judgement at the
time, this itemexperiencelog table serves as both a current state
table, and a log table. Therefore, it potentially has multiple
redundant entries, but we typically only look at the most recent entry
to figure out the stat
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote:
> On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
>> Looking through our configuration one more time, I see that at some
>> point I set random_page_cost to 2.0, but I don't see any other changes
>> to query planner settings from their defa
On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
> Looking through our configuration one more time, I see that at some
> point I set random_page_cost to 2.0, but I don't see any other changes
> to query planner settings from their default values.
You don't by any chance have enable_ set to "
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus wrote:
> For some reason, your first post didn't make it to the list, which is why
> nobody responded.
Weird... I've been having problems with gmail and google reader all week.
>>> I've got a query on our production system that isn't choosing a good
>
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank wrote:
> There is one thing I don`t understand:
>
> -> Nested Loop (cost=0.00..180564.28 rows=1806
> width=37) (actual time=0.192..60.214 rows=3174 loops=1)
> -> Index Scan using visitors_userid_index2 on
> visitors v
There is one thing I don`t understand:
-> Nested Loop (cost=0.00..180564.28 rows=1806
width=37) (actual time=0.192..60.214 rows=3174 loops=1)
-> Index Scan using visitors_userid_index2 on
visitors v (cost=0.00..2580.97 rows=1300 width=33) (actual
time=0.052..2
Bryan,
I hate to nag, but could anybody help me with this? We have a few
related queries that are causing noticeable service delays in our
production system. I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.
For some reason, your first post
I hate to nag, but could anybody help me with this? We have a few
related queries that are causing noticeable service delays in our
production system. I've tried a number of different things, but I'm
running out of ideas and don't know what to do next.
Thanks,
Bryan
On Mon, Mar 23, 2009 at 2:03