On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman wrote:
> At present, for reporting, I use the following type of query:
> select crm.*, crm_cnt.cnt
> from crm,
> (select count(*) as cnt from crm) crm_cnt;
> Here the count query is used to find the total number of records.
> The same FROM clause is copied in both the
On MySQL the same query takes only milliseconds, not seconds. That's a
big difference.
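For what it's worth, a window aggregate avoids repeating the FROM clause
entirely; a minimal sketch, assuming the same crm table from the example:

select crm.*, count(*) over () as cnt
from crm;

Every row carries the total count, computed in the same scan, so the
separate count subquery is no longer needed (window functions are
available from 8.4 on).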
-----Original Message-----
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov]
Sent: Tuesday, October 19, 2010 1:59 PM
To: Ozer, Pam; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow Query- Si
On Tue, Oct 19, 2010 at 8:21 PM, Ozer, Pam wrote:
> I have the following query running on 8.4, which takes 3516 ms. It is very
> straightforward. It brings back 116412 records. The explain only takes
> 1348 ms
> "Sort (cost=104491.48..105656.24 rows=116476 width=41) (actual
> time=1288.413..1
"Ozer, Pam" wrote:
> I have the following query running on 8.4, which takes 3516 ms.
> It is very straightforward. It brings back 116412 records. The
> explain only takes 1348 ms
The EXPLAIN ANALYZE doesn't have to return 116412 rows to the
client. It doesn't seem too out of line to me tha
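One way to see where the remaining ~2 seconds go is to time both forms
from psql; a sketch, using a stand-in for the full query in the post:

\timing on
explain analyze select * from vehicleused;  -- server-side execution only
select * from vehicleused;                  -- adds shipping all rows to the client

EXPLAIN ANALYZE stops at the top plan node, so the gap between the two
timings is mostly row transfer and client-side handling.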
I have the following query running on 8.4, which takes 3516 ms. It is
very straightforward. It brings back 116412 records. The explain only
takes 1348 ms
select VehicleUsed.VehicleUsedId as VehicleUsedId ,
VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,
VehicleUsed
On Tue, Oct 19, 2010 at 12:28 PM, Greg Smith wrote:
> Jon Nelson wrote:
>>
>> That's a little harsh (it's not untrue, though).
>>
>
> Welcome to pgsql-performance! You can get a right answer, or a nice answer,
> but given the personalities involved it's hard to get both at the same time.
> With
Jon Nelson wrote:
That's a little harsh (it's not untrue, though).
Welcome to pgsql-performance! You can get a right answer, or a nice
answer, but given the personalities involved it's hard to get both at
the same time. With this crowd, you need to be careful stating
something you were
On Tue, Oct 19, 2010 at 9:36 AM, Tom Lane wrote:
> Jon Nelson writes:
>> This is another situation where using pread would have saved a lot of
>> time and sped things up a bit, but failing that, keeping track of the
>> file position ourselves and only lseek'ing when necessary would also
>> help.
Mladen Gogala wrote:
> How is the query cost calculated? What are
> the units? I/O requests? CPU cycles? Monopoly money?
http://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS
-Kevin
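The one-line summary from that page: costs are expressed in multiples of
seq_page_cost, i.e. units of one sequential page fetch, and the other
constants are scaled relative to it. The current values are easy to
inspect; the comment shows the stock defaults:

select name, setting from pg_settings
where name in ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
               'cpu_index_tuple_cost', 'cpu_operator_cost');
-- defaults: 1, 4, 0.01, 0.005, 0.0025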
Jon Nelson writes:
> This is another situation where using pread would have saved a lot of
> time and sped things up a bit, but failing that, keeping track of the
> file position ourselves and only lseek'ing when necessary would also
> help.
No, it wouldn't; you don't have the slightest idea what
Eric Comeau writes:
> Okay - I'm starting to see statements other than just commits taking
> longer than 5 secs sometimes as well now - the stress test has been
> running for 3 days now... some commits at 17 and 15 secs, ouch...
If it's not just commits then some of the stranger theories go away.
I thi
Scott Carey wrote:
If the cost to hash is 1200493, and it needs to probe the hash 20241 times,
why would the total cost be 631471410? The cost to probe can't be that big!
A cost of 500 to probe and join?
Why favor hashing the large table and probing with the small values rather
than the other
On Tue, Oct 19, 2010 at 1:18 AM, AI Rumman wrote:
> Not actually. I used pagination with a LIMIT clause in the details query,
> and I need the total number of records in the detail query.
Can you use a cursor? Roughly...
BEGIN;
DECLARE x CURSOR FOR SELECT * FROM crm;
MOVE FORWARD ALL IN x;
MOVE BACKWARD ALL IN x;
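The point of MOVE FORWARD ALL here is its command tag: the server replies
with the number of rows it skipped over, which yields the total count
without shipping a single row to the client. After rewinding, pages can be
fetched one at a time; a sketch of the rest, with an arbitrary page size:

FETCH 25 FROM x;  -- first page
FETCH 25 FROM x;  -- second page, and so on
CLOSE x;
COMMIT;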
On Tue, Oct 19, 2010 at 8:25 AM, Merlin Moncure wrote:
> On Tue, Oct 19, 2010 at 9:10 AM, Jon Nelson wrote:
>> No replies?
>>
>> This is another situation where using pread would have saved a lot of
>> time and sped things up a bit, but failing that, keeping track of the
>> file position ourselve
On Tue, Oct 19, 2010 at 9:10 AM, Jon Nelson wrote:
> No replies?
>
> This is another situation where using pread would have saved a lot of
> time and sped things up a bit, but failing that, keeping track of the
> file position ourselves and only lseek'ing when necessary would also
> help. Postgres
No replies?
This is another situation where using pread would have saved a lot of
time and sped things up a bit, but failing that, keeping track of the
file position ourselves and only lseek'ing when necessary would also
help. PostgreSQL was spending 37% of its time in redundant lseek!
--
Jon
On 10-10-18 11:02 AM, Tom Lane wrote:
> Mladen Gogala writes:
>> Tom Lane wrote:
> My guess would be overstressed disk subsystem. A COMMIT doesn't require
> much except fsync'ing the commit WAL record down to disk ...
Doesn't the "commit" statement also release all the locks held by the
transaction
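For what it's worth, when commit latency is dominated by that fsync and
losing the last few transactions on a crash is acceptable (it cannot
corrupt the database), asynchronous commit is the standard knob to try:

SET synchronous_commit = off;  -- per session; also settable in postgresql.conf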
On Oct 18, 2010, at 8:43 PM, Tom Lane wrote:
> Scott Carey writes:
>> I consistently see HashJoin plans that hash the large table, and scan
>> the small table.
>
> Could we see a self-contained test case? And what cost parameters are
> you using, especially work_mem?
I'll see if I can make a
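While waiting on a test case, a common way to sanity-check such plans is
to disable the hash join locally and compare; a sketch with hypothetical
stand-in tables:

begin;
set local enable_hashjoin = off;  -- discourage hash joins, this transaction only
explain analyze
select *
from big_table b
join small_table s on s.id = b.small_id;
rollback;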
Not actually. I used pagination with a LIMIT clause in the details query, and
I need the total number of records in the detail query.
On Mon, Oct 18, 2010 at 9:52 PM, Josh Kupershmidt wrote:
> On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman wrote:
> > At present, for reporting, I use the following type of query: