Dimitri wrote:
Now, as you can see from your explanation, Part #2 is the dominant
cost - so instead of blaming this query, why not implement a QUERY
PLANNER CACHE??? - in a way that if any *similar* query is recognized
by the parser we simply *reuse* the same plan?..
At least in JDBC, there are several open
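For reference, PostgreSQL has no shared cross-session plan cache, but
per-session plan reuse already exists through PREPARE/EXECUTE, which is
what JDBC prepared statements map onto. A minimal sketch, assuming the
HISTORY table from this thread with an integer key column (the column
name is a placeholder):

  -- The plan is built once at PREPARE time and reused by every EXECUTE,
  -- skipping the parse/plan cost referred to as Part #2 above.
  PREPARE hist_sel (int) AS
      SELECT * FROM history WHERE id = $1;
  EXECUTE hist_sel(42);
  EXECUTE hist_sel(43);   -- reuses the cached plan
  DEALLOCATE hist_sel;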
On Mon, May 11, 2009 at 8:15 PM, Greg Smith wrote:
> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm is a long
> discussion of just this topic, if you saw a serious change by adjusting
> checkpoint_timeout then further experimentation in this area is likely to
> help you out.
On Mon, 11 May 2009, Dimitri wrote:
I've tried to reduce checkpoint timeout from 5min to 30sec - it helped,
throughput is more stable now, but instead of big waves I now have short
waves anyway..
Tuning for very tiny checkpoints all of the time is one approach here.
The other is to push up c
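For reference, these are the knobs involved in both approaches, as seen
from a session (a sketch assuming 8.3; checkpoint_segments and
checkpoint_timeout are set in postgresql.conf, and the values in the
comments are illustrative, not recommendations):

  SHOW checkpoint_timeout;            -- e.g. 30s gives many tiny checkpoints
  SHOW checkpoint_segments;           -- raising this makes checkpoints rarer
  SHOW checkpoint_completion_target;  -- default 0.5; higher spreads the I/O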
Cory Coager writes:
> Even better yet, if I turn off enable_nestloop the query runs in
> 3499.970 ms:
The reason it prefers a nestloop inappropriately is a mistaken estimate
that some plan node is only going to yield a very small number of rows
(like one or two --- there's not a hard cutoff, but
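A session-local way to reproduce Cory's enable_nestloop test without
changing the application (a sketch: the simplified query stands in for
the full one, and the enable_* switches are a diagnostic, not a
production fix):

  BEGIN;
  SET LOCAL enable_nestloop = off;  -- reverts at COMMIT/ROLLBACK
  EXPLAIN ANALYZE SELECT DISTINCT main.* FROM Tickets main;
  ROLLBACK;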
Dimitri wrote:
> Hi Aidan,
>
> thanks a lot for this detailed summary!
>
> So, why don't I use prepare here: let's say I'm testing the worst
> stress case :-) Imagine you have thousands of queries of this kind -
> you cannot prepare all of them! :-)
Thousands? Surely there'll be a dozen or
Hi Aidan,
thanks a lot for this detailed summary!
So, why don't I use prepare here: let's say I'm testing the worst
stress case :-) Imagine you have thousands of queries of this kind -
you cannot prepare all of them! :-) Or maybe you'll prepare them once,
but as I showed previously in this thread
Hi Simon,
it's too early yet to speak about MySQL scalability... :-)
it's only been a few months since there has been *no more* regression
in MySQL performance when moving from 8 to 16 cores. But looking at how
quickly it's progressing now, things may change very quickly :-)
For the moment on my tests it gives:
I'm running version 8.1.11 on SLES 10 SP2. I'm trying to improve this
query and unfortunately I cannot change the application. For some
reason the planner sometimes makes a bad decision after an analyze
of the table objectcustomfieldvalues.
The query is:
SELECT DISTINCT main.* FROM Tickets
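One way to see what that analyze actually stored, and whether the
estimates it feeds the planner look plausible, is the standard pg_stats
view (a minimal sketch):

  SELECT attname, n_distinct, most_common_vals
  FROM pg_stats
  WHERE tablename = 'objectcustomfieldvalues';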
* Dimitri [090511 11:18]:
> Folks, it's completely crazy, but here is what I found:
>
> - if the HISTORY table is analyzed with target 1000, my SELECT response
> time jumps to 3ms, and the max throughput is limited to 6,000 TPS
> (that's what happened on 8.3.7)
>
> - if the HISTORY table is analyzed w
On Mon, 2009-05-11 at 11:23 -0400, Tom Lane wrote:
> Dimitri writes:
> > Can anyone explain to me why the analyze target can have such a huge
> > negative secondary effect?..
>
> If these are simple queries, maybe what you're looking at is the
> increase in planning time caused by having to process 10x as
On Mon, 2009-05-11 at 17:18 +0200, Dimitri wrote:
> Yes, I forgot: MySQL is reaching 17,500 TPS here.
Please share your measurements of MySQL scalability also.
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
On Mon, May 11, 2009 at 11:18 AM, Dimitri wrote:
> Folks, it's completely crazy, but here is what I found:
>
> - if the HISTORY table is analyzed with target 1000, my SELECT response
> time jumps to 3ms, and the max throughput is limited to 6,000 TPS
> (that's what happened on 8.3.7)
>
> - if the HISTOR
Hi Scott,
good point - the current checkpoint_completion_target default is 0.5,
and it makes sense to set it to 0.8 to make writing smoother - great!
yes, data and xlog are separated, each one sitting on an independent
storage LUN (RAID1), and the storage box is performant enough
Thank you!
Rg
OK, it'll be better to avoid such an improvement :-)
Performance - yes, but not at any price :-)
Thank you!
Rgds,
-Dimitri
On 5/11/09, Kevin Grittner wrote:
> Dimitri wrote:
>
>> What about "full_page_writes"? It seems it's "on" by default. Does it
>> make sense to turn it off?..
>
> It would p
On Mon, May 11, 2009 at 10:31 AM, Dimitri wrote:
> Hi Kevin,
>
> PostgreSQL: 8.3.7 & 8.4
> Server: Sun M5000 32cores
> OS: Solaris 10
>
> current postgresql.conf:
>
> #
> max_connections = 2000 # (change requires restart)
> effective_cache_size = 48
Dimitri wrote:
> What about "full_page_writes"? It seems it's "on" by default. Does it
> make sense to turn it off?..
It would probably help with performance, but the description is a
little disconcerting in terms of crash recovery. We tried running
with it off for a while (a year or so back),
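For reference, the setting is easy to check from a session.
full_page_writes exists to guard against torn pages during crash
recovery, so it is normally left on unless the storage stack guarantees
atomic 8kB page writes:

  SHOW full_page_writes;  -- 'on' by default, and the safe choice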
Thanks a lot, I'll try them all!
Yes, I have a good external storage box with battery backed cache enabled.
There are 64GB of RAM, so I expected increasing the buffer cache would
help a little, but OK, I'll see if it's better with 256MB :-)
What about "full_page_writes" ? seems it's "on" by
Dimitri wrote:
> PostgreSQL: 8.3.7 & 8.4
> Server: Sun M5000 32cores
> OS: Solaris 10
Does that have a battery backed RAID controller? If so, is it
configured for write-back? These both help a lot with smoothing
checkpoint I/O gluts.
We've minimized problems by making the background write
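These are the 8.3 background-writer settings being referred to, with
their defaults noted in the comments (illustrative, not
recommendations):

  SHOW bgwriter_delay;           -- sleep between rounds, default 200ms
  SHOW bgwriter_lru_maxpages;    -- max buffers written per round, default 100
  SHOW bgwriter_lru_multiplier;  -- scales writes to recent demand, default 2.0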
Hi Kevin,
PostgreSQL: 8.3.7 & 8.4
Server: Sun M5000 32cores
OS: Solaris 10
current postgresql.conf:
#
max_connections = 2000 # (change requires restart)
effective_cache_size = 48000MB
shared_buffers = 12000MB
temp_buffers = 200MB
work_mem = 100MB
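A worst-case sanity check on those two settings taken together
(illustrative arithmetic, since work_mem applies per sort/hash node per
backend):

  SHOW max_connections;  -- 2000
  SHOW work_mem;         -- 100MB; 2000 x 100MB = ~195GB in the extreme,
                         -- well beyond the 64GB of RAM mentioned elsewhere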
Hi Tom,
it was not intentional :-)
it just stayed that way after various probes with a query plan.
Anyway, on 8.4 the default target is 100, and just by moving it to 5 I
reached 10,500 TPS on 16 cores instead of 8,000 initially. And I think
you have a good reason to keep it at 100 by default, don't you?
Dimitri wrote:
> what would you suggest as the most optimal postgresql.conf settings to
> keep writing as stable as possible?..
>
> What I want is to avoid "throughput waves" - I want to keep my
> response times stable without any activity holes. I've tried to
> reduce checkpoint timeout from 5min to 30
Hi,
what would you suggest as the most optimal postgresql.conf settings to
keep writing as stable as possible?..
What I want is to avoid "throughput waves" - I want to keep my
response times stable without any activity holes. I've tried to reduce
checkpoint timeout from 5min to 30sec - it helped, throughpu
Paolo Rizzi wrote:
Are you saying that PostgreSQL+PostGIS can actually run on a
smartphone??? Intriguing...
Did anyone ever actually try that???
If it's a supported CPU type and you've got a suitable build toolchain,
sure. Seven or eight years ago we were getting a good laugh out of the
fac
Dimitri writes:
> Can anyone explain to me why the analyze target can have such a huge
> negative secondary effect?..
If these are simple queries, maybe what you're looking at is the
increase in planning time caused by having to process 10x as much
statistical data. Cranking statistics_target to the max ju
Folks, it's completely crazy, but here is what I found:
- if the HISTORY table is analyzed with target 1000, my SELECT response
time jumps to 3ms, and the max throughput is limited to 6,000 TPS
(that's what happened on 8.3.7)
- if the HISTORY table is analyzed with target 5 - my SELECT response time
is
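A narrower alternative to changing the global target is to pin the
statistics target on just the column(s) the HISTORY queries filter on,
then re-analyze (a sketch; the column name is a placeholder):

  ALTER TABLE history ALTER COLUMN ref_id SET STATISTICS 5;
  ANALYZE history;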
A smartphone... you're right, I didn't think of that, but the hardware I
described is very much like that of a modern smartphone!!!
Are you saying that PostgreSQL+PostGIS can actually run on a
smartphone??? Intriguing...
Did anyone ever actually try that???
While the performance of A
Are you saying that PostgreSQL+PostGIS can actually run on a
smartphone??? Intriguing...
Did anyone ever actually try that???
If it's a supported CPU type and you've got a suitable build toolchain,
sure. Seven or eight years ago we were getting a good laugh out of the
fact that you could run