decibel wrote:
On Mar 10, 2009, at 4:12 PM, Steve McLellan wrote:
The server itself is a dual-core 3.7GHz Xeon Dell (each core reporting 2 logical CPUs) running an amd64 build of FreeBSD 6.2, and postgres 8.3.5 built from source.
Uh, you're running an amd64 build on top of an Intel CPU? I didn't think
On Mar 13, 2009, at 3:02 PM, Jignesh K. Shah wrote:
vmstat seems similar to wakeup some
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re mf pi po fr de sr s0 s1 s2 sd  in  sy  cs us sy id
63 0 0 45535728 38689856 0 14 0  0  0  0  0  0  0  0  0
On Mar 13, 2009, at 8:05 AM, Gregory Stark wrote:
"Jignesh K. Shah" writes:
Scott Carey wrote:
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
In general, I suggest that it is useful to run tests with a few different types of pacing. Zero delay pacing will not have realistic number of conn
On Mar 12, 2009, at 2:22 PM, Jignesh K. Shah wrote:
Something that might be useful for him to report is the avg number of active backends for each data point ...
short of doing select * from pg_stat_activity and removing the IDLE entries, any other clean way to get that information.
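One way to get that count on 8.3, assuming the IDLE entries the poster mentions (idle backends of that era report '<IDLE>' in pg_stat_activity's current_query column; later releases use a separate state column), would be a simple filtered count:

```sql
-- Hedged sketch for 8.3-era PostgreSQL: idle backends show
-- '<IDLE>' in current_query, so exclude them and count the rest.
SELECT count(*) AS active_backends
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';
```

Sampling this periodically alongside each data point would give the average active-backend count being asked for.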
Uh, isn
On Mar 11, 2009, at 10:48 PM, Jignesh K. Shah wrote:
Fair enough.. Well I am now appealing to all who have fairly decent sized hardware and want to try it out, to see whether there are "gains", "no-changes" or "regressions" based on your workload. Also it will help if you report number of c
On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
f...@redhat.com (Frank Ch. Eigler) writes:
For a prepared statement, could the planner produce *several* plans, if it guesses great sensitivity to the parameter values? Then it could choose amongst them at run time.
We've discussed that in the pas
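The parameter sensitivity being described is easy to demonstrate; here is a hedged sketch using a hypothetical table ('orders') with a skewed 'status' column, neither of which appears in the original thread:

```sql
-- A single generic plan chosen at PREPARE time cannot know whether
-- the parameter will be a rare value (index scan wins) or a very
-- common one (sequential scan wins); several candidate plans chosen
-- among at EXECUTE time could cover both cases.
PREPARE count_by_status(text) AS
    SELECT count(*) FROM orders WHERE status = $1;

EXECUTE count_by_status('cancelled');  -- rare value: index scan preferable
EXECUTE count_by_status('shipped');    -- common value: seqscan preferable
```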
On Mar 9, 2009, at 8:36 AM, Mario Splivalo wrote:
Now, as was explained to me on the pg-jdbc mailing list, 'SET enable_seqscan TO false' affects all queries on that persistent connection from Tomcat, and it's not a good solution. So I wanted to post here to ask what other options I have.
FWI
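One option commonly suggested for this situation (not necessarily what the truncated reply goes on to say) is SET LOCAL, which scopes the override to a single transaction so the pooled connection is handed back clean:

```sql
-- SET LOCAL reverts at COMMIT/ROLLBACK, so the persistent Tomcat
-- connection does not keep enable_seqscan disabled afterwards.
-- The table and predicate below are hypothetical placeholders.
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT * FROM some_table WHERE some_indexed_col = 42;
COMMIT;
```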
Heikki Linnakangas writes:
> WALInsertLock is also quite high on Jignesh's list. That I've seen
> become the bottleneck on other tests too.
Yeah, that's been seen to be an issue before. I had the germ of an idea
about how to fix that:
... with no lock, determine size of WAL record ...
Robert Haas writes:
> On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane wrote:
>> I assume you meant effective_io_concurrency. We'd still need a special
>> case because the default is currently hard-wired at 1, not 0, if
>> configure thinks the function exists.
> I think 1 should mean no prefetching,
On Wed, 2009-03-11 at 16:53 -0400, Jignesh K. Shah wrote:
> 1200: 2000: Medium Throughput: -1781969.000 Avg Medium Resp: 0.019
I think you need to iron out bugs in your test script before we put too
much stock into the results generated. Your throughput should not be
negative.
I'd be interested
Tom Lane wrote:
Robert Haas writes:
I think that changing the locking behavior is attacking the problem at
the wrong level anyway.
Right. By the time a patch here could have any effect, you've already
lost the game --- having to deschedule and reschedule a process is a
large cost compared to