Bob Dusek wrote:
>>> The problem with our "cheap" connection pool is that the persistent
>>> connections don't seem to be available immediately after they're
>>> released by the previous process. pg_close doesn't seem to help the
>>> situation. We understand that pg_close doesn't really close a
Bob Dusek wrote:
> So, pgBouncer is pretty good. It doesn't appear to be as good as
> limiting TCON and using pconnect, but since we can't limit TCON in a
> production environment, we may not have a choice.
It may be worth looking into pgpool, as well. If you have a very
cheap-to-connect-to loca
Robert Haas wrote:
> I'm kind of surprised that there are disk I/O subsystems that are so
> bad that a single thread doing non-stop I/O can take down the whole
> server. Is that normal?
No.
> Does it happen on non-Windows operating
> systems?
Yes. My 3ware 8500-8 on a Debian Sarge box was so
>> The problem with our "cheap" connection pool is that the persistent
>> connections don't seem to be available immediately after they're
>> released by the previous process. pg_close doesn't seem to help the
>> situation. We understand that pg_close doesn't really close a
>> persistent connect
"Jorge Montero" writes:
> The killer seems to be the row aggregation. There are about 95K
> different values of sku_id in the sales table, and even the best
> seller items are a very small percentage of all rows, so expecting the
> aggregation to consolidate the rows 50:1 like it does in one of th
I'm having some performance problems with a few sales reports running on
Postgres 8.3 on Red Hat 4.1.2. The hardware is a bit old, but it performs well
enough. The reports are the typical sales reporting fare: gather the sales of a
time period based on some criteria, aggregate them by product
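As a hedged illustration of the query shape being described (table and column
names are hypothetical, not taken from the thread):

    -- Hypothetical shape of the report query under discussion. With the
    -- ~95K distinct sku_id values mentioned in the reply above, the GROUP BY
    -- consolidates rows at roughly 50:1 in the best case.
    SELECT s.sku_id,
           sum(s.quantity)    AS units_sold,
           sum(s.sale_amount) AS revenue
    FROM   sales s
    WHERE  s.sale_date >= date '2009-12-01'
    AND    s.sale_date <  date '2010-01-01'
    GROUP  BY s.sku_id;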
On Wed, Jan 13, 2010 at 1:10 PM, Bob Dusek wrote:
> And, we pretty much doubled our capacity... from approx 40 "requests"
> per second to approx 80.
Excellent!
> The problem with our "cheap" connection pool is that the persistent
> connections don't seem to be available immediately after they're
FYI - We have implemented a number of changes...
a) some query and application optimizations
b) connection pool (on the cheap: set the max number of clients on the
Postgres server and created a blocking wrapper to pg_pconnect that
will block until it gets a connection; see the sketch below)
c) moved the application server to a
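For reference, a minimal sketch of what such a blocking wrapper might look
like in PHP (the function name, retry interval, and connection string are
hypothetical, not taken from the thread):

    <?php
    // Hedged sketch of a blocking pg_pconnect wrapper (hypothetical helper;
    // names and retry interval are illustrative). Assumes the server's
    // max_connections is the hard cap, so a failed pg_pconnect means the
    // pool is exhausted and we should back off and retry.
    function pg_pconnect_blocking($conninfo, $retry_usec = 10000)
    {
        while (true) {
            $conn = @pg_pconnect($conninfo);  // reuses an idle persistent connection if available
            if ($conn !== false) {
                return $conn;
            }
            usleep($retry_usec);              // server full: wait ~10 ms, then try again
        }
    }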
Greg, I will post more detailed data as soon as I'm able to gather it.
I was testing whether cancellation of the ALTER command worked OK; I might
give the ALTER another try and see how much CPU, RAM and IO usage gets
involved. I will be doing this monitoring with the process explorer from
sysintern
Robert Haas wrote:
I'm kind of surprised that there are disk I/O subsystems that are so
bad that a single thread doing non-stop I/O can take down the whole
server. Is that normal? Does it happen on non-Windows operating
systems? What kind of hardware should I not buy to make sure this
doesn't
On Wed, Jan 13, 2010 at 10:54 AM, Eduardo Piombino wrote:
>
>> OK, I'm not entirely sure this table is not still locking something
>> else. If you make a copy by doing something like:
>>
>> select * into test_table from a;
>>
>> and then alter test_table do you still get the same problems? If so
> OK, I'm not entirely sure this table is not still locking something
> else. If you make a copy by doing something like:
>
> select * into test_table from a;
>
> and then alter test_table do you still get the same problems? If so,
> then it is an IO issue, most likely. If not, then there is som
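Spelled out as a hedged sketch (the specific ALTER is a stand-in; the thread
doesn't show which one was actually being run):

    -- Copy the data into a throwaway table that nothing else references,
    -- then run the same kind of ALTER against the copy. If the copy also
    -- drags the server down, it's most likely an IO problem; if not, the
    -- original slowdown probably involved lock contention.
    SELECT * INTO test_table FROM a;
    ALTER TABLE test_table ALTER COLUMN some_col TYPE bigint;  -- hypothetical ALTER
    DROP TABLE test_table;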
On Wed, Jan 13, 2010 at 11:53 AM, Tom Lane wrote:
> Robert Haas writes:
>> Yeah. My question is whether it's acceptable to add an extra line to
>> the EXPLAIN output for every hash join, even w/o ANALYZE.
>
> We could add it if either VERBOSE or ANALYZE appears. Not sure if
> that's just too mu
Robert Haas writes:
> Yeah. My question is whether it's acceptable to add an extra line to
> the EXPLAIN output for every hash join, even w/o ANALYZE.
We could add it if either VERBOSE or ANALYZE appears. Not sure if
that's just too much concern for backwards compatibility, though.
On Tue, Jan 12, 2010 at 9:59 PM, Eduardo Piombino wrote:
...
> Now, with this experience, I tried a simple workaround.
> Created an empty version of "a" named "a_empty", identical in every sense.
> renamed "a" to "a_full", and "a_empty" to "a". This procedure costed me like
> 0 seconds of downtim
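A hedged sketch of that swap (the LIKE clauses are an assumption about how
"identical in every sense" was achieved; sequences, foreign keys and so on
still need separate handling):

    -- Create an empty clone, then swap the names. Each RENAME needs only a
    -- brief exclusive lock, which is why the downtime was close to zero.
    CREATE TABLE a_empty (LIKE a
        INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
    BEGIN;
    ALTER TABLE a RENAME TO a_full;
    ALTER TABLE a_empty RENAME TO a;
    COMMIT;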
On Wed, Jan 13, 2010 at 11:14 AM, Jaime Casanova wrote:
> On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas wrote:
>> Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?
>> It'll add another line to the output for the expected number of
>> batches.
>
> and when we are in EXPLAIN ANAL
On Wed, Jan 13, 2010 at 2:03 AM, Eduardo Piombino wrote:
> Excellent, lots of useful information in your message.
> I will follow your advice, and keep you posted on any progress. I have yet
> to confirm you with some technical details of my setup, but I'm pretty sure
> you hit the nail in any ca
On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas wrote:
>
> Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?
> It'll add another line to the output for the expected number of
> batches.
>
And when we are in EXPLAIN ANALYZE, the real number as well?
--
Sincerely,
Jaime Casanova
On Wed, Jan 13, 2010 at 10:42 AM, Tom Lane wrote:
> Robert Haas writes:
>> I had an idea at one point of making explain show the planned and
>> actual # of batches for each hash join. I believe that "actual # of
>> batches > 1" is isomorphic to "hash join went to disk". The code is
>> actually
With that said, I assume my current version of pgsql DOES make all this
heavy work go through WAL logging.
The curious thing is that I remember (of course) reviewing the logs from the
crash times, and I didn't see anything strange, not even the famous warning "you
are making checkpoints too often. maybe you
Robert Haas writes:
> I had an idea at one point of making explain show the planned and
> actual # of batches for each hash join. I believe that "actual # of
> batches > 1" is isomorphic to "hash join went to disk". The code is
> actually pretty easy; the hard part is figuring out what to do abo
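A hedged illustration of the idea (the annotation below is speculation about
a proposed feature, not output any released version prints; the table names
are made up):

    EXPLAIN ANALYZE
    SELECT *
    FROM   orders o
    JOIN   customers c ON c.id = o.customer_id;

    --  Hash Join  (cost=...) (actual time=... loops=1)
    --    Hash Cond: (o.customer_id = c.id)
    --    ->  Hash  (cost=...) (actual time=...)
    --          Batches: 4 planned, 16 actual   -- actual > 1: the join went to disk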
On Wed, Jan 13, 2010 at 1:31 AM, Jaime Casanova wrote:
> On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane wrote:
>>
>> Hmm. Not clear where the temp files are coming from, but it's *not* the
>> sort --- the "internal sort ended" line shows that that sort never went
>> to disk. What kind of plan is fee
Jaime Casanova writes:
> why we don't show some of that info in explain?
Lack of round tuits; plus concern about breaking programs that read
EXPLAIN output, which I guess will be alleviated in 8.5.
> the reason I say "most of the temp files" is that when I removed
> #ifdef HJDEBUG it says that i
Eduardo Piombino wrote:
> Maybe it does not get logged at all until the ALTER is completed?
>
This feature [1] was implemented a few months ago and it will be available
only in the next PostgreSQL version (8.5).
[1] http://archives.postgresql.org/pgsql-committers/2009-11/msg00018.php
--
E
On 13/01/2010 3:03 PM, Eduardo Piombino wrote:
One last question about this IO issue I'm facing: do you think it is just a
matter of RAID configuration speed, or a matter of queue gluttony (not
leaving other processes time to get into the IO queue within a
reasonable time)?
Hard to say with the
Yes, one of the things I will do ASAP is migrate to the latest version.
On another occasion I went through the checkpoint parameters you mentioned,
but left them untouched since they seemed logical.
I'm a little reluctant to change the checkpoint configuration just to let
me do a -once in a lif
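For anyone following along, the checkpoint parameters in question can be
inspected with a query like this (a hedged sketch; these are the 8.3-era
parameter names):

    -- Show the checkpoint-related settings under discussion
    -- (checkpoint_segments, checkpoint_timeout,
    --  checkpoint_completion_target, checkpoint_warning).
    SELECT name, setting, unit
    FROM   pg_settings
    WHERE  name LIKE 'checkpoint%';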