On 10/07/10 03:54, Kevin Grittner wrote:
Mark Kirkwood wrote:
Purely out of interest, since the old repo is still there, I had a
quick look at measuring the overhead, using 8.4's pgbench to run
two custom scripts: one consisting of a single 'SELECT 1', the
other having 100 'SELECT 1' - the latter being probably the worst case
Mark Kirkwood wrote:
> Purely out of interest, since the old repo is still there, I had a
> quick look at measuring the overhead, using 8.4's pgbench to run
> two custom scripts: one consisting of a single 'SELECT 1', the
> other having 100 'SELECT 1' - the latter being probably the worst
> case
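For anyone wanting to reproduce that kind of measurement, the setup
described might look roughly like this (file names and run parameters
are illustrative guesses, not Mark's actual ones):

    -- select1.sql: custom script with a single trivial statement
    SELECT 1;
    -- select100.sql: the same statement repeated 100 times
    -- driven with 8.4's pgbench, e.g.:
    --   pgbench -n -f select1.sql -c 8 -T 60 postgres
    --   pgbench -n -f select100.sql -c 8 -T 60 postgres

Comparing tps with and without the admission-control patch then
isolates the per-statement overhead.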
On Fri, Jul 9, 2010 at 12:03 AM, Mark Kirkwood
wrote:
> On 09/07/10 15:57, Robert Haas wrote:
>>
>> Hmm. Well, those numbers seem awfully high, for what you're doing,
>> then. An admission control mechanism that's just letting everything
>> in shouldn't knock 5% off performance (let alone 30%).
On 09/07/10 15:57, Robert Haas wrote:
Hmm. Well, those numbers seem awfully high, for what you're doing,
then. An admission control mechanism that's just letting everything
in shouldn't knock 5% off performance (let alone 30%).
Yeah it does, on the other hand both Josh and I were trying
On Thu, Jul 8, 2010 at 11:00 PM, Mark Kirkwood
wrote:
> On 09/07/10 14:26, Robert Haas wrote:
>>
>> On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
>> wrote:
>>
>>>
>>> Purely out of interest, since the old repo is still there, I had a quick
>>> look at measuring the overhead, using 8.4's pgbench
On 09/07/10 14:26, Robert Haas wrote:
On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
wrote:
Purely out of interest, since the old repo is still there, I had a quick
look at measuring the overhead, using 8.4's pgbench to run two custom
scripts: one consisting of a single 'SELECT 1', the other having 100
'SELECT 1' - the latter being probably the worst case
On Thu, Jul 8, 2010 at 10:21 PM, Mark Kirkwood
wrote:
> Purely out of interest, since the old repo is still there, I had a quick
> look at measuring the overhead, using 8.4's pgbench to run two custom
> scripts: one consisting of a single 'SELECT 1', the other having 100 'SELECT
> 1' - the latter being probably the worst case
On 09/07/10 12:58, Mark Kirkwood wrote:
On 09/07/10 05:10, Josh Berkus wrote:
Simon, Mark,
Actually only 1 lock check per query, but certainly extra processing and
data structures to maintain the pool information... so, yes certainly
much more suitable for DW (AFAIK we never attempted to measure the
additional overhead for non DW workload).
On 09/07/10 05:10, Josh Berkus wrote:
Simon, Mark,
Actually only 1 lock check per query, but certainly extra processing and
data structures to maintain the pool information... so, yes certainly
much more suitable for DW (AFAIK we never attempted to measure the
additional overhead for non DW workload).
Simon, Mark,
Actually only 1 lock check per query, but certainly extra processing and
data structures to maintain the pool information... so, yes certainly
much more suitable for DW (AFAIK we never attempted to measure the
additional overhead for non DW workload).
I recall testing it when the
On 29/06/10 05:36, Josh Berkus wrote:
Having tinkered with it, I'll tell you that (2) is actually a very
hard problem, so any solution we implement should delay as long as
possible in implementing (2). In the case of Greenplum, what Mark did
originally IIRC was to check against the global memory
On Fri, 2010-06-25 at 13:10 -0700, Josh Berkus wrote:
> The problem with centralized resource control
We should talk about the problem of lack of centralized resource control
as well, to balance.
Another well observed problem is that work_mem is user settable, so many
programs acting together wi
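To make the multiplication concrete: work_mem is a per-sort/per-hash
limit, not a per-server one, and any session may raise it. A sketch
(role name and numbers invented):

    SET work_mem = '512MB';                       -- any user can do this
    ALTER ROLE reporting SET work_mem = '256MB';  -- per-role default
    -- 100 backends each running a plan with two sorts can still ask
    -- for roughly 100 * 2 * work_mem in total; no global cap exists.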
On 29/06/10 04:48, Tom Lane wrote:
"Ross J. Reedstrom" writes:
Hmm, I'm suddenly struck by the idea of having a max_cost parameter,
that refuses to run (or delays?) queries that have "too high" a cost.
That's been suggested before, and shot down on the grounds that the
planner's cost estimates are not trustworthy enough to rely on
Jesper Krogh wrote:
> I have not hit any issues with the work_mem being too high, but
> I'm absolutely sure that I could flood the system if they happened
> to be working at the same time.
OK, now that I understand your workload, I agree that a connection
pool at the transaction level won't do
On 2010-06-28 21:24, Kevin Grittner wrote:
Jesper Krogh wrote:
Sorry if I'm asking silly questions, but how do transactions and
connection poolers interact?
That depends a great deal on the pooler and its configuration, as
well as your client architecture. Our shop gathers up the
information needed for our database transaction
Jesper Krogh wrote:
> Sorry if I'm asking silly questions, but how do transactions and
> connection poolers interact?
That depends a great deal on the pooler and its configuration, as
well as your client architecture. Our shop gathers up the
information needed for our database transaction
On 2010-06-25 22:44, Robert Haas wrote:
On Fri, Jun 25, 2010 at 3:52 PM, Kevin Grittner
wrote:
Heck, I think an even *more* trivial admission control policy which
limits the number of active database transactions released to
execution might solve a lot of problems.
That wouldn't have any benefit over what you can already do with a
Josh Berkus wrote:
> We can go back to Kevin's originally proposed simple feature:
> just allowing the DBA to limit the number of concurrently
> executing queries by role and overall.
Well, that's more sophisticated than what I proposed, but it's an
interesting twist on it.
> This would cons
While this does have the advantage of being relatively simple to
implement, I think it would be a bitch to tune...
Precisely. So, there's a number of issues to solve here:
1) We'd need to add accounting for total memory usage to explain plans
(worth doing on its own, really, even without admission control)
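For reference, the closest thing available today is the per-node actual
memory that EXPLAIN ANALYZE reports after the fact (table and column
invented); the proposal is to account for expected totals up front:

    EXPLAIN ANALYZE SELECT * FROM orders ORDER BY customer_id;
    -- the Sort node reports actual usage, e.g.:
    --   Sort Method:  quicksort  Memory: 2048kB
    -- plain EXPLAIN gives no estimate of the plan's total memory use.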
"Ross J. Reedstrom" writes:
> Hmm, I'm suddenly struck by the idea of having a max_cost parameter,
> that refuses to run (or delays?) queries that have "too high" a cost.
That's been suggested before, and shot down on the grounds that the
planner's cost estimates are not trustworthy enough to rely on
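For concreteness, the hypothetical knob (no such GUC exists) might have
been used like this:

    SET max_cost = 1000000;  -- hypothetical GUC from this discussion
    -- a query whose planner-estimated cost exceeded the limit would be
    -- rejected (or delayed) rather than executed:
    SELECT * FROM big_a, big_b;  -- refused if estimated cost > max_cost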
On Sat, Jun 26, 2010 at 01:19:57PM -0400, Robert Haas wrote:
>
> I'm not sure. What does seem clear is that it's fundamentally at odds
> with the "admission control" approach Kevin is advocating. When you
> start to run short on a resource (perhaps memory), you have to decide
> between (a) waiting
On Sat, Jun 26, 2010 at 11:59 AM, Martijn van Oosterhout
wrote:
> On Sat, Jun 26, 2010 at 11:37:16AM -0400, Robert Haas wrote:
>> On Sat, Jun 26, 2010 at 11:03 AM, Martijn van Oosterhout
>> > (It doesn't help in situations where you can't accurately predict
>> > memory usage, like hash tables.)
>>
On Sat, Jun 26, 2010 at 11:37:16AM -0400, Robert Haas wrote:
> On Sat, Jun 26, 2010 at 11:03 AM, Martijn van Oosterhout
> > (It doesn't help in situations where you can't accurately predict
> > memory usage, like hash tables.)
>
> Not sure what you mean by this part. We already predict how much
> memory
On Sat, Jun 26, 2010 at 11:03 AM, Martijn van Oosterhout
wrote:
> On Fri, Jun 25, 2010 at 03:15:59PM -0400, Robert Haas wrote:
>> A
>> refinement might be to try to consider an inferior plan that uses less
>> memory when the system is tight on memory, rather than waiting. But
>> you'd have to be careful about that, because waiting might be better
On Fri, Jun 25, 2010 at 03:15:59PM -0400, Robert Haas wrote:
> A
> refinement might be to try to consider an inferior plan that uses less
> memory when the system is tight on memory, rather than waiting. But
> you'd have to be careful about that, because waiting might be better
> (it's worth waiting
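The planner does already react to work_mem when costing plans, which is
the hook such a refinement would build on; a sketch (table name
invented):

    SET work_mem = '256MB';
    EXPLAIN SELECT * FROM t a JOIN t b USING (id);  -- likely a hash join
    SET work_mem = '1MB';
    EXPLAIN SELECT * FROM t a JOIN t b USING (id);  -- may switch to a merge
                                                    -- join, trading memory for time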
Robert Haas wrote:
> Kevin Grittner wrote:
>> Heck, I think an even *more* trivial admission control policy
>> which limits the number of active database transactions released
>> to execution might solve a lot of problems.
>
> That wouldn't have any benefit over what you can already do with a
On Fri, Jun 25, 2010 at 3:52 PM, Kevin Grittner
wrote:
> Heck, I think an even *more* trivial admission control policy which
> limits the number of active database transactions released to
> execution might solve a lot of problems.
That wouldn't have any benefit over what you can already do with a
connection pooler
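With pgbouncer, for example, that existing option looks roughly like
this (values arbitrary):

    ; pgbouncer.ini
    [databases]
    mydb = host=127.0.0.1 dbname=mydb

    [pgbouncer]
    pool_mode = transaction   ; release the server connection at txn end
    default_pool_size = 20    ; at most 20 active backends per database/user
    max_client_conn = 500     ; further clients connect and simply wait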
On Fri, Jun 25, 2010 at 4:10 PM, Josh Berkus wrote:
> On 6/25/10 12:15 PM, Robert Haas wrote:
>> I think a good admission control system for memory would be huge for
>> us. There are innumerable threads on pgsql-performance where we tell
>> people to set work_mem to a tiny value (like 4MB or 16MB) because any
>> higher value risks driving the machine into swap.
Josh Berkus wrote:
> Greenplum did this several years ago with the Bizgres project
> However, it [was not] compatible with OLTP workloads.
> the "poor man's admission control" is a waste of time because it
> doesn't actually help performance. We're basically facing doing
> the hard version,
On 6/25/10 12:15 PM, Robert Haas wrote:
> I think a good admission control system for memory would be huge for
> us. There are innumerable threads on pgsql-performance where we tell
> people to set work_mem to a tiny value (like 4MB or 16MB) because any
> higher value risks driving the machine into swap.
Robert Haas wrote:
> Kevin Grittner wrote:
>> check out section 2.4 of this
> A really trivial admission control system might let you set a
> system-wide limit on work_mem.
Heck, I think an even *more* trivial admission control policy which
limits the number of active database transactions released to
execution might solve a lot of problems.
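Lacking such a policy in the server, a crude client-side approximation
is possible with advisory locks; purely a sketch (slot count arbitrary):

    -- allow at most 10 expensive transactions to run concurrently:
    SELECT pg_advisory_lock(floor(random() * 10)::int);  -- block until a slot frees
    -- ... run the expensive work ...
    SELECT pg_advisory_unlock_all();                     -- release the slot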
On Fri, Jun 25, 2010 at 1:33 PM, Kevin Grittner
wrote:
> Recent discussions involving the possible benefits of a connection
> pool for certain users has reminded me of a brief discussion at The
> Royal Oak last month, where I said I would post a reference to a
> concept which might alleviate the need for external connection pools.
Recent discussions involving the possible benefits of a connection
pool for certain users has reminded me of a brief discussion at The
Royal Oak last month, where I said I would post a reference to a
concept which might alleviate the need for external connection
pools. For those interested, check out section 2.4 of this paper:
Architecture of a Database System (Joseph M. Hellerstein, Michael
Stonebraker and James Hamilton), Foundations and Trends in Databases 1(2):
http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf
Robert Haas wrote:
> Kevin Grittner wrote:
>> The second tier is implemented to run after a plan is chosen, and
>> may postpone execution of a query (or reduce the resources it is
>> allowed) if starting it at that time might overload available
>> resources.
>
> It seems like it might be helpful, before tackling what you're
> talking about here, to have some better tools for controlling resource
> utilization.
Robert Haas wrote:
> It seems like it might be helpful, before tackling what you're
> talking about here, to have some better tools for controlling
> resource utilization. Right now, the tools we have are pretty crude.
> You can't even nice/ionice a certain backend without risking priority
> inversion.
On Mon, Dec 28, 2009 at 3:33 PM, Kevin Grittner
wrote:
> They describe a two-tier approach, where the first tier is already
> effectively implemented in PostgreSQL with the max_connections and
> superuser_reserved_connections GUCs. The second tier is implemented
> to run after a plan is chosen, and may postpone execution of a query
> (or reduce the resources it is allowed) if starting it at that time
> might overload available resources.
Dimitri Fontaine wrote:
> No, in session pooling you get the same backend connection for the
> entire pgbouncer connection, it's a 1-1 mapping.
Right -- so it doesn't allow more logical connections than that with
a limit to how many are active at any one time, *unless* the clients
cooperate by
On 28 Dec 2009, at 23:56, Kevin Grittner wrote:
>> http://preprepare.projects.postgresql.org/README.html
>
> I just reviewed the documentation for preprepare -- I can see a use
> case for that, but I really don't think it has a huge overlap with
> my point. The parsing and planning mentioned in
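For reference, what preprepare automates is the ordinary per-session
two-step flow (statement name and query invented):

    -- normally issued once per connection; preprepare runs it for you:
    PREPARE get_order (int) AS
        SELECT * FROM orders WHERE order_id = $1;
    -- later calls skip the parse/plan steps:
    EXECUTE get_order(42);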
Dimitri Fontaine wrote:
> On 28 Dec 2009, at 22:59, Kevin Grittner wrote:
>> (3) With the ACP, the statements would be parsed and optimized
>> before queuing, so they would be "ready to execute" as soon as a
>> connection was freed.
>
> There's a pgfoundry project called preprepare, which can
On 28 Dec 2009, at 23:35, Kevin Grittner wrote:
> So the application would need to open and close a pgbouncer
> connection for each database transaction in order to share the
> backend properly?
No, in session pooling you get the same backend connection for the entire
pgbouncer connection, it's a 1-1 mapping.
Dimitri Fontaine wrote:
> That's why there's both transaction and session pooling. The
> benefit of session pooling is to avoid forking backends, reusing
> them instead, and you still get the pooling control.
So the application would need to open and close a pgbouncer
connection for each database transaction in order to share the
backend properly?
On 28 Dec 2009, at 22:59, Kevin Grittner wrote:
> With my current knowledge of pgbouncer I can't answer that
> definitively; but *if* pgbouncer, when configured for transaction
> pooling, can queue new transaction requests until a connection is
> free, then the differences would be:
It does that
Dimitri Fontaine wrote:
> On 28 Dec 2009, at 21:33, Kevin Grittner wrote:
>> We often see posts from people who have more active connections
>> than is efficient.
>
> How would your proposal better solve the problem than using
> pgbouncer?
With my current knowledge of pgbouncer I can't answer that
definitively; but *if* pgbouncer, when configured for transaction
pooling, can queue new transaction requests until a connection is
free, then the differences would be:
On 28 Dec 2009, at 22:46, Andres Freund wrote:
>>
>> I'd be in favor of considering how to get pgbouncer into -core, and now
>> that we have Hot Standby maybe implement a mode in which as soon as a
>> "real" XID is needed, or maybe upon receiving start transaction read write
>> command, the connection
On Monday 28 December 2009 22:39:06 Dimitri Fontaine wrote:
> Hi,
>
> On 28 Dec 2009, at 21:33, Kevin Grittner wrote:
> > We often see posts from people who have more active connections than
> > is efficient.
>
> How would your proposal better solve the problem than using pgbouncer?
>
>
> I'd be in favor of considering how to get pgbouncer into -core, and now
> that we have Hot Standby maybe implement a mode in which as soon as a
> "real" XID is needed, or maybe upon receiving start transaction read write
> command, the connection
Hi,
On 28 Dec 2009, at 21:33, Kevin Grittner wrote:
> We often see posts from people who have more active connections than
> is efficient.
How would your proposal better solve the problem than using pgbouncer?
I'd be in favor of considering how to get pgbouncer into -core, and now that
we have Hot Standby maybe implement a mode in which as soon as a "real" XID
is needed, or maybe upon receiving start transaction read write command, the
connection
This paper has a brief but interesting discussion of Admission
Control in section 2.4:
Architecture of a Database System (Joseph M. Hellerstein, Michael
Stonebraker and James Hamilton), Foundations and Trends in Databases
1(2).
http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf
They describe a two-tier approach, where the first tier is already
effectively implemented in PostgreSQL with the max_connections and
superuser_reserved_connections GUCs. The second tier is implemented
to run after a plan is chosen, and may postpone execution of a query
(or reduce the resources it is allowed) if starting it at that time
might overload available resources.