On Wed, Feb 08, 2012 at 10:41:43PM -0500, Greg Smith wrote:
> Just trying to set the expectations bar realistically here. I don't
> consider the easier problem of checkpoint smoothing a solved one,
> either. Saying autovacuum needs to reach even that level of
> automation to be a useful improvement...
On 02/08/2012 11:22 AM, Bruce Momjian wrote:
Why can't vacuum handle things automatically like checkpoint smoothing?
Why can't it detect when it is falling behind and speed up? Why can't
it see a busy background writer and slow down? Unless we answer
these questions, we are not solving the problem...
On Wed, Feb 8, 2012 at 11:22 AM, Bruce Momjian wrote:
> What we have now just isn't cutting it for 99% of our users, and we need
> to address that if we are going to ever make any real headway here.
>
> Why can't vacuum handle things automatically like checkpoint smoothing?
> Why can't it detect when it is falling behind and speed up?...
On Wed, Feb 08, 2012 at 09:56:17AM -0500, Robert Haas wrote:
> > This is all fine, but what does it have to do with the current patch? I
> > mean, if we change vacuum to do some stuff differently, it's still going
> > to have to read and dirty pages and thus account for I/O.
>
> Yeah, I drifted off...
On Wed, Feb 8, 2012 at 9:38 AM, Alvaro Herrera
wrote:
> I think that (part of) the underlying problem is that we have no clear
> way to specify "how much I/O do you want autovacuum to use". That's
> what this patch is all about, AFAIU; it has nothing to do with
> monitoring. Right now, as has been...
Excerpts from Bruce Momjian's message of Wed Feb 08 00:58:58 -0300 2012:
> As much as I hate to pooh-pooh a patch addition, I have to agree with
> Robert Haas on this one. Renaming settings really isn't moving us
> forward. It introduces a migration problem and really doesn't move us
> forward in
On Thu, Jan 19, 2012 at 09:42:52PM -0500, Robert Haas wrote:
> I certainly didn't intend to come across as disparaging your work on
> this topic. I understand that there are big problems with the way
> things work now; I'm just cautious about trying to replace them too
> hastily with something that...
On Thu, Jan 19, 2012 at 05:39:41PM -0500, Greg Smith wrote:
> On 1/19/12 1:10 PM, Robert Haas wrote:
> >I have to say that I find that intensely counterintuitive. The
> >current settings are not entirely easy to tune correctly, but at least
> >they're easy to explain.
>
> I attempt to explain those settings...
On Sun, Jan 15, 2012 at 12:24 AM, Greg Smith wrote:
> So far the reaction I've gotten from my recent submission to make autovacuum
> log its read/write in MB/s has been rather positive. I've been surprised at
> the unprecedented (to me at least) amount of backporting onto big production
> systems
On Sun, Jan 15, 2012 at 12:24 AM, Greg Smith wrote:
> If you then turn that equation around, making the maximum write rate the
> input, for any given cost delay and dirty page cost you can solve for the
> cost limit--the parameter in fictitious units everyone hates. It works like
> this, with the...
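Greg's inversion is truncated in the preview above, but the arithmetic it describes can be sketched. A minimal sketch, assuming the standard cost model (vacuum sleeps cost_delay milliseconds each time accumulated page costs reach cost_limit, with 8 KB pages); the function name is illustrative, not from the patch:

```python
# Sketch of the "turn the equation around" idea: make the maximum write
# rate the input, and solve for vacuum_cost_limit (the parameter in
# fictitious units). Assumes vacuum sleeps cost_delay ms whenever
# accumulated cost reaches cost_limit, and that pages are 8 KB.

def cost_limit_for_write_rate(mbps, cost_delay_ms, page_dirty_cost, page_kb=8):
    """Solve for vacuum_cost_limit given a target max dirty-write rate."""
    pages_per_sec = mbps * 1024.0 / page_kb             # pages dirtied per second
    per_round = pages_per_sec * cost_delay_ms / 1000.0  # pages per sleep cycle
    return per_round * page_dirty_cost                  # cost units per cycle

# Inverting a 4 MB/s target with the default 20 ms delay and dirty cost 20
# lands close to the stock vacuum_cost_limit of 200:
print(cost_limit_for_write_rate(4.0, 20, 20))   # 204.8
```

That the stock defaults fall out of a round 4 MB/s target is what makes the MB/s framing attractive as the primary knob.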
On 01/23/2012 11:16 PM, Robert Treat wrote:
I keep thinking Greg has mistaken happiness with the MB based info in
the vacuum patch as being happy with the output format, when really
it is all about increased visibility.
I haven't taken that as anything but evidence I'm at least moving in the right direction...
On Mon, Jan 23, 2012 at 3:21 AM, Benedikt Grundmann
wrote:
> On 19/01/12 17:39, Greg Smith wrote:
>> On 1/19/12 1:10 PM, Robert Haas wrote:
>> >I have to say that I find that intensely counterintuitive. The
>> >current settings are not entirely easy to tune correctly, but at least
>> >they're easy to explain.
Benedikt Grundmann wrote:
What I think is missing is a clear way to know if you are vacuuming
(and analyzing) enough, and how much you are paying for that.
Any good way to measure if you're vacuuming a particular table enough
needs to note how much free space is in that table and its indexes...
On 19/01/12 17:39, Greg Smith wrote:
> On 1/19/12 1:10 PM, Robert Haas wrote:
> >I have to say that I find that intensely counterintuitive. The
> >current settings are not entirely easy to tune correctly, but at least
> >they're easy to explain.
>
> If there's anyone out there who has run a large
On Sat, Jan 21, 2012 at 5:54 PM, Jim Nasby wrote:
> What would be interesting is if we could monitor how long all *foreground* IO
> requests took. If they start exceeding some number, that means the system is
> at or near full capacity, and we'd like background stuff to slow down.
There's something...
Jim Nasby wrote:
Your two comments together made me realize something... at the end of the day
people don't care about MB/s. They care about impact to other read and write
activity in the database.
What would be interesting is if we could monitor how long all *foreground* IO
requests took. If they start exceeding some number, that means the system is
at or near full capacity...
On Jan 19, 2012, at 4:23 PM, Greg Smith wrote:
> On 1/18/12 4:18 PM, Jim Nasby wrote:
>> What about doing away with all the arbitrary numbers completely, and just
>> state data rate limits for hit/miss/dirty?
>
> Since many workloads will have a mix of all three, it still seems like
> there's some need for weighing these individually...
On Fri, Jan 20, 2012 at 12:35 PM, Greg Smith wrote:
> On 01/20/2012 10:37 AM, Robert Haas wrote:
>>
>> On Thu, Jan 19, 2012 at 11:29 PM, Greg Smith wrote:
>>>
>>> vacuum_cost_page_hit = 0.1
>>>
>>> vacuum_cost_page_miss = 1.0
>>> vacuum_cost_page_dirty = 2.0
>>>
>>> Now add in the new setting, which is explicitly said to be the read value...
On 01/20/2012 10:37 AM, Robert Haas wrote:
On Thu, Jan 19, 2012 at 11:29 PM, Greg Smith wrote:
vacuum_cost_page_hit = 0.1
vacuum_cost_page_miss = 1.0
vacuum_cost_page_dirty = 2.0
Now add in the new setting, which is explicitly said to be the read value:
vacuum_cost_read_limit = 8000 # maximum
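The units of vacuum_cost_read_limit are cut off in the preview above, so the following is only a sketch of how the rescaled weights would behave, assuming the limit is a budget of miss-equivalent cost units per second; the helper name and the assumption are both illustrative:

```python
# Sketch of how the rescaled weights (miss normalized to 1.0) would
# throttle a mixed workload. Assumes vacuum_cost_read_limit is a budget
# of miss-equivalent cost units per second (the actual units are cut off
# in the preview above).

WEIGHTS = {"hit": 0.1, "miss": 1.0, "dirty": 2.0}

def effective_pages_per_sec(read_limit, mix):
    """mix maps 'hit'/'miss'/'dirty' to fractions summing to 1.0."""
    avg_cost = sum(WEIGHTS[k] * frac for k, frac in mix.items())
    return read_limit / avg_cost

# All misses: the budget translates directly into pages per second.
print(effective_pages_per_sec(8000, {"hit": 0.0, "miss": 1.0, "dirty": 0.0}))  # 8000.0
# A cache-friendly pass (90% hits, 10% dirty) is allowed to move much faster.
print(effective_pages_per_sec(8000, {"hit": 0.9, "miss": 0.0, "dirty": 0.1}))
```

This is the point raised in the replies: one read-oriented limit still weighs hits, misses, and dirties individually, since real workloads mix all three.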
On Thu, Jan 19, 2012 at 11:29 PM, Greg Smith wrote:
> I chewed a bit on Heikki's comment that similarity to the query planning
> parameters might be useful, and Robert's that being able to explain how the
> feature works more easily has value. I have an initial adjustment of my
> general idea that I think moves usefully in both those directions...
I chewed a bit on Heikki's comment that similarity to the query planning
parameters might be useful, and Robert's that being able to explain how
the feature works more easily has value. I have an initial adjustment
of my general idea that I think moves usefully in both those directions.
The e
On Thu, Jan 19, 2012 at 5:39 PM, Greg Smith wrote:
> On 1/19/12 1:10 PM, Robert Haas wrote:
>> I have to say that I find that intensely counterintuitive. The
>> current settings are not entirely easy to tune correctly, but at least
>> they're easy to explain.
>
> I attempt to explain those settings...
On 1/19/12 1:10 PM, Robert Haas wrote:
I have to say that I find that intensely counterintuitive. The
current settings are not entirely easy to tune correctly, but at least
they're easy to explain.
I attempt to explain those settings to people in training classes about
once a month. It's never...
On 1/18/12 4:18 PM, Jim Nasby wrote:
What about doing away with all the arbitrary numbers completely, and just state
data rate limits for hit/miss/dirty?
Since many workloads will have a mix of all three, it still seems like
there's some need for weighing these individually, even if they each
Excerpts from Simon Riggs's message of Thu Jan 19 16:05:36 -0300 2012:
> On Sun, Jan 15, 2012 at 9:17 PM, Heikki Linnakangas
> wrote:
>
> > I think it makes more sense to use the max read rate as the main knob,
> > rather than write rate. That's because the max read rate is higher than the
> > write rate, when you don't need to dirty pages...
On Sun, Jan 15, 2012 at 9:17 PM, Heikki Linnakangas
wrote:
> I think it makes more sense to use the max read rate as the main knob,
> rather than write rate. That's because the max read rate is higher than the
> write rate, when you don't need to dirty pages. Or do you think saturating
> the I/O system with writes...
On Sun, Jan 15, 2012 at 4:17 PM, Heikki Linnakangas
wrote:
> +1. I've been thinking we should do that for a long time, but haven't gotten
> around to it.
>
> I think it makes more sense to use the max read rate as the main knob,
> rather than write rate. That's because the max read rate is higher than the write rate...
On Jan 18, 2012, at 3:49 AM, Greg Smith wrote:
> On 01/17/2012 09:00 PM, Jim Nasby wrote:
>> Could we expose both?
>>
>> On our systems writes are extremely cheap... we don't do a ton of them
>> (relatively speaking), so they tend to just fit into BBU cache. Reads on the
>> other hand are a lot more expensive, at least if they end up actually hitting disk...
On 01/17/2012 09:00 PM, Jim Nasby wrote:
Could we expose both?
On our systems writes are extremely cheap... we don't do a ton of them
(relatively speaking), so they tend to just fit into BBU cache. Reads on the
other hand are a lot more expensive, at least if they end up actually hitting
disk
On Jan 15, 2012, at 8:13 PM, Greg Smith wrote:
> On 01/15/2012 04:17 PM, Heikki Linnakangas wrote:
>> I think it makes more sense to use the max read rate as the main knob,
>> rather than write rate. That's because the max read rate is higher than the
>> write rate, when you don't need to dirty pages...
On 01/15/2012 04:17 PM, Heikki Linnakangas wrote:
I think it makes more sense to use the max read rate as the main knob,
rather than write rate. That's because the max read rate is higher
than the write rate, when you don't need to dirty pages. Or do you
think saturating the I/O system with writes...
On 15.01.2012 10:24, Greg Smith wrote:
That got me thinking: if MB/s is what everyone wants to monitor, can we
provide a UI to set these parameters that way too? The attached patch is
a bit rough still, but it does that. The key was recognizing that the
cost delay plus cost limit can be converted
So far the reaction I've gotten from my recent submission to make
autovacuum log its read/write in MB/s has been rather positive. I've
been surprised at the unprecedented (to me at least) amount of
backporting onto big production systems it's gotten. There is a whole
lot of pent up frustration...