>> That's what I want to believe. But picture if you have, say, a
>> 1-terabyte table which is 50% dead tuples and you don't have a spare
>> terabyte to rewrite the whole table.
>
>But trying to VACUUM FULL that table is going to be horridly painful
>too, and you'll still have bloated indexes afterwards.
On Thu, Sep 03, 2009 at 07:57:25PM -0400, Andrew Dunstan wrote:
> daveg wrote:
> >On Tue, Sep 01, 2009 at 07:42:56PM -0400, Tom Lane wrote:
> >>I'm having a hard time believing that VACUUM FULL really has any
> >>interesting use-case anymore.
> >
> >I have a client who uses temp tables heavily, hun
daveg wrote:
On Tue, Sep 01, 2009 at 07:42:56PM -0400, Tom Lane wrote:
Greg Stark writes:
On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
The use cases where VACUUM FULL wins currently are where storing two
copies of the table and its indexes concurrently just isn't practical.
On Tue, Sep 01, 2009 at 07:42:56PM -0400, Tom Lane wrote:
> Greg Stark writes:
> > On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
> >>> The use cases where VACUUM FULL wins currently are where storing two
> >>> copies of the table and its indexes concurrently just isn't practical.
> >>
On Wed, Sep 2, 2009 at 8:45 PM, Tom Lane wrote:
> Greg Stark writes:
>> The backwards scan is awful for rotating media. The reading from the
>> end and writing to the beginning is bad too, though hopefully the
>> cache can help that.
>
> Yeah. And all that pales in comparison to what happens in the indexes.
On Wed, Sep 2, 2009 at 11:55 PM, Ron Mayer wrote:
> Yet when I try it now, I'm having trouble making it work.
> Would you expect the ctid to be going down in the psql session
> shown below? I wonder why it isn't.
Even before HOT we preferentially tried to put updated tuples on the
same page they were already on.
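That same-page preference is easy to see in a psql session. A minimal
sketch with a made-up table (the names are illustrative, not from the
thread):

    -- Hypothetical demo table.
    CREATE TABLE t (id int PRIMARY KEY, payload text);
    INSERT INTO t VALUES (1, 'x');

    SELECT ctid FROM t WHERE id = 1;    -- e.g. (0,1): page 0, item 1
    UPDATE t SET payload = 'y' WHERE id = 1;
    SELECT ctid FROM t WHERE id = 1;    -- typically (0,2): same page, new slot

The ctid only moves to a different page once the current page has no
room for the new tuple version, which would explain why the ctid in
Ron's session wasn't going down.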
Robert Haas wrote:
> On Tue, Sep 1, 2009 at 9:29 PM, Alvaro Herrera wrote:
>> Ron Mayer wrote:
>>> Greg Stark wrote:
That's what I want to believe. But picture if you have, say, a
1-terabyte table which is 50% dead tuples and you don't have a spare
terabyte to rewrite the whole table.
Tom Lane escribió:
> Alvaro Herrera writes:
> > Tom Lane escribió:
> >> I don't find a lot wrong with that. The code defines its purpose as
> >> being to shorten the table file length. Once it hits a page that
> >> can't be emptied, it cannot shorten the file any further, so why
> >> shouldn't it stop?
Alvaro Herrera writes:
> Tom Lane escribió:
>> I don't find a lot wrong with that. The code defines its purpose as
>> being to shorten the table file length. Once it hits a page that
>> can't be emptied, it cannot shorten the file any further, so why
>> shouldn't it stop?
> All that work, and i
Tom Lane escribió:
> Alvaro Herrera writes:
> > Another weird consequence of this is that it bails out if it finds a
> > tuple larger than it can fit in one of the earlier pages; if there's
> > dead space to be compacted before that, it's not compacted.
>
> I don't find a lot wrong with that. The code defines its purpose as
> being to shorten the table file length.
Alvaro Herrera writes:
> Another weird consequence of this is that it bails out if it finds a
> tuple larger than it can fit in one of the earlier pages; if there's
> dead space to be compacted before that, it's not compacted.
I don't find a lot wrong with that. The code defines its purpose as
being to shorten the table file length.
On Wed, Sep 2, 2009 at 3:30 PM, Greg Stark wrote:
> On Wed, Sep 2, 2009 at 8:10 PM, Robert Haas wrote:
>> I confess to being a little fuzzy on the details of how this
>> implementation (seq-scanning the source table for live tuples) is
>> different/better from the current VACUUM FULL implementation. Can
>> someone fill me in?
Tom Lane escribió:
> Greg Stark writes:
> > It scans pages *backwards* from the end (which does wonderful things
> > on rotating media). Marks each live tuple it finds as "moved off",
> > finds a new place for it (using the free space map I think?).
>
> BTW, VACUUM FULL doesn't use the free space map.
Greg Stark writes:
> It scans pages *backwards* from the end (which does wonderful things
> on rotating media). Marks each live tuple it finds as "moved off",
> finds a new place for it (using the free space map I think?).
BTW, VACUUM FULL doesn't use the free space map --- that code predates
the
Greg Stark writes:
> The backwards scan is awful for rotating media. The reading from the
> end and writing to the beginning is bad too, though hopefully the
> cache can help that.
Yeah. And all that pales in comparison to what happens in the indexes.
You have to insert index entries (retail) for each tuple you move.
On Wed, Sep 2, 2009 at 8:10 PM, Robert Haas wrote:
> I confess to being a little fuzzy on the details of how this
> implementation (seq-scanning the source table for live tuples) is
> different/better from the current VACUUM FULL implementation. Can
> someone fill me in?
VACUUM FULL is a *lot* more complicated than that.
On Wed, Sep 2, 2009 at 2:54 PM, Tom Lane wrote:
> Robert Haas writes:
>> So I have a script that goes and finds bloated tables and runs VACUUM
>> FULL on them in the middle of the night if the bloat passes a certain
>> threshold. The tables are small enough and the number of users is low
>> enough that this doesn't cause any problems for me.
On Wed, Sep 2, 2009 at 2:31 PM, Tom Lane wrote:
> Greg Stark writes:
>> On Wed, Sep 2, 2009 at 6:41 PM, Josh Berkus wrote:
Perhaps we should go one version with an enable_legacy_full_vacuum
which defaults to off. That would at least let us hear about use cases
where people are unhappy with a replacement.
On Wed, Sep 2, 2009 at 6:57 PM, Kevin Grittner wrote:
> Greg Stark wrote:
>
>> I don't think we want to cluster on the primary key. I think we just
>> want to rewrite the table keeping the same physical ordering.
>
> Well if that's what you want to do, couldn't you do something like this?:
>
> Lock the table.
Robert Haas writes:
> So I have a script that goes and finds bloated tables and runs VACUUM
> FULL on them in the middle of the night if the bloat passes a certain
> threshold. The tables are small enough and the number of users is low
> enough that this doesn't cause any problems for me. I'm OK
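A minimal sketch of that kind of nightly job, using the dead-tuple
counters in pg_stat_user_tables as a crude bloat signal (the thresholds
and the heuristic itself are assumptions, not Robert's actual script):

    -- Candidate tables where dead tuples outnumber live ones.
    SELECT schemaname, relname, n_dead_tup, n_live_tup
    FROM pg_stat_user_tables
    WHERE n_dead_tup > 10000          -- assumed absolute floor
      AND n_dead_tup > n_live_tup;    -- assumed ~50% bloat threshold

    -- Then, for each table the query returns, e.g.:
    VACUUM FULL some_schema.some_table;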
On Wed, Sep 2, 2009 at 1:52 PM, Greg Stark wrote:
> We could deal with the admin scripts by making VACUUM FULL do the new
> behaviour. But I actually don't really like that. I wold prefer to
> break VACUUM FULL since anyone doing it routinely is probably
> mistaken.
So I have a script that goes and finds bloated tables and runs VACUUM
FULL on them in the middle of the night if the bloat passes a certain
threshold.
Greg Stark writes:
> On Wed, Sep 2, 2009 at 6:41 PM, Josh Berkus wrote:
>>> Perhaps we should go one version with an enable_legacy_full_vacuum
>>> which defaults to off. That would at least let us hear about use cases
>>> where people are unhappy with a replacement.
>>
>> I think we do need to do
On Wed, 2009-09-02 at 11:01 -0700, Josh Berkus wrote:
> Greg,
>
> > I don't think we want to cluster on the primary key. I think we just
> > want to rewrite the table keeping the same physical ordering.
>
> Agreed.
Are we sure about that? I would argue that the majority of users out
there (think
Greg,
> I don't think we want to cluster on the primary key. I think we just
> want to rewrite the table keeping the same physical ordering.
Agreed.
> Well I've certainly seen people whose disks are more than 50% full.
> They tend to be the same people who want to compact their tables. I
> can't
Greg Stark wrote:
> I don't think we want to cluster on the primary key. I think we just
> want to rewrite the table keeping the same physical ordering.
Well if that's what you want to do, couldn't you do something like this?:
Lock the table.
Drop all indexes.
Pass the heap with two pointers, one
On Wed, Sep 2, 2009 at 6:41 PM, Josh Berkus wrote:
> All,
>
>
>> I'm having a hard time believing that VACUUM FULL really has any
>> interesting use-case anymore.
>
> Basically, for:
> a) people who don't understand CLUSTER (easily fixed, simply create a
> VACUUM FULL command which just does CLUSTER on the primary key)
On Wed, 2009-09-02 at 10:41 -0700, Josh Berkus wrote:
> All,
>
>
> > I'm having a hard time believing that VACUUM FULL really has any
> > interesting use-case anymore.
>
> Basically, for:
> a) people who don't understand CLUSTER (easily fixed, simply create a
> VACUUM FULL command which just does CLUSTER on the primary key)
All,
> I'm having a hard time believing that VACUUM FULL really has any
> interesting use-case anymore.
Basically, for:
a) people who don't understand CLUSTER (easily fixed, simply create a
VACUUM FULL command which just does CLUSTER on the primary key)
b) people who are completely out of space
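Option (a) really is a one-liner today. A sketch, assuming a table
bigtable with a primary-key index named bigtable_pkey:

    -- Rewrites the table in primary-key order into a fresh file,
    -- compacting it and rebuilding all indexes; needs transient
    -- disk space for the second copy.
    CLUSTER bigtable USING bigtable_pkey;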
On Wed, Sep 2, 2009 at 6:30 AM, Jaime Casanova wrote:
> On Tue, Sep 1, 2009 at 9:55 PM, Robert Haas wrote:
>>
>> I'm a bit skeptical about partitioning as a solution, too. The
>> planner is just not clever enough with partitioned tables, yet.
Yeah, we need to fix that :)
I think we're already re
On Tue, Sep 1, 2009 at 9:55 PM, Robert Haas wrote:
>
> I'm a bit skeptical about partitioning as a solution, too. The
> planner is just not clever enough with partitioned tables, yet.
>
analyzing and vacuuming a *very* big table, and even scanning a huge
index, is no joke either...
and yes, the planner i
On Tue, Sep 1, 2009 at 19:34, Greg Stark wrote:
> On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
> >> The use cases where VACUUM FULL wins currently are where storing two
> >> copies of the table and its indexes concurrently just isn't practical.
> >
> > Yeah, but then do you really need to use VACUUM FULL?
On Tue, Sep 1, 2009 at 10:58 PM, Alvaro Herrera wrote:
> Robert Haas escribió:
>> On Tue, Sep 1, 2009 at 7:42 PM, Tom Lane wrote:
>
>> > But trying to VACUUM FULL that table is going to be horridly painful
>> > too, and you'll still have bloated indexes afterwards. You might as
>> > well just live with the 50% waste
Robert Haas escribió:
> On Tue, Sep 1, 2009 at 7:42 PM, Tom Lane wrote:
> > But trying to VACUUM FULL that table is going to be horridly painful
> > too, and you'll still have bloated indexes afterwards. You might as
> > well just live with the 50% waste, especially since if you did a
> > full-table rewrite
On Tue, Sep 1, 2009 at 9:29 PM, Alvaro Herrera wrote:
> Ron Mayer wrote:
>> Greg Stark wrote:
>> >
>> > That's what I want to believe. But picture if you have, say, a
>> > 1-terabyte table which is 50% dead tuples and you don't have a spare
>> > terabyte to rewrite the whole table.
>>
>> Could one hypothetically do
On Tue, Sep 1, 2009 at 7:42 PM, Tom Lane wrote:
> Greg Stark writes:
>> On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
The use cases where VACUUM FULL wins currently are where storing two
copies of the table and its indexes concurrently just isn't practical.
>>>
>>> Yeah, but then do you really need to use VACUUM FULL?
Ron Mayer wrote:
> Greg Stark wrote:
> >
> > That's what I want to believe. But picture if you have, say, a
> > 1-terabyte table which is 50% dead tuples and you don't have a spare
> > terabyte to rewrite the whole table.
>
> Could one hypothetically do
>update bigtable set pk = pk where ctid in (select ctid from bigtable
Greg Stark wrote:
>
> That's what I want to believe. But picture if you have, say, a
> 1-terabyte table which is 50% dead tuples and you don't have a spare
> terabyte to rewrite the whole table.
Could one hypothetically do
update bigtable set pk = pk where ctid in (select ctid from bigtable
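The query cuts off mid-statement. A plausible completion of the idea,
rewriting the rows nearest the end of the heap so a plain VACUUM can
then truncate the tail (the ORDER BY and LIMIT are assumptions, not
from the original mail):

    -- Rewrite the last N row versions; the new versions should land
    -- in free space on earlier pages.
    UPDATE bigtable SET pk = pk
    WHERE ctid IN (SELECT ctid FROM bigtable
                   ORDER BY ctid DESC LIMIT 10000);

    -- A plain VACUUM can now truncate the emptied pages at the end.
    VACUUM bigtable;

As discussed elsewhere in the thread, the same-page update preference
(and later HOT) can defeat this: the new versions tend to stay on the
page they came from unless that page is full.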
Greg Stark writes:
> On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
>>> The use cases where VACUUM FULL wins currently are where storing two
>>> copies of the table and its indexes concurrently just isn't practical.
>>
>> Yeah, but then do you really need to use VACUUM FULL? If that's really
>> a problem then there ain't that many dead tuples around.
On Wed, Sep 2, 2009 at 12:01 AM, Alvaro Herrera wrote:
>> The use cases where VACUUM FULL wins currently are where storing two
>> copies of the table and its indexes concurrently just isn't practical.
>
> Yeah, but then do you really need to use VACUUM FULL? If that's really
> a problem then there ain't that many dead tuples around.
Greg Stark wrote:
> The use cases where VACUUM FULL wins currently are where storing two
> copies of the table and its indexes concurrently just isn't practical.
Yeah, but then do you really need to use VACUUM FULL? If that's really
a problem then there ain't that many dead tuples around.
> Als
On Tue, Sep 1, 2009 at 2:58 PM, Tom Lane wrote:
> We get beat up on a regular basis about "spikes" in response time;
> why would you want to have vacuum creating one when it doesn't need to?
Isn't this sync commit just going to do the same thing that the wal
writer is going to do in at most 200ms?
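For reference, the two settings in play; a sketch, where treating the
WAL writer's flush horizon as roughly wal_writer_delay is an
approximation on my part, not something the thread states:

    SHOW wal_writer_delay;           -- 200ms by default
    SET synchronous_commit = off;    -- commit returns before the WAL flush;
                                     -- the WAL writer flushes shortly after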
Simon Riggs writes:
> On Tue, 2009-09-01 at 09:58 -0400, Tom Lane wrote:
>> We get beat up on a regular basis about "spikes" in response time;
>> why would you want to have vacuum creating one when it doesn't need
>> to?
> If one I/O on a background utility can cause such a spike, we are in
> serious shitake.
On Tue, 2009-09-01 at 09:58 -0400, Tom Lane wrote:
> We get beat up on a regular basis about "spikes" in response time;
> why would you want to have vacuum creating one when it doesn't need
> to?
If one I/O on a background utility can cause such a spike, we are in
serious shitake. I would be mor
Simon Riggs writes:
> VACUUM does so many things that I'd rather have it all safely on disk.
> I'd feel happier with the rule "VACUUM always sync commits", so we all
> remember it and can rely upon it to be the same from release to release.
Non-FULL vacuum has *never* done a sync commit, except i
On Mon, 2009-08-31 at 18:53 -0400, Alvaro Herrera wrote:
> Regarding sync commits that previously happened and now won't, I think the
> only case worth worrying about is the one in vacuum.c. Do we need a
> ForceSyncCommit() in there? I'm not sure if vacuum itself already
> forces sync commit.
VACUUM does so many things that I'd rather have it all safely on disk.
Alvaro Herrera writes:
> Tom Lane wrote:
>> Hmm, I had been assuming we wouldn't need that anymore.
> The comment in user.c and dbcommands.c says [...]
> so I think those ones are still necessary.
Yeah, after a look through the code I think you can trust the associated
comments: if it says it ne
Tom Lane wrote:
> Alvaro Herrera writes:
> > Regarding sync commits that previously happened and now won't, I think the
> > only case worth worrying about is the one in vacuum.c. Do we need a
> > ForceSyncCommit() in there? I'm not sure if vacuum itself already
> > forces sync commit.
>
> Hmm, I had been assuming we wouldn't need that anymore.
Alvaro Herrera writes:
> This patch removes flatfiles.c for good.
Aw, you beat me to it.
> Regarding sync commits that previously happened and now won't, I think the
> only case worth worrying about is the one in vacuum.c. Do we need a
> ForceSyncCommit() in there? I'm not sure if vacuum itself already
> forces sync commit.
This patch removes flatfiles.c for good.
It doesn't change the keeping of locks in dbcommands.c and user.c,
because at least some of them are still required.
Regarding sync commits that previously happened and now won't, I think
the only case worth worrying about is the one in vacuum.c. Do we need a
ForceSyncCommit() in there? I'm not sure if vacuum itself already
forces sync commit.