Wes <[EMAIL PROTECTED]> writes:
> Ok, now I follow. Taking the biggest indexes:
> The weekend before:
> INFO: index "message_recipients_i_recip_date" now contains 393961361 row versions in 2435100 pages
> INFO: index "message_recipients_i_message" now contains 393934394 row versions in 1499
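(For scale, and assuming the standard 8 kB block size, the first of those works out to 393961361 / 2435100 ≈ 162 row versions per page, or roughly 50 bytes per index entry; a much lower rows-per-page density before a REINDEX than after it is the classic signature of index bloat.)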
On 4/5/05 11:15 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> I didn't say it wasn't consistent, just that it doesn't prove the
> point. The speedup you saw could have been from elimination of index
> bloat more than from bringing the index into physically sorted order.
> An estimate of the overall
Wes <[EMAIL PROTECTED]> writes:
> On 4/4/05 8:50 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> That doesn't follow from what you said. Did you check that the physical
>> sizes of the indexes were comparable before and after the reindex?
> No, how do I do that (or where is it documented how to do it)?
On 4/4/05 8:50 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> That doesn't follow from what you said. Did you check that the physical
> sizes of the indexes were comparable before and after the reindex?
No, how do I do that (or where is it documented how to do it)?
How is it not consistent? I bel
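In 8.0-era PostgreSQL the simplest check is the system catalogs: relpages is the on-disk size in blocks and is refreshed by VACUUM and ANALYZE. A minimal sketch (the index name is just the one quoted earlier in the thread; multiply by the standard 8 kB block size for bytes):

    SELECT relname,
           relpages,                                -- size in 8 kB blocks, as of the last VACUUM/ANALYZE
           relpages::bigint * 8192 AS approx_bytes, -- approximate on-disk size
           reltuples                                -- estimated number of index entries
      FROM pg_class
     WHERE relname = 'message_recipients_i_recip_date';

Recording these numbers before and after a REINDEX shows how much of any speedup came simply from the index shrinking.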
Wes <[EMAIL PROTECTED]> writes:
> On 3/2/05 10:50 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> It wouldn't be easy --- there are some locking considerations that say
>> btbulkdelete needs to scan the index in the same order that an ordinary
>> scan would do. See the nbtree README for details.
> Just a follow-up on this..
On 3/2/05 10:50 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> It wouldn't be easy --- there are some locking considerations that say
> btbulkdelete needs to scan the index in the same order that an ordinary
> scan would do. See the nbtree README for details.
Just a follow-up on this..
The vacuum
Well, the good news is that the 2.4.29 kernel solved the kswapd problem.
The bad news is that it didn't help the vacuum time. In fact, the vacuum
time is now over 6 hours instead of 5 hours. Whether that is a direct
result of the 2.4.29 kernel, or a coincidence, I don't know at this time.
I g
On 3/2/05 3:51 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> I was going to suggest
> REINDEXing those indexes to see if that cuts the vacuum time at all.
The problem with that is it takes a very long time. I've got a couple of
things to try yet on the kswapd problem. If that doesn't work, maybe
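For reference, the suggestion amounts to something like the sketch below, using the index names from the VACUUM VERBOSE output quoted earlier; in this era of PostgreSQL, REINDEX blocks writes to the parent table for the duration, which is what makes it painful at this size:

    REINDEX INDEX message_recipients_i_recip_date;
    REINDEX INDEX message_recipients_i_message;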
Wes Palmer <[EMAIL PROTECTED]> writes:
> Any chance of change that
> behavior to scan in physical storage order?
It wouldn't be easy --- there are some locking considerations that say
btbulkdelete needs to scan the index in the same order that an ordinary
scan would do. See the nbtree README for details.
Wes <[EMAIL PROTECTED]> writes:
> Watching the system as vacuum is running, I can see that we are encountering
> the kswapd/kscand problem in the 2.4.20 kernel. This could very well
> account for the non-linear increase in vacuum time.
Hmm. Looking at the vacuum verbose output you sent me, it's
Watching the system as vacuum is running, I can see that we are encountering
the kswapd/kscand problem in the 2.4.20 kernel. This could very well
account for the non-linear increase in vacuum time.
This problem is fixed in the 2.6 kernel, but we can't upgrade because DELL
is dragging their feet i
On 3/2/05 12:16 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Would you post the complete VACUUM VERBOSE log? The CPU/elapsed time lines
> would help us identify where the time is going.
Mailed.
I do see stats like:
CPU 518.88s/25.17u sec elapsed 10825.33 sec.
CPU 884.96s/64.35u sec elapsed 1379
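Those lines localize the problem by themselves: 518.88 s system + 25.17 s user CPU out of 10825.33 s elapsed is (518.88 + 25.17) / 10825.33 ≈ 5% CPU, so that vacuum pass spent roughly 95% of its time waiting on I/O rather than computing.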
Wes <[EMAIL PROTECTED]> writes:
> It took 5.2 hours again tonight to do the vacuum. I don't see anything out
> of the ordinary - no explanation for the non-linear increases in vacuum
> time.
Would you post the complete VACUUM VERBOSE log? The CPU/elapsed time lines
would help us identify where the time is going.
On 3/2/05 12:16 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Would you post the complete VACUUM VERBOSE log? The CPU/elapsed time lines
> would help us identify where the time is going.
I'll send it to you directly - it's rather long.
>> DETAIL: Allocated FSM size: 1000 relations + 100 pages
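That DETAIL line can be compared directly against the server's configuration (a minimal sketch; max_fsm_relations and max_fsm_pages are the 8.0-era GUC names, removed in 8.4):

    SHOW max_fsm_relations;   -- how many relations the free space map can track
    SHOW max_fsm_pages;       -- how many pages with free space it can remember

If VACUUM finds more reusable space than the FSM can track, the excess is forgotten, new rows extend the files instead, and the relations bloat between vacuums.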
On 2/28/05 6:53 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Again, VACUUM VERBOSE info would be informative (it's sufficient to look
> at your larger tables for this).
It took 5.2 hours again tonight to do the vacuum. I don't see anything out
of the ordinary - no explanation for the non-linear increases in vacuum time.
On 2/28/05 6:53 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> It's hard to see how the vacuum time wouldn't
> be linear in table size if there's nothing to do and no dead space.
I am doing 'vacuum analyze' rather than just 'vacuum'. Could that have
anything to do with the non-linear behavior?
Wes
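The difference between the two is only the statistics pass, as in this sketch ('messages' is a hypothetical table name; note the 8.0-era keyword order):

    VACUUM VERBOSE messages;            -- reclaim pass only
    VACUUM VERBOSE ANALYZE messages;    -- reclaim pass plus planner-statistics sampling

ANALYZE samples a bounded number of rows per table regardless of table size, so it is an unlikely source of non-linear growth in the total run time.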
On 2/28/05 6:53 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> If you are suffering bloat, the fastest route to a solution would
> probably be to CLUSTER your larger tables. Although VACUUM FULL
> would work, it's likely to be very slow.
How can there be bloat if there are no deletes or modifies?
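For reference, 8.0-era CLUSTER syntax names the index to sort by (a minimal sketch; 'messages' and 'messages_pkey' are hypothetical names). CLUSTER rewrites the whole table in index order and rebuilds its indexes, discarding all dead space in one pass, at the cost of an exclusive lock and roughly double the disk space while it runs:

    CLUSTER messages_pkey ON messages;   -- rewrite the table in messages_pkey order
    ANALYZE messages;                    -- refresh planner statistics afterwards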
Wes <[EMAIL PROTECTED]> writes:
> Why is the vacuum time not going up linearly?
I'm betting that the database is suffering from substantial bloat,
requiring VACUUM to scan through lots of dead space that wasn't there
before. Check your FSM settings (the tail end of the output from a
full-database VACUUM VERBOSE).
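A database-wide run is enough to see that tail; the FSM summary, including the "Allocated FSM size" DETAIL quoted earlier, is printed once at the end:

    VACUUM VERBOSE;   -- whole database; the final lines report FSM usage vs. allocation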