Tom,

On 3/8/06 7:21 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
>> 1. Earlier we had some results that showed that the heapsorts got
>> slower when work_mem was higher and that concerns me most of all
>> right now.
>
> Fair enough, but that's completely independent of the merge algorithm.
> (I don't think the Nyberg results necessarily apply to our situation
> anyway, as we are not sorting arrays of integers, and hence the cache
> effects are far weaker for us.  I don't mind trying alternate sort

Even with the indirection, we should investigate alternative approaches
that others have demonstrated to be superior WRT L2 cache use.  A major
commercial database currently performs external sorts of various fields
4 times faster, and in one example case commonly uses more than 256MB of
sort memory to do it.

> I think this would be extremely dangerous, as it would encourage
> processes to take more than their fair share of available resources.

I agree - in fact, we currently have no structured concept of a "fair
share of available resources", nor a way to share one.  I think the
answer to this should involve statement queuing and resource queues.

> Also, to the extent that you believe the problem is insufficient L2
> cache, it seems increasing work_mem to many times the size of L2 will
> always be counterproductive.  (Certainly there is no value in
> increasing work_mem until we are in a regime where it consistently
> improves performance significantly, which it seems we aren't yet.)

Not if you cache block - that is, operate on one L2-cache-sized block of
memory at a time (rough sketch below).

- Luke
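P.S. To make "cache blocking" concrete, here is a rough, self-contained
sketch of the idea: sort runs sized to the L2 cache, then merge the runs.
Everything in it is an illustrative assumption (a 256KB L2, plain ints,
made-up names) - it is not the tuplesort code path, only the shape of the
optimization:

/*
 * Rough sketch of cache blocking applied to an in-memory sort: sort runs
 * sized to an assumed 256KB L2 cache with qsort, then merge the runs.
 * The element type, the L2 size, and all names are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define L2_BYTES    (256 * 1024)                /* assumed L2 cache size */
#define BLOCK_ELEMS (L2_BYTES / sizeof(int))    /* elements per cache-sized run */

static int
cmp_int(const void *a, const void *b)
{
    int     ia = *(const int *) a;
    int     ib = *(const int *) b;

    return (ia > ib) - (ia < ib);
}

/* Length of run r when n elements are split into BLOCK_ELEMS-sized runs. */
static size_t
run_len(size_t r, size_t n)
{
    size_t  start = r * BLOCK_ELEMS;

    return (start + BLOCK_ELEMS <= n) ? BLOCK_ELEMS : n - start;
}

/* Sort n ints by sorting L2-sized blocks, then merging the sorted runs. */
static void
cache_blocked_sort(int *data, size_t n)
{
    size_t  nruns = (n + BLOCK_ELEMS - 1) / BLOCK_ELEMS;
    size_t *pos = calloc(nruns, sizeof(size_t));    /* read cursor per run */
    int    *out = malloc(n * sizeof(int));
    size_t  i, r;

    /* Phase 1: each run fits in L2, so qsort stays mostly in-cache. */
    for (r = 0; r < nruns; r++)
        qsort(data + r * BLOCK_ELEMS, run_len(r, n), sizeof(int), cmp_int);

    /* Phase 2: naive k-way merge of the sorted runs (k is small). */
    for (i = 0; i < n; i++)
    {
        size_t  best = nruns;

        for (r = 0; r < nruns; r++)
        {
            if (pos[r] < run_len(r, n) &&
                (best == nruns ||
                 data[r * BLOCK_ELEMS + pos[r]] <
                 data[best * BLOCK_ELEMS + pos[best]]))
                best = r;
        }
        out[i] = data[best * BLOCK_ELEMS + pos[best]];
        pos[best]++;
    }

    memcpy(data, out, n * sizeof(int));
    free(out);
    free(pos);
}

int
main(void)
{
    size_t  n = 1000000;
    int    *data = malloc(n * sizeof(int));
    size_t  i;

    for (i = 0; i < n; i++)
        data[i] = rand();

    cache_blocked_sort(data, n);

    for (i = 1; i < n; i++)
    {
        if (data[i - 1] > data[i])
        {
            printf("not sorted!\n");
            return 1;
        }
    }
    printf("sorted %zu ints in %zu cache-sized runs\n",
           n, (n + BLOCK_ELEMS - 1) / BLOCK_ELEMS);
    free(data);
    return 0;
}

The point is only that phase 1 never touches more memory than fits in L2
at once, whereas a heapsort over all of work_mem strides across the whole
allocation on every sift-down.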
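And, in the same spirit, a toy illustration of what I mean by statement
queuing: cap the number of memory-hungry statements that run at once and
make the rest wait their turn.  The semaphore, the limit of 4, and all
names are assumptions for illustration only - nothing like this exists in
the backend today:

/*
 * Toy sketch of statement queuing: a counting semaphore caps how many
 * memory-hungry statements run concurrently; the rest queue.  POSIX
 * threads stand in for backends purely for illustration.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_CONCURRENT_SORTS 4      /* assumed resource-queue limit */

static sem_t sort_slots;

static void *
run_statement(void *arg)
{
    int     id = *(int *) arg;

    sem_wait(&sort_slots);          /* queue here until a slot frees up */
    printf("statement %d: sorting with its share of memory\n", id);
    sleep(1);                       /* stand-in for the actual sort work */
    sem_post(&sort_slots);          /* release the slot for the next one */
    return NULL;
}

int
main(void)
{
    pthread_t   threads[10];
    int         ids[10];
    int         i;

    sem_init(&sort_slots, 0, MAX_CONCURRENT_SORTS);

    for (i = 0; i < 10; i++)
    {
        ids[i] = i;
        pthread_create(&threads[i], NULL, run_statement, &ids[i]);
    }
    for (i = 0; i < 10; i++)
        pthread_join(threads[i], NULL);

    sem_destroy(&sort_slots);
    return 0;
}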