On Wed, Apr 13, 2011 at 12:24:06PM -0400, Tom Lane wrote:
> > Interestingly, the original index tickets5 is still used for
> > int4eq(main.effectiveid, main.id), so there is no need to build a different one.
>
> Well, no, it won't be. This hack is entirely dependent on the fact that
> the optimizer mostly works wit
> On Thu, Apr 14, 2011 at 1:26 AM, Tomas Vondra wrote:
>> Workload A: Touches just a very small portion of the database, so the
>> 'active' part actually fits into memory. In this case the cache hit
>> ratio can easily be close to 99%.
>>
>> Workload B: Touches a large portion of the database, s
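A rough way to see why these two workloads call for different cost settings is to blend an assumed in-cache page cost with random_page_cost according to the hit ratio. The sketch below is back-of-the-envelope arithmetic, not PostgreSQL code; random_page_cost = 4.0 is the PostgreSQL default, while the 0.05 cached-page cost and the 50% figure for the second workload are assumptions for illustration only.

/*
 * Back-of-the-envelope sketch (not PostgreSQL code): blend an assumed
 * in-cache page cost with random_page_cost according to the cache hit
 * ratio.  random_page_cost = 4.0 is the PostgreSQL default; the 0.05
 * cached-page cost and the 0.50 hit ratio are assumptions.
 */
#include <stdio.h>

int main(void)
{
    const double random_page_cost = 4.0;    /* PostgreSQL default      */
    const double cached_page_cost = 0.05;   /* assumed buffer-hit cost */
    const double hit_ratio[] = { 0.99, 0.50 };  /* workload A vs. a cache-missing workload */

    for (int i = 0; i < 2; i++)
    {
        double h = hit_ratio[i];
        double effective = h * cached_page_cost
                         + (1.0 - h) * random_page_cost;
        printf("hit ratio %.2f -> effective random page cost %.3f\n",
               h, effective);
    }
    return 0;
}

At a 99% hit ratio the effective cost comes out under 0.1, while at the assumed 50% it is already above 2, which is the kind of gap the two workloads illustrate.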
* Jesper Krogh:
> If you have a 1-socket system, all of your data can be fetched from
> "local" RAM as seen from your CPU; on a 2-socket system, 50% of your
> accesses will be "way slower", and 4 sockets are even worse.
There are non-NUMA multi-socket systems, so this doesn't apply in all
cases. (The E5320-based sy
2011/4/14 Florian Weimer:
> * Jesper Krogh:
>
>> If you have a 1-socket system, all of your data can be fetched from
>> "local" RAM as seen from your CPU; on a 2-socket system, 50% of your
>> accesses will be "way slower", and 4 sockets are even worse.
>
> There are non-NUMA multi-socket systems, so this doesn't apply in all cases.
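The NUMA claim above is essentially interleaving arithmetic: with pages spread evenly over N sockets and no locality-aware placement, roughly (N-1)/N of accesses land on a remote node. The latency figures in the sketch below are assumptions for illustration, not measurements, and as noted the model does not apply to non-NUMA multi-socket systems.

/*
 * Illustration only: expected memory latency when pages are spread
 * evenly across N sockets and accesses are uniform, so (N-1)/N of them
 * hit a remote node.  The latency figures are assumptions, not
 * measurements, and the model does not apply to non-NUMA multi-socket
 * systems (e.g. older FSB-based Xeons).
 */
#include <stdio.h>

int main(void)
{
    const double local_ns  = 80.0;    /* assumed local-node latency  */
    const double remote_ns = 130.0;   /* assumed remote-node latency */

    for (int sockets = 1; sockets <= 4; sockets *= 2)
    {
        double remote_frac = (double) (sockets - 1) / sockets;
        double expected = (1.0 - remote_frac) * local_ns
                        + remote_frac * remote_ns;
        printf("%d socket(s): %.0f%% remote, expected %.0f ns\n",
               sockets, remote_frac * 100.0, expected);
    }
    return 0;
}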
2011/4/14 Tom Lane:
> Nathan Boley writes:
>> FWIW, a while ago I wrote a simple script to measure this and found
>> that the *actual* random_page / seq_page cost ratio was much higher
>> than 4/1.
>
> That 4:1 ratio is based on some rather extensive experimentation that
> I did back in 2000. In
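The sketch below shows the kind of measurement being described; it is not Nathan's script. It reads an existing file once sequentially and once at random 8 kB offsets and prints the timing ratio. Unless the OS page cache is dropped between passes, the result reflects caching as much as the disk itself, which is part of why measured ratios can differ so much from the default.

/*
 * A sketch of this kind of measurement (not Nathan's actual script):
 * read an existing file once sequentially and once at random 8 kB
 * offsets, then print the timing ratio.  Unless the OS page cache is
 * dropped between passes, the result reflects caching as much as the
 * disk itself.
 */
#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLKSZ 8192                      /* PostgreSQL page size */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <datafile>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    long nblocks = lseek(fd, 0, SEEK_END) / BLKSZ;
    if (nblocks < 1) { fprintf(stderr, "file too small\n"); return 1; }

    char buf[BLKSZ];

    /* sequential pass */
    double t0 = now_sec();
    for (long i = 0; i < nblocks; i++)
        if (pread(fd, buf, BLKSZ, (off_t) i * BLKSZ) != BLKSZ)
            break;
    double seq = now_sec() - t0;

    /* random pass: same number of reads, random block numbers */
    srandom(42);
    t0 = now_sec();
    for (long i = 0; i < nblocks; i++)
    {
        off_t blk = random() % nblocks;
        if (pread(fd, buf, BLKSZ, blk * BLKSZ) != BLKSZ)
            break;
    }
    double rnd = now_sec() - t0;

    printf("sequential %.2fs, random %.2fs, ratio %.1f\n",
           seq, rnd, rnd / seq);
    close(fd);
    return 0;
}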
Václav Ovsík writes:
> I'm not certain about your sentence concerning int4eq() and the index. The
> execution plan shown in my previous mail contains information about
> using the index tickets5:
>   ->  Index Scan using tickets5 on tickets main  (cost=0.00..4.38 rows
On 4/13/11 9:23 PM, "Greg Smith" wrote:
>Scott Carey wrote:
>> If postgres is memory bandwidth constrained, what can be done to reduce
>> its bandwidth use?
>>
>> Huge Pages could help some, by reducing page table lookups and making
>> overall access more efficient.
>> Compressed pages (speedy
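For reference, below is a minimal Linux sketch of an explicit huge-page mapping via mmap(MAP_HUGETLB), the mechanism behind the "Huge Pages" idea above: fewer page-table entries and TLB misses for a large memory area. It assumes huge pages have already been reserved (e.g. sysctl vm.nr_hugepages=64), and it is not how PostgreSQL of this era allocates its shared memory.

/*
 * Minimal Linux sketch of an explicit huge-page allocation with
 * mmap(MAP_HUGETLB).  Assumes huge pages have been reserved
 * beforehand, e.g. "sysctl vm.nr_hugepages=64"; shown only to
 * illustrate the mechanism.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define ALLOC_SIZE (64UL * 1024 * 1024)   /* 64 MB, a multiple of 2 MB */

int main(void)
{
    void *p = mmap(NULL, ALLOC_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED)
    {
        perror("mmap(MAP_HUGETLB)");   /* likely no huge pages reserved */
        return 1;
    }

    memset(p, 0, ALLOC_SIZE);          /* touch the pages */
    printf("mapped %lu MB backed by huge pages at %p\n",
           ALLOC_SIZE / (1024 * 1024), p);
    munmap(p, ALLOC_SIZE);
    return 0;
}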
On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey wrote:
> Huge Pages helps caches.
> Dual-Pivot quicksort is more cache-friendly and is _always_ equal to or
> faster than traditional quicksort (it's a provably improved algorithm).
If you want a cache-friendly sorting algorithm, you need mergesort.
I
On 4/14/11 1:19 PM, "Claudio Freire" wrote:
>On Thu, Apr 14, 2011 at 10:05 PM, Scott Carey wrote:
>> Huge Pages helps caches.
>> Dual-Pivot quicksort is more cache-friendly and is _always_ equal to or
>> faster than traditional quicksort (it's a provably improved algorithm).
>
>If you want a cache-friendly sorting algorithm, you need mergesort.
On Fri, Apr 15, 2011 at 12:42 AM, Scott Carey wrote:
> I do know that dual-pivot quicksort provably causes fewer swaps (but the
> same # of compares) than the usual single-pivot quicksort. And swaps are a
> lot slower than you would expect due to the effects on processor caches.
> Therefore it migh
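To make the algorithm under discussion concrete, here is a compact sketch of a Yaroslavskiy-style dual-pivot quicksort on ints. It is an illustration only, with no insertion-sort cutoff for small ranges and no special handling of runs of equal elements, and it makes no claim about which variant actually wins on a given cache hierarchy.

/*
 * Compact sketch of a dual-pivot (Yaroslavskiy-style) quicksort on
 * ints.  Not a production implementation: no insertion-sort cutoff
 * for small ranges and no equal-element handling.
 */
#include <stdio.h>

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sort a[lo..hi] inclusive around two pivots p1 <= p2. */
static void dual_pivot_qsort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;

    if (a[lo] > a[hi])
        swap_int(&a[lo], &a[hi]);
    int p1 = a[lo], p2 = a[hi];

    int lt = lo + 1;            /* a[lo+1 .. lt-1]  <  p1        */
    int gt = hi - 1;            /* a[gt+1 .. hi-1]  >  p2        */
    int i  = lo + 1;            /* a[lt   .. i-1]  in [p1, p2]   */

    while (i <= gt)
    {
        if (a[i] < p1)
            swap_int(&a[i++], &a[lt++]);
        else if (a[i] > p2)
        {
            while (i < gt && a[gt] > p2)
                gt--;
            swap_int(&a[i], &a[gt--]);   /* re-examine a[i] next pass */
        }
        else
            i++;
    }

    lt--; gt++;
    swap_int(&a[lo], &a[lt]);   /* move the pivots into final position */
    swap_int(&a[hi], &a[gt]);

    dual_pivot_qsort(a, lo, lt - 1);
    dual_pivot_qsort(a, lt + 1, gt - 1);
    dual_pivot_qsort(a, gt + 1, hi);
}

int main(void)
{
    int v[] = { 9, 3, 7, 1, 8, 2, 7, 5, 0, 6, 4 };
    int n = (int) (sizeof(v) / sizeof(v[0]));

    dual_pivot_qsort(v, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}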