On Mon, Jul 14, 2014 at 12:19 PM, Andres Freund wrote:
> On 2014-07-14 11:24:26 -0400, Robert Haas wrote:
>> On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund wrote:
>> > The actual if (lock != NULL) bit costs significant amounts of cycles?
>> > I'd have assumed that branch prediction takes care of that. [...]

On 2014-07-14 11:24:26 -0400, Robert Haas wrote:
> On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund wrote:
> > The actual if (lock != NULL) bit costs significant amounts of cycles?
> > I'd have assumed that branch prediction takes care of that. Or is it
> > actually the icache not keeping up? Did you measure icache vs. dcache
> > misses? [...]

On Sun, Jul 13, 2014 at 2:23 PM, Andres Freund wrote:
> The actual if (lock != NULL) bit costs significant amounts of cycles?
> I'd have assumed that branch prediction takes care of that. Or is it
> actually the icache not keeping up? Did you measure icache vs. dcache
> misses?
> Have you played [...]

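The NULL-check under discussion can be sketched as below. This is an illustrative stand-in, not PostgreSQL code: `unlikely` wraps GCC/Clang's `__builtin_expect`, which biases code layout so the predicted path stays straight-line, the icache-friendly arrangement the quoted question is getting at; `LWLock`, `free_chunk`, and `chunks_freed` are hypothetical names.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch only; LWLock and free_chunk are hypothetical
 * stand-ins, not the actual PostgreSQL definitions. */
#define unlikely(x) __builtin_expect((x) != 0, 0)

typedef struct LWLock LWLock;   /* opaque: never dereferenced here */

static int chunks_freed;        /* visible effect, for demonstration */

static void
free_chunk(LWLock *lock)
{
    /*
     * The branch at issue: a shared-memory chunk carries a lock pointer,
     * a backend-private chunk does not.  With __builtin_expect the
     * compiler can move the locking path out of line, so a well-predicted
     * NULL test should cost little -- unless the icache is the real
     * bottleneck, as the quoted mail asks.
     */
    if (unlikely(lock != NULL))
    {
        /* real code would acquire the lock here */
        chunks_freed++;
        /* ... and release it here */
    }
    else
        chunks_freed++;         /* backend-private case: no locking */
}
```

Whether the check is cheap in practice is exactly what the thread is trying to measure; the sketch only shows where the branch sits.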
Hi Robert,
On 2014-07-07 15:57:00 -0400, Robert Haas wrote:
> 1. I tried to write a single allocator which would be better than
> aset.c in two ways: first, by allowing allocation from dynamic shared
> memory; and second, by being more memory-efficient than aset.c.
> Heikki told me at PGCon that [...]

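A minimal sketch of the bump-style idea behind such an allocator, with invented names (`SimpleBlock`, `block_alloc`) rather than anything from the actual patch: chunks are carved sequentially out of a block with no per-chunk header and no power-of-two rounding, which is where space savings over aset.c would come from.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch, not code from the patch: a block hands out
 * 8-byte-aligned chunks by bumping an offset. */
typedef struct SimpleBlock
{
    char   *base;   /* start of usable space */
    size_t  used;   /* bytes already handed out */
    size_t  size;   /* total usable bytes */
} SimpleBlock;

static void *
block_alloc(SimpleBlock *b, size_t n)
{
    void   *p;

    n = (n + 7) & ~(size_t) 7;          /* round up to 8-byte alignment */
    if (b->used + n > b->size)
        return NULL;                    /* caller must start a new block */
    p = b->base + b->used;
    b->used += n;
    return p;
}
```

Because `base` can point into a dynamic shared memory segment just as easily as into a malloc'd region, the same scheme covers both goals the mail names: allocation from DSM and better memory efficiency.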
On Fri, Jul 11, 2014 at 11:15 PM, Robert Haas wrote:
> On Thu, Jul 10, 2014 at 1:05 AM, Amit Kapila wrote:
> > If there is a noticeable impact, then do you think having separate
> > file/infrastructure for parallel sort can help, basically non-parallel
> > and parallel sort will have some common [...]

On Thu, Jul 10, 2014 at 1:05 AM, Amit Kapila wrote:
> On Tue, Jul 8, 2014 at 1:27 AM, Robert Haas wrote:
>> 6. In general, I'm worried that it's going to be hard to keep the
>> overhead of parallel sort from leaking into the non-parallel case.
>> With the no-allocator approach, every place that uses
>> GetMemoryChunkSpace() or repalloc() or pfree() [...]

On Tue, Jul 8, 2014 at 1:27 AM, Robert Haas wrote:
>
> 6. In general, I'm worried that it's going to be hard to keep the
> overhead of parallel sort from leaking into the non-parallel case.
> With the no-allocator approach, every place that uses
> GetMemoryChunkSpace() or repalloc() or pfree() will [...]

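The leakage worry can be made concrete with a sketch: if every chunk must record which allocator owns it so that a free can dispatch to the right implementation, then ordinary non-parallel code pays the extra header and the indirect call too. All names below (`ChunkOps`, `my_palloc`, `my_pfree`) are illustrative, not PostgreSQL's.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative only: per-chunk dispatch of the kind the mail worries
 * about.  Every chunk carries a pointer to its allocator's operations,
 * and every free goes through an indirect call. */
typedef struct ChunkOps
{
    void (*free_chunk)(void *chunk);
} ChunkOps;

typedef struct ChunkHeader
{
    const ChunkOps *ops;    /* paid by every allocation, parallel or not */
} ChunkHeader;

static int frees_seen;      /* visible effect, for demonstration */

static void
plain_free(void *chunk)
{
    frees_seen++;
    free(((ChunkHeader *) chunk) - 1);
}

static const ChunkOps plain_ops = { plain_free };

static void *
my_palloc(size_t n)
{
    ChunkHeader *hdr = malloc(sizeof(ChunkHeader) + n);

    hdr->ops = &plain_ops;
    return hdr + 1;
}

static void
my_pfree(void *chunk)
{
    /* the indirect call a no-allocator design would add to every free */
    (((ChunkHeader *) chunk) - 1)->ops->free_chunk(chunk);
}
```

The hard-to-inline, hard-to-predict indirect call in `my_pfree` is the per-call overhead that would land on the non-parallel case as well.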
On Mon, Jul 7, 2014 at 7:29 PM, Peter Geoghegan wrote:
>> I do think that's a problem with our sort implementation, but it's not
>> clear to me whether it's *more* of a problem for parallel sort than it
>> is for single-backend sort.
>
> If you'll forgive me for going on about my patch on this thread [...]

On Mon, Jul 7, 2014 at 7:04 PM, Robert Haas wrote:
> The testing I did showed about a 5% overhead on REINDEX INDEX
> pgbench_accounts_pkey from one extra tuple copy (cf.
> 9f03ca915196dfc871804a1f8aad26207f601fd6). Of course that could vary
> by circumstance for a variety of reasons.
Be careful [...]

On Mon, Jul 7, 2014 at 5:37 PM, Peter Geoghegan wrote:
> On Mon, Jul 7, 2014 at 12:57 PM, Robert Haas wrote:
>> 5. It's tempting to look at other ways of solving the parallel sort
>> problem that don't need an allocator - perhaps by simply packing all
>> the tuples into a DSM one after the next. [...]

On Mon, Jul 7, 2014 at 12:57 PM, Robert Haas wrote:
> 5. It's tempting to look at other ways of solving the parallel sort
> problem that don't need an allocator - perhaps by simply packing all
> the tuples into a DSM one after the next. But this is not easy to do,
> or at least it's not easy to [...]

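The "pack tuples one after the next" alternative can be sketched as length-prefixed appends into a flat buffer (a DSM segment in the real case; the names here are invented). The sketch also hints at why it is not easy: tuples can only be appended and scanned in order, with no repalloc or pfree of an individual tuple.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration: append length-prefixed tuples to a flat
 * buffer, remembering each tuple's offset.  Works the same whether the
 * buffer lives in private memory or in a mapped DSM segment. */
typedef struct TupleSpace
{
    char   *buf;
    size_t  used;
    size_t  size;
} TupleSpace;

/* Returns the new tuple's offset, or (size_t) -1 if the buffer is full. */
static size_t
tuple_append(TupleSpace *ts, const void *data, uint32_t len)
{
    size_t  off = ts->used;

    if (off + sizeof(uint32_t) + len > ts->size)
        return (size_t) -1;
    memcpy(ts->buf + off, &len, sizeof(uint32_t));
    memcpy(ts->buf + off + sizeof(uint32_t), data, len);
    ts->used = off + sizeof(uint32_t) + len;
    return off;
}
```

Readers locate a tuple only by its offset; there is no way to shrink, grow, or release one in place, which is the kind of limitation the quoted mail alludes to.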
I wrote previously about my efforts to create a new memory allocator
(then called sb_alloc, now renamed to BlockAllocatorContext in my git
branch for consistency with existing identifiers) for PostgreSQL. To
make a long story short, many of the basic concepts behind that patch
still seem sound to [...]

12 matches