On Fri, Mar 2, 2018 at 05:21:28PM -0500, Tels wrote:
> Hello Robert,
>
> On Fri, March 2, 2018 12:22 pm, Robert Haas wrote:
> > On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas wrote:
> >> [ latest patches ]
> >
> > Committed. Thanks for the review.
>
> Cool :)
>
> There is a typo, tho:
>
>
Hello Robert,
On Fri, March 2, 2018 12:22 pm, Robert Haas wrote:
> On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas wrote:
>> [ latest patches ]
>
> Committed. Thanks for the review.
Cool :)
There is a typo, tho:
+ /*
+ * If the counterpary is known to have attached, we can read m
On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas wrote:
> [ latest patches ]
Committed. Thanks for the review.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Feb 27, 2018 at 4:06 PM, Andres Freund wrote:
>> OK, I'll try to check how feasible that would be.
>
> cool.
It's not too hard, but it doesn't really seem to help, so I'm inclined
to leave it alone. To make it work, you need to keep two separate
counters in the shm_mq_handle, one for the
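
The message is cut off above, but the general technique being weighed, keeping private counters in the per-backend handle and only updating the shared counter in batches, can be illustrated with a small, hypothetical C11 sketch. The struct and function names below are invented for illustration and stand in for PostgreSQL's pg_atomic_* API; this is not the actual shm_mq code.

#include <stdatomic.h>
#include <stdint.h>

/* Shared state; in the real case this lives in dynamic shared memory. */
typedef struct
{
    _Atomic uint64_t bytes_written;     /* counter the receiver reads */
} demo_mq;

/* Per-backend handle, private to the sender. */
typedef struct
{
    demo_mq    *mq;
    uint64_t    written;    /* total bytes this sender has written */
    uint64_t    published;  /* value last stored into mq->bytes_written */
} demo_mq_handle;

#define DEMO_PUBLISH_THRESHOLD 4096

/* Note that 'n' more bytes were copied into the ring; publish lazily. */
static void
demo_mq_note_written(demo_mq_handle *h, uint64_t n)
{
    h->written += n;
    if (h->written - h->published >= DEMO_PUBLISH_THRESHOLD)
    {
        /* Only the sender writes this counter, so a release store suffices. */
        atomic_store_explicit(&h->mq->bytes_written, h->written,
                              memory_order_release);
        h->published = h->written;
    }
}

The point of carrying two counters in the handle is that the hot path touches only backend-local memory; the shared counter is updated once per batch rather than once per write.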
Hi,
On 2018-02-27 16:03:17 -0500, Robert Haas wrote:
> On Wed, Feb 7, 2018 at 1:41 PM, Andres Freund wrote:
> > Well, it's more than just systems like that - for 64bit atomics we
> > sometimes do fall back to spinlock based atomics on 32bit systems, even
> > if they support 32 bit atomics.
>
> I
On Wed, Feb 7, 2018 at 1:41 PM, Andres Freund wrote:
> Well, it's more than just systems like that - for 64bit atomics we
> sometimes do fall back to spinlock based atomics on 32bit systems, even
> if they support 32 bit atomics.
I built with -m32 on my laptop and tried "select aid, count(*) from
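
For background on the 32-bit concern Andres raises: when a platform has no native 64-bit atomics, PostgreSQL's atomics layer falls back to a spinlock-based emulation, so 64-bit reads and updates become noticeably heavier there. A quick standalone C11 program (not PostgreSQL code) that reports whether the build target has lock-free 64-bit atomics, which is roughly the property the fallback exists to cover:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    atomic_uint_least64_t counter;

    atomic_init(&counter, 0);

    /*
     * 0 here means 64-bit atomics are emulated with a lock on this target
     * (common for some 32-bit builds); that is roughly the case the
     * spinlock fallback in PostgreSQL's atomics layer handles.
     */
    printf("64-bit atomics lock-free: %d\n",
           (int) atomic_is_lock_free(&counter));

    atomic_fetch_add(&counter, 1);
    printf("counter = %llu\n", (unsigned long long) atomic_load(&counter));
    return 0;
}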
Hi,
On 2018-01-25 12:09:23 -0500, Robert Haas wrote:
> > Perhaps a short benchmark for 32bit systems using shm_mq wouldn't hurt?
> > I suspect there won't be much of a performance impact, but it's probably
> > worth checking.
>
> I don't think I understand your concern here. If this is used on a
On Tue, Jan 9, 2018 at 7:09 PM, Andres Freund wrote:
>> + * mq_sender and mq_bytes_written can only be changed by the sender.
>> + * mq_receiver and mq_sender are protected by mq_mutex, although, importantly,
>> + * they cannot change once set, and thus may be read without a lock once this
Hi,
On 2017-12-04 10:50:53 -0500, Robert Haas wrote:
> Subject: [PATCH 1/2] shm-mq-less-spinlocks-v2
> + * mq_sender and mq_bytes_written can only be changed by the sender.
> + * mq_receiver and mq_sender are protected by mq_mutex, although, importantly,
> + * they cannot change once set, and
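
The property that comment documents, a field that is assigned once (under the spinlock in the real code) and never changes afterwards, is what allows later reads to skip the lock entirely. A minimal, hypothetical C11 illustration of that publish-once pattern; the types and names are invented and this is not the shm_mq implementation:

#include <stdatomic.h>
#include <stddef.h>

typedef struct PGPROC PGPROC;   /* opaque stand-in for the real struct */

typedef struct
{
    /* Written exactly once by the attaching process, never changed again. */
    _Atomic(PGPROC *) receiver;
} demo_queue;

/* Attaching side: publish the pointer once. */
static void
demo_attach(demo_queue *q, PGPROC *proc)
{
    atomic_store_explicit(&q->receiver, proc, memory_order_release);
}

/*
 * Reading side: after a non-NULL value has been observed with acquire
 * ordering, it can be cached and reused without any lock or barrier,
 * because the field is never modified again.
 */
static PGPROC *
demo_get_receiver(demo_queue *q, PGPROC **cached)
{
    if (*cached == NULL)
        *cached = atomic_load_explicit(&q->receiver, memory_order_acquire);
    return *cached;
}

Once the reader has seen a non-NULL value with acquire ordering, caching it in its handle is safe precisely because the writer never stores to the field again.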
On Mon, Dec 4, 2017 at 9:20 PM, Robert Haas wrote:
> On Sun, Dec 3, 2017 at 10:30 PM, Amit Kapila wrote:
> > I thought there are some cases (though less) where we want to Shutdown
> > the nodes (ExecShutdownNode) earlier and release the resources sooner.
> > However, if you are not completely
On Sun, Dec 3, 2017 at 10:30 PM, Amit Kapila wrote:
> I thought there are some cases (though less) where we want to Shutdown
> the nodes (ExecShutdownNode) earlier and release the resources sooner.
> However, if you are not completely sure about this change, then we can
> leave it as it. Thanks f
On Fri, Dec 1, 2017 at 8:04 PM, Robert Haas wrote:
> On Sun, Nov 26, 2017 at 3:15 AM, Amit Kapila wrote:
>> Yeah and I think something like that can happen after your patch
>> because now the memory for tuples returned via TupleQueueReaderNext
>> will be allocated in ExecutorState and that can la
On Sun, Nov 26, 2017 at 3:15 AM, Amit Kapila wrote:
> Yeah and I think something like that can happen after your patch
> because now the memory for tuples returned via TupleQueueReaderNext
> will be allocated in ExecutorState and that can last for long. I
> think it is better to free memory, but
On Sun, Nov 26, 2017 at 5:15 PM, Amit Kapila wrote:
> Yeah and I think something like that can happen after your patch
> because now the memory for tuples returned via TupleQueueReaderNext
> will be allocated in ExecutorState and that can last for long. I
> think it is better to free memory, but
On Sat, Nov 25, 2017 at 9:13 PM, Robert Haas wrote:
> On Wed, Nov 22, 2017 at 8:36 AM, Amit Kapila wrote:
>>> remove-memory-leak-protection-v1.patch removes the memory leak
>>> protection that Tom installed upon discovering that the original
>>> version of tqueue.c leaked memory like crazy. I th
On Wed, Nov 22, 2017 at 8:36 AM, Amit Kapila wrote:
>> remove-memory-leak-protection-v1.patch removes the memory leak
>> protection that Tom installed upon discovering that the original
>> version of tqueue.c leaked memory like crazy. I think that it
>> shouldn't do that any more, courtesy of
>>
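
For context on the worry about per-tuple allocations accumulating in ExecutorState: a standard backend idiom for bounding that kind of growth is to run the per-row work in a short-lived memory context that is reset every cycle. The sketch below only shows that generic idiom with the stock memory-context API; fetch_row() and process_one_row() are placeholders invented here, and this is not the change actually being discussed in the patch.

#include "postgres.h"
#include "utils/memutils.h"

/* Placeholders invented for this sketch. */
extern void *fetch_row(void);
extern void process_one_row(void *row);

/*
 * Consume rows without letting per-row allocations pile up in a long-lived
 * context such as ExecutorState: all fetching happens in a small context
 * that is reset after every row.
 */
static void
consume_rows_bounded(MemoryContext parent)
{
    MemoryContext perRowCxt;
    MemoryContext oldCxt;

    perRowCxt = AllocSetContextCreate(parent,
                                      "per-row working data",
                                      ALLOCSET_DEFAULT_SIZES);

    for (;;)
    {
        void       *row;

        /* Allocations made while fetching land in perRowCxt. */
        oldCxt = MemoryContextSwitchTo(perRowCxt);
        row = fetch_row();
        MemoryContextSwitchTo(oldCxt);

        if (row == NULL)
            break;

        process_one_row(row);

        /* Release everything the row needed in one shot. */
        MemoryContextReset(perRowCxt);
    }

    MemoryContextDelete(perRowCxt);
}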
On Thu, Nov 16, 2017 at 12:24 AM, Andres Freund wrote:
> Hi,
>
> On 2017-11-15 13:48:18 -0500, Robert Haas wrote:
>> I think that we need a little bit deeper analysis here to draw any
>> firm conclusions.
>
> Indeed.
>
>
>> I suspect that one factor is that many of the queries actually send
>> ver
On Sat, Nov 18, 2017 at 7:23 PM, Amit Kapila wrote:
> On Fri, Nov 10, 2017 at 8:39 PM, Robert Haas wrote:
>> On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila wrote:
>>> I am seeing the assertion failure as below on executing the above
>>> mentioned Create statement:
>>
>
> - if (!ExecContextForcesOi
On Fri, Nov 10, 2017 at 8:39 PM, Robert Haas wrote:
> On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila wrote:
>> I am seeing the assertion failure as below on executing the above
>> mentioned Create statement:
>>
>> TRAP: FailedAssertion("!(!(tup->t_data->t_infomask & 0x0008))", File:
>> "heapam.c",
On Thu, Nov 16, 2017 at 10:23 AM, Ants Aasma wrote:
> For the Gather Merge driven by Parallel Index Scan case it seems to me
> that the correct queue size is one that can store two index pages
> worth of tuples. Additional space will always help buffer any
> performance variations, but there shoul
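
A rough back-of-the-envelope for that sizing rule, using purely illustrative numbers rather than measurements: with the default 8kB block size, a btree leaf page holding ~20-byte index entries fits on the order of 8192 / 20 ≈ 400 items, so two pages' worth is roughly 800 tuples; at ~80 bytes per tuple shipped through the queue that is about 800 * 80 = 64000 bytes, i.e. in the same ballpark as the 64kB default PARALLEL_TUPLE_QUEUE_SIZE.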
On Thu, Nov 16, 2017 at 6:42 AM, Robert Haas wrote:
> The problem here is that we have no idea how big the queue needs to
> be. The workers will always be happy to generate tuples faster than
> the leader can read them, if that's possible, but it will only
> sometimes help performance to let them
On Wed, Nov 15, 2017 at 9:34 PM, Amit Kapila wrote:
> The main advantage of local queue idea is that it won't consume any
> memory by default for running parallel queries. It would consume
> memory when required and accordingly help in speeding up those cases.
> However, increasing the size of sh
On Thu, Nov 16, 2017 at 12:18 AM, Robert Haas wrote:
> On Tue, Nov 14, 2017 at 7:31 AM, Rafia Sabih wrote:
> Similarly, I think that faster_gather_v3.patch is effective here
> because it lets all the workers run at the same time, not because
> Gather gets any faster. The local queue is 100x
Hi,
On 2017-11-15 13:48:18 -0500, Robert Haas wrote:
> I think that we need a little bit deeper analysis here to draw any
> firm conclusions.
Indeed.
> I suspect that one factor is that many of the queries actually send
> very few rows through the Gather.
Yep. I kinda wonder if the same result
On Tue, Nov 14, 2017 at 7:31 AM, Rafia Sabih wrote:
> Case 2: patches applied as in case 1 +
> a) increased PARALLEL_TUPLE_QUEUE_SIZE to 655360
> No significant change in performance in any query
> b) increased PARALLEL_TUPLE_QUEUE_SIZE to 65536 * 50
> Performance improved from 2
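
For anyone wanting to repeat that experiment: PARALLEL_TUPLE_QUEUE_SIZE is a compile-time constant (65536 by default, defined in src/backend/executor/execParallel.c in the releases discussed here), so trying a larger queue means editing the define and rebuilding, e.g.:

/* src/backend/executor/execParallel.c -- enlarged for experimentation only */
#define PARALLEL_TUPLE_QUEUE_SIZE   (65536 * 50)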