On Thu, Jan 15, 2015 at 5:02 AM, Tomas Vondra wrote:
> Maybe we can try again later, but there's no point in keeping this in the
> current CF.
>
> Any objections?
>
None, marked as rejected.
--
Michael
On 11.12.2014 23:46, Tomas Vondra wrote:
> On 11.12.2014 22:16, Robert Haas wrote:
>> On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra wrote:
>>
>>> The idea was that if we could increase the load a bit (e.g. using 2
>>> tuples per bucket instead of 1), we will still use a single batch in
>>> some cases (when we miss the work_mem threshold by just a bit). The
>>> lookups will be slower, …
On Fri, Dec 12, 2014 at 5:19 AM, Robert Haas wrote:
> Well, this is sort of one of the problems with work_mem. When we
> switch to a tape sort, or a tape-based materialize, we're probably far
> from out of memory. But trying to set work_mem to the amount of
> memory we have can easily result in …
On Fri, Dec 12, 2014 at 4:54 PM, Tomas Vondra wrote:
> Well, this is sort of one of the problems with work_mem. When we
> switch to a tape sort, or a tape-based materialize, we're probably far
> from out of memory. But trying to set work_mem to the amount of
> memory we have can easily result in …
On 12.12.2014 22:13, Robert Haas wrote:
> On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra wrote:
>> On 12.12.2014 14:19, Robert Haas wrote:
>>> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>>>
Regarding the "sufficiently small" - considering today's hardware, we're
probably talking about gigabytes. …
On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra wrote:
> On 12.12.2014 14:19, Robert Haas wrote:
>> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>>
>>> Regarding the "sufficiently small" - considering today's hardware, we're
>>> probably talking about gigabytes. On machines with significant memory
>>> pressure (forcing the temporary files to disk), …
On 12.12.2014 14:19, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>
>> Regarding the "sufficiently small" - considering today's hardware, we're
>> probably talking about gigabytes. On machines with significant memory
>> pressure (forcing the temporary files to disk), …
On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra wrote:
>>> The idea was that if we could increase the load a bit (e.g. using 2
>>> tuples per bucket instead of 1), we will still use a single batch in
>>> some cases (when we miss the work_mem threshold by just a bit). The
>>> lookups will be slower, …
On 11.12.2014 22:16, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra wrote:
>> No, it's not rescanned. It's scanned only once (for the batch #0), and
>> tuples belonging to the other batches are stored in files. If the number
>> of batches needs to be increased (e.g. because of incorrect estimate
>> of the inner table), …
On Thu, Dec 11, 2014 at 2:51 PM, Tomas Vondra wrote:
> No, it's not rescanned. It's scanned only once (for the batch #0), and
> tuples belonging to the other batches are stored in files. If the number
> of batches needs to be increased (e.g. because of incorrect estimate of
> the inner table), the …
On 11.12.2014 20:00, Robert Haas wrote:
> On Thu, Dec 11, 2014 at 12:29 PM, Kevin Grittner wrote:
>>
>> Under what conditions do you see the inner side get loaded into the
>> hash table multiple times?
>
> Huh, interesting. I guess I was thinking that the inner side got
> rescanned for each new …
On Thu, Dec 11, 2014 at 12:29 PM, Kevin Grittner wrote:
> Robert Haas wrote:
>> On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra wrote:
>> select a.i, b.i from a join b on (a.i = b.i);
>>
>> I think the concern is that the inner side might be something more
>> elaborate than a plain table scan, like an aggregate or join. …
Robert Haas wrote:
> On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra wrote:
>> select a.i, b.i from a join b on (a.i = b.i);
>
> I think the concern is that the inner side might be something more
> elaborate than a plain table scan, like an aggregate or join. I might
> be all wet, but my impression is that you can make …
On Sat, Dec 6, 2014 at 10:08 PM, Tomas Vondra wrote:
> select a.i, b.i from a join b on (a.i = b.i);
I think the concern is that the inner side might be something more
elaborate than a plain table scan, like an aggregate or join. I might
be all wet, but my impression is that you can make …
Tomas Vondra wrote:
> back when we were discussing the hashjoin patches (now committed),
> Robert proposed that maybe it'd be a good idea to sometimes increase the
> number of tuples per bucket instead of batching.
>
> That is, while initially sizing the hash table - if the hash table with
> enough buckets to satisfy NTUP_PER_BUCKET …
Hi,
back when we were discussing the hashjoin patches (now committed),
Robert proposed that maybe it'd be a good idea to sometimes increase the
number of tuples per bucket instead of batching.
That is, while initially sizing the hash table - if the hash table with
enough buckets to satisfy NTUP_PER_BUCKET …