On Sun, Sep 10, 2017 at 10:42 AM Merlin Moncure <mmonc...@gmail.com> wrote:

> On Friday, September 8, 2017, John Turner <fenwayri...@gmail.com> wrote:
>
>>
>>
>> On Fri, Sep 8, 2017 at 6:57 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>>> Ron Johnson <ron.l.john...@cox.net> writes:
>>> > Based on LENGTH(offending_column), none of the values are more than 144
>>> > bytes in this 44.2M row table.  Even though VARCHAR is, by definition,
>>> > variable length, are there any internal design issues which would make
>>> > things more efficient if it were dropped to, for example, VARCHAR(256)?
>>>
>>> No.
>>>
>> So the declared column length has no bearing on memory grants during
>> plan generation/execution?
>>
>
> Nope.  Memory usage is proportional to the size of the string, not the
> maximum length for varchar.  Maximum length is a constraint.
>
Ok, thanks for verifying.  I was curious, since other platforms seem to
handle this aspect of memory allocation differently (more crudely,
perhaps), basing memory grant estimates on how fully populated the column
_might_ be given its size constraint:
https://sqlperformance.com/2017/06/sql-plan/performance-myths-oversizing-strings
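For what it's worth, a quick way to see the Postgres behaviour is to compare
pg_column_size() for the same value stored under different declared limits.
A throwaway sketch (the temp table and column names are made up):

  -- Hypothetical temp table: the declared limits differ, the contents don't.
  CREATE TEMP TABLE varchar_demo (
      narrow_col varchar(256),
      wide_col   varchar(10000)
  );

  INSERT INTO varchar_demo VALUES (repeat('x', 144), repeat('x', 144));

  -- Both columns report the same stored size, driven by the actual
  -- string, not by the declared maximum.
  SELECT pg_column_size(narrow_col) AS narrow_bytes,
         pg_column_size(wide_col)   AS wide_bytes
    FROM varchar_demo;

The (n) in varchar(n) only shows up as a length check on insert/update, per
Merlin's point above; it doesn't reserve or pre-allocate anything.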

John
