On 08/17/2018 06:14 AM, Joseph Myers wrote:
> On Fri, 17 Aug 2018, Jeff Law wrote:
> 
>> On 08/16/2018 05:01 PM, Joseph Myers wrote:
>>> On Thu, 16 Aug 2018, Jeff Law wrote:
>>>
>>>> restores previous behavior.  The sprintf bits want to count
>>>> element-sized chunks, which for wchars is 4 bytes (that count will then be
>>>
>>>>    /* Compute the range the argument's length can be in.  */
>>>> -  fmtresult slen = get_string_length (arg);
>>>> +  int count_by = dir.specifier == 'S' || dir.modifier == FMT_LEN_l ? 4 : 1;
>>>
>>> I don't see how a hardcoded 4 is correct here.  Surely you need to examine
>>> wchar_type_node to determine its actual size for this target.
>> We did kick this around a little.  IIRC Martin didn't think that it was
>> worth handling the 2 byte wchar case.
> 
> There's a difference between explicitly not handling it and silently 
> passing a wrong value.
> 
>> In theory something like WCHAR_TYPE_SIZE / BITS_PER_UNIT probably does
>> the trick.   I'm a bit leery of using that though.  We don't use it
>> anywhere else within GCC AFAICT.
> 
> WCHAR_TYPE_SIZE is wrong because it doesn't account for flag_short_wchar.  
> As far as I can see only ada/gcc-interface/targtyps.c uses WCHAR_TYPE_SIZE 
> now.  TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT is what should be 
> used.
But that's specific to the c-family front-ends.
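
For concreteness, a minimal sketch (not the actual patch) of what that
suggestion would look like in the hunk quoted above, assuming the
sprintf pass can rely on wchar_type_node being set up, which is exactly
the c-family caveat:

  /* Compute the range the argument's length can be in.  */
  int count_by = 1;
  if (dir.specifier == 'S' || dir.modifier == FMT_LEN_l)
    /* Size of a wide character in bytes; unlike WCHAR_TYPE_SIZE this
       honors -fshort-wchar.  */
    count_by = TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT;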

There's MODIFIED_WCHAR_TYPE, which is ultimately used to build
wchar_type_node for the c-family front-ends.  Maybe we could construct
something from that.
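
As a rough illustration only (the helper below is hypothetical, not an
existing GCC interface): if I read defaults.h right, MODIFIED_WCHAR_TYPE
expands to the name of the underlying C type with flag_short_wchar
already taken into account, so the middle end would still have to map
that name onto a size, e.g. via the *_TYPE_SIZE target macros:

  /* Hypothetical helper sketching the idea above: derive the wide
     character size from the same target information the c-family
     front ends use to build wchar_type_node.  Assumes GCC's usual
     includes (system.h for strstr, tm.h for the target macros).  */
  static unsigned int
  wchar_size_in_bytes (void)
  {
    const char *name = MODIFIED_WCHAR_TYPE;
    unsigned int bits;
    if (strstr (name, "short"))
      bits = SHORT_TYPE_SIZE;
    else if (strstr (name, "long"))
      bits = LONG_TYPE_SIZE;
    else if (strstr (name, "char"))
      bits = CHAR_TYPE_SIZE;
    else
      bits = INT_TYPE_SIZE;
    return bits / BITS_PER_UNIT;
  }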


jeff
