On 08/17/2018 06:14 AM, Joseph Myers wrote:
On Fri, 17 Aug 2018, Jeff Law wrote:

On 08/16/2018 05:01 PM, Joseph Myers wrote:
On Thu, 16 Aug 2018, Jeff Law wrote:

restores previous behavior.  The sprintf bits want to count element-sized
chunks, which for wchars is 4 bytes (that count will then be

   /* Compute the range the argument's length can be in.  */
-  fmtresult slen = get_string_length (arg);
+  int count_by = dir.specifier == 'S' || dir.modifier == FMT_LEN_l ? 4 : 1;

I don't see how a hardcoded 4 is correct here.  Surely you need to examine
wchar_type_node to determine its actual size for this target.
We did kick this around a little.  IIRC Martin didn't think that it was
worth handling the 2-byte wchar case.

Sorry, I think we may have miscommunicated -- I didn't think it
was useful to pass a size of the character type to the function.
I agree that passing in a hardcoded constant doesn't seem right
(even if GCC's wchar_t were always 4 bytes wide).

I'm still not sure I see the benefit of passing in the expected
element size given that the patch causes c_strlen() to fail when
the actual element size doesn't match ELTSIZE.  If the caller
asks for the number of bytes in a const wchar_t array, it should
get back the number of bytes.  (I could see it fail if the caller
asked for the number of words in a char array whose size was
not evenly divisible by wordsize.)
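
To illustrate the byte-vs-element distinction (a standalone, hypothetical
example for exposition only, not code from the patch): for a wide string,
the length in bytes is just the element count scaled by sizeof (wchar_t),
and that is what a byte query on a wchar_t array should return.

  /* Hypothetical illustration of element vs. byte counts for a wide
     string; nothing here is GCC-internal code.  */
  #include <stdio.h>
  #include <wchar.h>

  int
  main (void)
  {
    const wchar_t ws[] = L"abc";

    /* Element count, excluding the terminating nul: 3.  */
    size_t elts = wcslen (ws);

    /* Byte count of the same string data: 3 * sizeof (wchar_t).
       A byte query on this array should yield this value rather
       than failing because the element size is not 1.  */
    size_t bytes = elts * sizeof (wchar_t);

    printf ("%zu elements, %zu bytes\n", elts, bytes);
    return 0;
  }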

Martin


There's a difference between explicitly not handling it and silently
passing a wrong value.

In theory something like WCHAR_TYPE_SIZE / BITS_PER_UNIT probably does
the trick.   I'm a bit leery of using that though.  We don't use it
anywhere else within GCC AFAICT.

WCHAR_TYPE_SIZE is wrong because it doesn't account for flag_short_wchar.
As far as I can see only ada/gcc-interface/targtyps.c uses WCHAR_TYPE_SIZE
now.  TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT is what should be
used.
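
Putting that suggestion together with the hunk quoted above, the
element-size computation might look something like the sketch below.
This is only a sketch: dir.specifier, dir.modifier, and FMT_LEN_l are
taken from the quoted patch context, and the surrounding code is assumed
rather than shown.

  /* Sketch: derive the wide-character element size from wchar_type_node
     (which honors -fshort-wchar) instead of hardcoding 4.  */
  int wchar_bytes = TYPE_PRECISION (wchar_type_node) / BITS_PER_UNIT;
  int count_by = dir.specifier == 'S' || dir.modifier == FMT_LEN_l
                 ? wchar_bytes : 1;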
