Jose Kahan wrote:
> I am a contributor to a mail hypertext archiving system called
> hypermail [1], which is written in C.
> 
> Recently a bug was raised that one of its parsers had problems
> when the input string had a nbsp. As you may imagine from my subject,
> this is because that parser uses sscanf and the nbsp corresponds
> to UTF-8 U+00A0 character:
> 
>   urlscan = sscanf(inputp, "%255[^] )<>\"\'\n[\t\\]", urlbuff);
> 
> Do you know if there's an sscanf function that is UTF-8 aware?

According to POSIX [1], the %l[ directive parses multibyte characters.
If you set the locale to a UTF-8 locale - such as through
  setlocale (LC_CTYPE, "en_US.UTF-8");
- you should be able to achieve this, at least on glibc systems.

However, this is complex code, and I doubt all platforms get it
right. We found 20 bugs in *printf implementations on various
platforms; I wouldn't be surprised if there were 10 bugs in *scanf
implementations as well, and this directive is among the hairiest
parts of sscanf.

> - Convert the input string to wchar and use swscanf instead.

This is what I would suggest, because
  - swscanf is portable enough [2].
  - Parsing sequences of wide characters in a wide-character string is
    more likely to be implemented correctly everywhere.

> seeing if we can replace the sscanf eventually by regexps.

Does anyone have experience with this?

Bruno

[1] https://pubs.opengroup.org/onlinepubs/9699919799/functions/fscanf.html
[2] https://www.gnu.org/software/gnulib/manual/html_node/swscanf.html
