On Thu, Feb 17, 2022 at 05:25, Kyotaro Horiguchi <
horikyota....@gmail.com> wrote:

> At Thu, 17 Feb 2022 15:50:09 +0800, Julien Rouhaud <rjuju...@gmail.com>
> wrote in
> > On Thu, Feb 17, 2022 at 03:51:26PM +0900, Kyotaro Horiguchi wrote:
> > > So, the function doesn't return 63 for all registered names and wrong
> > > names.
> > >
> > > So other possibilities I can think of are..
> > > - Someone had broken pg_encname_tbl[]
> > > - Cosmic ray hit, or ill memory cell.
> > > - Coverity worked wrong way.
> > >
> > > Could you show the workload for the Coverity warning here?
> >
> > The 63 upthread was hypothetical right?  pg_encoding_max_length()
> shouldn't be
>
> I understand that Coverity complained that pg_verify_mbstr_len is fed
> encoding = 63 by length_in_encoding.  I don't know what made Coverity
> think so.
>
I think I found the reason.


>
> > called with user-dependent data (unlike pg_encoding_max_length_sql()),
> so I
> > also don't see any value spending cycles in release builds.  The error
> should
> > only happen with bogus code, and assert builds are there to avoid that,
> or
> > corrupted memory and in that case we can't make any promise.
>
> Well, it's more or less what I wanted to say. Thanks.
>
One thing about this thread that may go unnoticed is that the analysis
was done with a Windows build.

If we're talking about consistency, then the current implementation of
pg_encoding_max_length is completely inconsistent with the rest of the
file's functions. Even if it saves a few cycles, this is bad practice.

int
pg_encoding_max_length(int encoding)
{
	return (PG_VALID_ENCODING(encoding) ?
			pg_wchar_table[encoding].maxmblen :
			pg_wchar_table[PG_SQL_ASCII].maxmblen);
}



>
> regards.
>
> --
> Kyotaro Horiguchi
> NTT Open Source Software Center
>
