https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103521

--- Comment #4 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by David Malcolm <dmalc...@gcc.gnu.org>:

https://gcc.gnu.org/g:458ad38ce2bbec85016d88757ec6a35d2c393e2c

commit r12-7494-g458ad38ce2bbec85016d88757ec6a35d2c393e2c
Author: David Malcolm <dmalc...@redhat.com>
Date:   Fri Mar 4 13:51:14 2022 -0500

    analyzer: reduce svalue depth limit from 13 to 12 [PR103521]

    PR analyzer/103521 reports that commit
    r12-5585-g132902177138c09803d639e12b1daebf2b9edddc
    ("analyzer: further false leak fixes due to overzealous state merging
    [PR103217]")
    led to failures of gcc.dg/analyzer/pr93032-mztools.c on some targets,
    where rather than reporting FILE * leaks, the analyzer would hit
    complexity limits and give up.

    The cause is that pr93032-mztools.c has some 'unsigned char' values that
    are copied to 'char'.  On targets where 'char' defaults to being signed,
    this leads to casts, whereas on targets where 'char' defaults to being
    unsigned, no casts are needed.

    When the casts occur, various symbolic values within the loop (the
    locals 'crc', 'cpsize', and 'uncpsize') become sufficiently complex as
    to hit the --param=analyzer-max-svalue-depth= limit, and are treated as
    UNKNOWN, allowing the analysis of the loop to quickly terminate, with
    much of this state as UNKNOWN (but retaining the FILE * information, and
    thus correctly reporting the FILE * leaks).

    Without the casts, the symbolic values for these variables don't quite
    hit the complexity limit, and the analyzer attempts to track these
    values in the loop, leading to the analyzer eventually hitting the
    per-program-point limit on the number of states, and giving up on
    these execution paths, thus failing to report the FILE * leaks.

    This patch tweaks the default value of the param
      --param=analyzer-max-svalue-depth=
    from 13 down to 12.  This allows the pr93032-mztools.c testcase to
    succeed with both -fsigned-char and -funsigned-char, and thus allows
    this integration test to succeed on both styles of target without
    requiring extra command-line flags.  The patch duplicates the test so
    it runs with both -fsigned-char and -funsigned-char.
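
    Since only the default changes, the limit remains tunable on the
    command line; a hypothetical invocation restoring the old value
    (foo.c is a placeholder file name) might look like:

```shell
# Hypothetical invocation: run the analyzer with the svalue depth
# limit raised back to the pre-patch default of 13.
gcc -fanalyzer --param=analyzer-max-svalue-depth=13 -c foo.c
```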

    My hope is that this will allow similar cases to terminate loop analysis
    earlier.  I tried reducing it further, but doing so caused some test
    cases to regress.

    The tradeoff here is between:
    (a) precision of individual states in the analysis, versus
    (b) maximizing code-path coverage in the analysis

    I can imagine a more nuanced approach that splits the current
    per-program-point hard limit into soft and hard limits: on hitting the
    soft limit at a program point, go into a less precise mode for states
    at that program point, in the hope that we can fully explore execution
    paths beyond it without hitting the hard limit, but this seems like
    GCC 13 material.

    Another possible future fix might be for the analysis plan to make an
    attempt to prioritize parts of the code in an enode budget, rather than
    setting the same hard limit uniformly across all program points.

    gcc/analyzer/ChangeLog:
            PR analyzer/103521
            * analyzer.opt (-param=analyzer-max-svalue-depth=): Reduce from 13
            to 12.

    gcc/testsuite/ChangeLog:
            PR analyzer/103521
            * gcc.dg/analyzer/pr93032-mztools.c: Move to...
            * gcc.dg/analyzer/pr93032-mztools-signed-char.c: ...this, adding
            -fsigned-char to args, and...
            * gcc.dg/analyzer/pr93032-mztools-unsigned-char.c: ...copy to here,
            adding -funsigned-char to args.

    Signed-off-by: David Malcolm <dmalc...@redhat.com>
