makefile dependencies
Hi all,

I'm wondering if there is any guidance regarding the list of dependencies to include for object files. For example, I see this list of dependencies for aarch64-builtins.o in aarch64/t-aarch64:

  $(srcdir)/config/aarch64/aarch64-builtins.c $(CONFIG_H) \
  $(SYSTEM_H) coretypes.h $(TM_H) \
  $(RTL_H) $(TREE_H) expr.h $(TM_P_H) $(RECOG_H) langhooks.h \
  $(DIAGNOSTIC_CORE_H) $(OPTABS_H) \
  $(srcdir)/config/aarch64/aarch64-simd-builtins.def \
  $(srcdir)/config/aarch64/aarch64-simd-builtin-types.def \
  aarch64-builtin-iterators.h

But I can see that aarch64-builtins.c also includes headers not listed here (function.h and basic-block.h, for example). Are we supposed to keep track of the transitive header dependencies manually and keep the makefiles up to date, or is there some sort of automatic dependency management happening on our behalf? Something along the lines of some of the answers provided here: https://stackoverflow.com/questions/2394609/makefile-header-dependencies

Thanks,
Andrew
RE: makefile dependencies
Thanks, Joseph. This is exactly what I was looking for.

-Andrew

-----Original Message-----
From: gcc-ow...@gcc.gnu.org On Behalf Of Joseph Myers
Sent: Tuesday, October 8, 2019 3:57 PM
To: Andrew Dean
Cc: gcc@gcc.gnu.org
Subject: Re: makefile dependencies

On Tue, 8 Oct 2019, Andrew Dean via gcc wrote:

> But I can see that aarch64-builtins.c also includes headers not listed
> here (function.h and basic-block.h, for example). Are we supposed to
> keep track of the transitive header dependencies manually and keep the
> makefiles up to date, or is there some sort of automatic dependency
> management happening on our behalf? Something along the lines of some
> of the answers provided here:
> https://stackoverflow.com/questions/2394609/makefile-header-dependencies

The automatic dependency patch series
<https://gcc.gnu.org/ml/gcc-patches/2013-09/msg01662.html> did not
generally address target-specific files (though it updated t-i386 as an
example); see <https://gcc.gnu.org/ml/gcc-patches/2013-07/msg01218.html>
for more discussion of what it did not include. It is quite possible
that some places are specifying dependencies manually unnecessarily.

--
Joseph S. Myers
jos...@codesourcery.com
GCC selftest improvements
TL;DR: I'd like to propose adding a dependency on a modern unit testing framework to make it easier to write unit tests within GCC. Before I spend much more time on it, what sort of buy-in should I get? Are there any people in particular I should work more closely with as I make this change?

Terminology: Within GCC, there are two types of tests in place: unit tests and regression tests. The unit tests have been written with a home-grown selftest framework and run as part of the build process; any failure in a unit test results in no compiler being produced. The regression tests, on the other hand, run after the build and use the separate DejaGnu framework. In this email I am only concerning myself with the unit tests, and throughout the remainder of the email any mention of tests refers to these.

Working on GCC, I wanted to add some new unit tests for my feature as I went, but I noticed that there is a good deal of friction involved. Right now, adding a new unit test requires writing the test method, then modifying a second place in the code to call said test method, repeating as necessary until getting all the way to either the selftest.c file or the target hook. There is also no way to do test setup/teardown automatically. Everything is manual.

I'd like to propose adding a dependency on a modern open-source unit testing framework as an enhancement to the current selftest system. I have used Catch2 (https://github.com/catchorg/Catch2, Boost Software License 1.0) with great success in the past. I experimented with adding it to GCC and converting a handful of tests to use Catch2. Although I only converted a small number of tests, I didn't see any performance impact during selftest. As a bonus, while doing so I found that one test I had written previously wasn't actually being run, because I had forgotten to call it manually.

Some nice things that Catch2 provides are better error reporting (see below for a comparison), ease of adding new tests (just include the header and write a TEST_CASE(), as opposed to the manual plumbing required right now), extension points for adding custom comparisons (I could see this being very useful to expand on the current rtl test macros), and the ability to run a subset of the tests without recompiling. It is also easy to integrate Catch2 with the existing selftest framework.

If this path seems useful to others, I'm happy to pursue it further. The work items I see are:

1. Convert more tests to verify the claim that build performance is not degraded.
2. Update the docs to list Catch2 as the new recommended way to write unit tests.
3. If all of the target self-tests are converted, remove the target test hook. The same applies to the lang test hook.

One thing that would make Catch2 an even more slam-dunk case would be if we were able to enable exceptions for checking builds. Then running the unit tests could report multiple failures at the same time instead of just aborting at the first one. That said, even without enabling exceptions, Catch2 is on par with the current selftest framework in terms of terminating at the first failure.

Another option is to use a test framework that doesn't use exceptions, such as Google Test (https://github.com/google/googletest, BSD 3-Clause "New" or "Revised" License). I personally think Catch2 is more flexible, or I would lead with Google Test.
For example, in Catch2, shared setup is done in place with the tests themselves, with each subtest being a nested SECTION, whereas in GTest you have to write a test class that derives from ::testing::Test and overrides SetUp(). In addition, the sections in Catch2 can be nested further, allowing several related tests to build on each other (a small sketch follows after the sample output below).

Here is some sample output for the case where all the tests are passing:

  ===============================================================================
  All tests passed (25 assertions in 5 test cases)

And here is the output when a test fails:

  ~~~ is a Catch v2.9.2 host application.
  Run with -? for options

  -------------------------------------------------------------------------------
  test_set_range
  -------------------------------------------------------------------------------
  ../../gcc/bitmap.c:2661
  ...............................................................................

  ../../gcc/bitmap.c:2668: FAILED:
    REQUIRE( 6 == bitmap_count_bits (b) )
  with expansion:
    6 == 5

  Catch will terminate because it needed to throw an exception.
  The message was: Test failure requires aborting test!

  terminate called without an active exception

  ../../gcc/bitmap.c:2668: FAILED:
    {Unknown expression after the reported line}
  due to a fatal error condition:
    SIGABRT - Abort (abnormal termination) signal

  ===============================================================================
  test cases: 2 | 1 passed | 1 failed
  assertions: 5 | 3 passed | 2 failed
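To make the SECTION mechanism described above concrete, here is a minimal, self-contained sketch. It assumes only the single-header catch.hpp distribution and a C++11 host compiler; it is an illustration, not code taken from the GCC conversion:

  // Minimal illustration of Catch2 SECTIONs; not part of the GCC patch.
  #define CATCH_CONFIG_MAIN   // let Catch2 provide main(); GCC would instead
                              // call Catch::Session().run() from selftest
  #include "catch.hpp"

  #include <vector>

  TEST_CASE ("std::vector growth")
  {
    std::vector<int> v (5);   // shared setup: re-run for every SECTION below

    SECTION ("resize grows both size and capacity")
    {
      v.resize (10);
      REQUIRE (v.size () == 10);
      REQUIRE (v.capacity () >= 10);

      SECTION ("nested sections see the outer section's state")
      {
        v.push_back (42);
        REQUIRE (v.size () == 11);
      }
    }

    SECTION ("reserve grows capacity but not size")
    {
      v.reserve (10);
      REQUIRE (v.size () == 5);
      REQUIRE (v.capacity () >= 10);
    }
  }

The setup at the top of the TEST_CASE runs once per SECTION, so each subtest starts from a fresh vector without any fixture class.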
RE: GCC selftest improvements
> From: David Malcolm
> Sent: Thursday, October 24, 2019 11:18 PM
>
> On Thu, 2019-10-24 at 20:50 +0000, Andrew Dean via gcc wrote:
>
> Thanks for your email, it looks interesting. Is your code somewhere we
> can see it?

It can be -- what is the preferred way to share the code? Though to be honest, I can summarize the changes pretty quickly:

1. Add catch.hpp (the single-include header from the Catch2 project) and a small wrapper header around catch.hpp that temporarily fixes up some macros that GCC defines to replace C library functions, and does nothing if !CHECKING_P.

2. Modify test methods like so:

   - void test_this_thing ()
   + TEST_CASE ("test this thing")

   and

   - ASSERT_EQ (a, b);
   + REQUIRE (a == b);

3. Invoke the Catch2 test runner during selftest like so (a fuller sketch appears at the end of this message):

   Catch::Session ().run ();

4. Remove the manual invocations of the test methods, as the TEST_CASE macro takes care of self-registration.

> I think the consensus during review was that I was over-engineering
> things, and that each iteration from v1 to v8 made the code simpler and
> involved less C++ "magic", closer to C. Whether that's still the
> consensus I don't know. Different people within the GCC dev community
> have different comfort levels with C++, and my initial version (using
> gtest) was probably too "magic" for some. Maybe people here are more
> comfortable with C++ now?

Here's hoping! It looks like you had a very similar starting point to what I suggested here.

> GCC has some rather unique requirements, in that we support a great
> many build configurations, some of which are rather primitive - for
> example, requiring just C++98 with exceptions disabled, in that we want
> to be able to be bootstrappable on relatively "ancient" configurations.
> IIRC auto-registration of tests requires that the build configuration
> have a sufficiently sane implementation of C++ - having globals with
> non-trivial ctors tends to be problematic when dealing with early
> implementations of C++.

Is C++98 the limit of what we can use in GCC? If so, that immediately eliminates Catch v1 (C++03), Catch2 (C++11 and later) and GTest (C++11).

> Personally I don't find the manual registration of tests to be a pain,
> but it would be nice to have more readable errors on failures. There's
> probably a case for more verbose test output. (generally I immediately
> just do "make selftest-gdb" on failures; the issue is if it suddenly
> starts failing on a build I don't have access to)

I didn't know about selftest-gdb. That will come in handy. My ideal programming style is to write a new test, watch it fail for the expected reason, then write the production code to make it pass. Having to attach a debugger to validate/investigate failures slows down the process, as does having to write the additional code to invoke the new test methods, if only by a little bit.

> I suspect that exceptions would be a deal-breaker; does Catch2 support
> -fno-exceptions?

Yes, Catch2 supports -fno-exceptions, though not in the way GTest does, which was built to not use exceptions at all. With -fno-exceptions, Catch2 stops running tests at the first failure and gives the output shown in the original email.

> As for setup/teardown, I've been able to do that "manually" using
> RAII-style classes in test functions.

Yes, I have added some RAII classes to assist in testing as well. I just think it would be even better if it were easier to do.
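For reference, here is roughly what step 3 could look like in code. This is a hedged sketch under a few assumptions, not the actual patch: "catch-wrapper.h" is the hypothetical wrapper header from step 1, and the real selftest::run_tests in selftest-run-tests.c does more than what is shown here:

  /* A sketch only, not the actual patch: run the self-registered Catch2
     test cases from GCC's existing selftest entry point when checking
     is enabled.  "catch-wrapper.h" is the hypothetical wrapper around
     catch.hpp from step 1 above.  */

  #include "config.h"
  #include "system.h"
  #include "coretypes.h"
  #include "selftest.h"

  #if CHECKING_P
  #include "catch-wrapper.h"

  void
  selftest::run_tests ()
  {
    /* Every TEST_CASE registers itself at startup, so a single call runs
       them all; a non-zero return means at least one test failed.  */
    if (Catch::Session ().run () != 0)
      abort ();
  }
  #endif /* CHECKING_P */

Because registration happens through the TEST_CASE macro, step 4 (deleting the hand-written calls to each test function) falls out naturally once this runner is in place.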
How to properly build and run testsuite?
I'm curious what other people are doing, because I'm never able to match the results that get reported to the test-results list. I created a brand new virtual machine running Ubuntu 18.04 (x86_64), installed the prerequisites as listed here: https://gcc.gnu.org/install/prerequisites.html, created the repo following the "Getting Started - Read Only" instructions listed here: https://gcc.gnu.org/wiki/GitMirror, then ran these commands from my build folder:

  configure --disable-multilib --prefix=/home/adean/install
  make
  make check -k

As an example, the gcc summary for me (10.0.0 20191120) shows:

  # of unexpected failures        85
  # of unexpected successes       35

whereas the most recent reported results (10.0.0 20191118) show only 2 unexpected failures and no unexpected successes in the gcc summary. Is it really just because I'm two days newer that ~120 regressions entered the picture (unlikely), or am I doing something wrong on my machine?

Thanks,
Andrew
RE: How to properly build and run testsuite?
> > Whereas the most recent reported results (10.0.0 20191118) show only
> > 2 unexpected failures and no unexpected successes in the gcc summary.
>
> Which results are you looking at?
>
> Two failures sounds very low; it's probably not running the guality
> tests, which usually fail.

I searched the mailing list for x86_64-pc-linux-gnu to make sure I was comparing apples to apples, and this was the most recent report: https://gcc.gnu.org/ml/gcc-testresults/2019-11/msg01190.html
RE: How to properly build and run testsuite?
> > > > Whereas the most recent reported results (10.0.0 20191118) show
> > > > only 2 unexpected failures and no unexpected successes in the gcc
> > > > summary.
> > >
> > > Which results are you looking at?
> > >
> > > Two failures sounds very low; it's probably not running the guality
> > > tests, which usually fail.
> >
> > I searched the mailing list for x86_64-pc-linux-gnu to make sure I was
> > comparing apples to apples, and this was the most recent report:
> > https://gcc.gnu.org/ml/gcc-testresults/2019-11/msg01190.html
>
> Yes, I thought you might be looking at that one. That has:
>
>   # of unsupported tests          7126
>
> which seems high. Here's a more typical set of results:
> https://gcc.gnu.org/ml/gcc-testresults/2019-11/msg01255.html
>
> The one you looked at is built on an EC2 instance, so might be missing
> something (GDB?) needed for the guality tests.
>
> So I don't think you're doing anything wrong, you just got unlucky and
> looked at one which skips most of the tests that are failing for you.

Thanks for your help!
RE: GCC selftest improvements
> > Many systems do not have a system compiler newer than this *four
> > years old* one. GCC 4.8 is the first GCC version that supports all of
> > C++11, which is the only reason it would be even near acceptable to
> > require something this *new*.
>
> Agreed. Note we're even shipping new service packs for SLE12 which has
> that "ancient" compiler version (OTOH there _is_ a fully supported GCC 9
> available for SLE12 as well).
>
> So, if we want C++11 then fine. But requiring GCC 9+ isn't going to fly.
> IIRC GCC 6 is the first having -std=c++14 by default, but unless there's
> a compelling reason to use C++14 in GCC I'd rather not do it at this
> point.
>
> Removing all the workarounds in the tree we have for GCC 4.[12].x would
> of course be nice.
>
> But I have to update the testers that still use GCC 4.1.x as host
> compiler :P
>
> Richard.
>
> > Segher

Richard/Segher: Are we in agreement that we can move forward with updating to C++11 as the minimum version? I have made the simple change locally to modify the flag and verified that I got the exact same test results with and without the change. I can look into the work to add a configuration warning if the compiler doesn't support C++11, but wanted to make sure we are on the same page before doing so.
How to test aarch64 when building a cross-compiler?
Based on https://www.gnu.org/software/hurd/hurd/glibc.html, I'm using glibc/scripts/build-many-glibcs.py targeting aarch64-linux-gnu like so:

  build-many-glibcs.py build_dir checkout --keep all
  build-many-glibcs.py build_dir host-libraries --keep all -j 12
  build-many-glibcs.py build_dir compilers aarch64-linux-gnu --keep all -j 12 --full-gcc
  build-many-glibcs.py build_dir glibcs aarch64-linux-gnu --keep all -j 12

This completes successfully. However, when I then try to run the gcc tests like so:

  runtest --outdir . --tool gcc --srcdir /path/to/gcc/gcc/testsuite aarch64.exp --target aarch64-linux-gnu --target_board aarch64-sim --tool_exec /path_to/build_dir/install/compilers/aarch64-linux-gnu/bin/aarch64-glibc-linux-gnu-gcc --verbose -v

I get errors like this:

  aarch64-glibc-linux-gnu-gcc: fatal error: cannot read spec file 'rdimon.specs': No such file or directory

I can see that the rdimon.specs flag is added based on this line in aarch64-sim.exp:

  set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags] -specs=rdimon.specs"

I've tried searching for how to address this, but so far unsuccessfully. Does anybody know what I'm missing here?

Thanks,
Andrew
RE: [EXTERNAL] Re: How to test aarch64 when building a cross-compiler?
> > This completes successfully. However, when I then try to run the gcc
> > tests like so:
> >
> >   runtest --outdir . --tool gcc --srcdir /path/to/gcc/gcc/testsuite
> >   aarch64.exp --target aarch64-linux-gnu --target_board aarch64-sim
> >   --tool_exec
> >   /path_to/build_dir/install/compilers/aarch64-linux-gnu/bin/aarch64-glibc-linux-gnu-gcc
> >   --verbose -v
> >
> > I get errors like this:
> >
> >   aarch64-glibc-linux-gnu-gcc: fatal error: cannot read spec file
> >   'rdimon.specs': No such file or directory
> >
> > I can see that the rdimon.specs flag is added based on this line in
> > aarch64-sim.exp:
>
> Where does aarch64-sim.exp come from?

/usr/share/dejagnu/baseboards/aarch64-sim.exp

> >   set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]
> >   -specs=rdimon.specs"
>
> I think this is for baremetal/newlib targets, i.e. aarch64-elf, not for
> aarch64-linux-gnu.

Hmm... build-many-glibcs.py doesn't like either aarch64-elf or aarch64-linux-elf; I get a KeyError in build_compilers and build_glibcs when it tries to look up the config with either of those values.
RE: [EXTERNAL] Re: How to test aarch64 when building a cross-compiler?
> > > > > I get errors like this:
> > > > >
> > > > >   aarch64-glibc-linux-gnu-gcc: fatal error: cannot read spec file
> > > > >   'rdimon.specs': No such file or directory
> > > > >
> > > > > I can see that the rdimon.specs flag is added based on this line
> > > > > in aarch64-sim.exp:
> > > >
> > > > Where does aarch64-sim.exp come from?
> > >
> > > /usr/share/dejagnu/baseboards/aarch64-sim.exp
> > >
> > > > >   set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]
> > > > >   -specs=rdimon.specs"
> > > >
> > > > I think this is for baremetal/newlib targets, i.e. aarch64-elf,
> > > > not for aarch64-linux-gnu.
>
> Yes, -specs=rdimon.specs and other such flags are for use only on
> bare-metal targets.
>
> > > Hmm... build-many-glibcs.py doesn't like either aarch64-elf or
> > > aarch64-linux-elf; I get a KeyError in build_compilers and
> > > build_glibcs when it tries to look up the config with either of
> > > those values.
> >
> > Unfortunately build-many-glibcs.py does not have support for
> > bare-metal builds yet (since it is a tool created to build a
> > cross-compiling toolchain using glibc).
>
> And glibc doesn't work bare-metal...
>
> regards
> Ramana

I guess that means that the DejaGnu baseboard "aarch64-sim" is only meant for bare-metal testing? How would one build/test GCC hosted on x86_64 and targeting aarch64 then? Is there a different simulator approach I should be using?
RE: [EXTERNAL] Re: How to test aarch64 when building a cross-compiler?
> On 11/25/19 2:43 PM, Andrew Dean via gcc wrote:
> > > > > > > I get errors like this:
> > > > > > >
> > > > > > >   aarch64-glibc-linux-gnu-gcc: fatal error: cannot read spec
> > > > > > >   file 'rdimon.specs': No such file or directory
> > > > > > >
> > > > > > > I can see that the rdimon.specs flag is added based on this
> > > > > > > line in aarch64-sim.exp:
> > > > > >
> > > > > > Where does aarch64-sim.exp come from?
> > > > >
> > > > > /usr/share/dejagnu/baseboards/aarch64-sim.exp
> > > > >
> > > > > > >   set_board_info ldflags "[libgloss_link_flags] [newlib_link_flags]
> > > > > > >   -specs=rdimon.specs"
> > > > > >
> > > > > > I think this is for baremetal/newlib targets, i.e. aarch64-elf,
> > > > > > not for aarch64-linux-gnu.
> > >
> > > Yes, -specs=rdimon.specs and other such flags are for use only on
> > > bare-metal targets.
> > >
> > > > > Hmm... build-many-glibcs.py doesn't like either aarch64-elf or
> > > > > aarch64-linux-elf; I get a KeyError in build_compilers and
> > > > > build_glibcs when it tries to look up the config with either of
> > > > > those values.
> > > >
> > > > Unfortunately build-many-glibcs.py does not have support for
> > > > bare-metal builds yet (since it is a tool created to build a
> > > > cross-compiling toolchain using glibc).
> > >
> > > And glibc doesn't work bare-metal...
> > >
> > > regards
> > > Ramana
> >
> > I guess that means that the DejaGnu baseboard "aarch64-sim" is only
> > meant for bare-metal testing? How would one build/test GCC hosted on
> > x86_64 and targeting aarch64 then? Is there a different simulator
> > approach I should be using?
>
> I've used qemu for this kind of testing. In my environment I have root
> filesystems with native binaries/libraries. I can just chroot into those
> filesystems and qemu handles everything.
>
> In theory one wouldn't even need to chroot into the filesystems if you
> set the library paths right.
>
> jeff

Thanks, Jeff. qemu did the trick. Specifically, I did the following:

1. sudo apt-get install qemu-user-static
2. export LD_LIBRARY_PATH=${BuildRoot}/install/glibcs/aarch64-linux-gnu/lib64:${BuildRoot}/install/compilers/aarch64-linux-gnu/aarch64-glibc-linux-gnu/lib64
3. sudo ln -s ${BuildRoot}/install/glibcs/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1 /lib/ld-linux-aarch64.so.1
4. Run the gcc tests as previously described.
5. Remove the symlink.
6. Restore the previous value of LD_LIBRARY_PATH.

There are still a few test failures that we will need to investigate, but this was a huge leap forward.