Re: gcc-cvs mails for personal/vendor branches for merge commits
Joseph Myers wrote:
> On Wed, 15 Jan 2020, Jason Merrill wrote:
>> On 1/15/20 9:56 AM, Joseph Myers wrote:
>>> On Wed, 15 Jan 2020, Jakub Jelinek wrote:
>>> Or, if that is not possible, disable gcc-cvs mail for vendor and private branches altogether?
>>
>> I think this is desirable. gcc-cvs should only mail about changes to master and release branches.
>
> I think commit mails for changes to all branches are desirable (including refs/heads/devel/ branches, including user and vendor branches) - but they should only be for changes that are new to the repository. Existing practice in SVN was that all branches generated mails, we simply didn't have so many branches.

For user/vendor branches that are rebased (a desirable action), is there any mechanism for excluding unchanged rebased commits from being seen as “new”?

I’m guessing that public development branches will probably gravitate to the no-non-FF mode, if they are to be used by people other than the primary author .. although that does somewhat limit things; rebasing WIP onto trunk and reorganising / squashing is useful as well.

Iain
Re: gcc-cvs mails for personal/vendor branches for merge commits
Joel Brobecker wrote:

I think it's desirable for development that *happens on* the personal and vendor branches to be visible in gcc-cvs - that is different from things getting merged into them. Likewise for the refs/heads/devel/* development branches - non-fast-forward pushes are not permitted there, but such branches can expect to have lots of merges from master, and it's the actual development taking place *on* the branches - the new commits - that is of interest to see on gcc-cvs, not the merging of existing commits.

Would it be sufficient to say that some branches would only trigger a summary email, but not individual commit emails? The downside is that you would not be getting the "diff" for commits that are really completely new. But on the other hand, it would fit better with the fact that user branches could have frequent re-basing, thus causing the same commit email being sent over and over at each rebase operation. It would also answer the issue of the number of emails being sent when people are doing a merge which brings in more commits than the max-emails number.

AFAIU, we have access to more fine-grained information; isn’t it possible to differentiate “new” commits from ‘merges’ and from ‘rebases’? (Because a ‘new’ commit does not have the extra fields set up for merges and rebases.)

For example, I’d like to know that user/fred has rebased the branch I’m interested in, but OTOH would not find the per-commit mails useful (so a summary there is good). If a push contains a combination of things - new work, merged and rebased commits - then there would have to be some way to split or aggregate the messages / diffs per commit id.

Iain
Re: gcc-cvs mails for personal/vendor branches for merge commits
Joel Brobecker wrote:

AFAIU, we have access to more fine-grained information; isn’t it possible to differentiate “new” commits from ‘merges’ and from ‘rebases’? (because a ‘new’ commit does not have the extra fields set up for merges and rebases).

In my opinion, this would create a lot of complication for the benefits being gained. I also think that the more variations of behaviors you introduce, the harder it becomes for people to know what's right and what's not expected. People then start getting surprised and start asking about it. At best, it's just a quick answer, but in some cases, it takes time to remember why we set things up the way they are and why it doesn't make sense to change it. Over the years, I have really learnt to enjoy the benefits of consistency, even if it means some areas are suboptimal. The "suboptimality" can still be a better compromise overall than a superengineered system.

Spamming the list with emails every time someone merges master to their development branch sounds highly suboptimal, and likely to lead to disabling email entirely for those branches. Is it so complicated to send a single email for a merge commit or non-fast-forward push?

Well, no. I was going to say that this is what I have been proposing all along, except the way you phrased your suggestion above makes me think that perhaps you want something more automatic, where the hooks decide dynamically, rather than the choice being made by configuration. So it's not exactly the same, but quite similar in spirit. I think we can find ways that will satisfy the need for fewer emails without having to have that extra logic, though.

Also, you guys have to understand that you are all coming to me from multiple directions at the same time, and making requests that are not always easy to reconcile. I do completely understand that getting hundreds of emails because of a merge into a development branch is far from optimal, and it's absolutely not what I am suggesting we do here. In fact, you'll see that I told Joseph in a separate mail that I will think this over and try to come up with something that answers the situation he described. What I am alerting people to is the cost of trying to have special-case handling for every scenario we can conceive.

For my part, I was not trying to specify “requirements” - but to identify different scenarios that we might want to handle (better to decide how we want to do things at the start, than to create changes later). As a general rule, I’m 100% in favour of KISS.

I'm wondering if we wouldn't be better off having this discussion live over a meeting or a series of meetings…

could be,
Iain
Re: GCC 8.4 Release Candidate available from gcc.gnu.org
Jakub Jelinek wrote:

The first release candidate for GCC 8.4 is available from

https://gcc.gnu.org/pub/gcc/snapshots/8.4.0-RC-20200226/
ftp://gcc.gnu.org/pub/gcc/snapshots/8.4.0-RC-20200226/

and shortly its mirrors. It has been generated from git commit r8-10091-gf80c40f93f9e8781b14f1a8301467f117fd24051.

I have so far bootstrapped and tested the release candidate on x86_64-linux and i686-linux. Please test it and report any issues to bugzilla. If all goes well, I'd like to release 8.4 on Wednesday, March 4th.

I have built and tested this revision on i686-darwin9,10, powerpc-darwin9, x86_64-darwin10 .. 19. Test results look nominal - despite recent work, there’s still quite a way to go, especially with the sanitizer output.

thanks
Iain
Successful bootstrap and test for GCC8.4.0 on Darwin platforms.
General notes:

* GCC on Darwin depends on the installed “binutils”, typically provided by a version of “Xcode” or “Xcode command line tools”. Unless noted otherwise, the bootstrap sequences here make use of the last available Xcode command line tools and SDK for the platform version.
* Two bootstrap and test sequences;
  1) Using a version of GCC+Ada as the bootstrap compiler, allowing a build and test for Ada.
  2) Using the Apple GCC or clang that relates to the installed command line tools.
* In some cases, particularly with the earlier versions, the installed ‘atos’ is not really compatible with the versions of libsanitizer that are built with GCC8. In that case, it’s usually better to install a version of llvm-symbolizer and set ASAN_SYMBOLIZER_PATH=/path/to/llvm-symbolizer.
* I don’t have enough physical hardware to cover all of the versions directly, so some are (macOS hosted) VMs using VirtualBox. In these cases, timeouts are not necessarily significant (noted in the relevant results).

cheers
Iain

---

Darwin19 (macOS 10.15) - xcode 11.4b3 command line tools and SDK was current
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556866.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556867.html

Darwin18 (10.14)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556864.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556865.html

Darwin17 (10.13) - here I have used XC 9.4, to avoid the errors caused by the deprecation of m32 flagged by later tools (the 32b multilib still works fine here).
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556862.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556863.html

Darwin16
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556860.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556861.html

Darwin15
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556858.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556859.html

Darwin14
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556856.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556857.html

Darwin13
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556853.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556854.html

Darwin12
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556851.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556852.html

Darwin11
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556848.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556849.html

Darwin10 (X86_64)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556846.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556847.html

Darwin10 (i686)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556843.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556844.html

Darwin9 (i686)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556839.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556841.html

Darwin9 (powerpc64)
powerpc64-darwin9 will not bootstrap with the system ld64 (the binary size now exceeds the [buggy] branch island limit in that tool). It is possible to make the bootstrap with an updated ld64 with that limit fixed, and using -Os for the stage1 compiler options. There are several public branches with fixed ld64 versions available - I used a version of https://github.com/iains/darwin-xtools
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556845.html

Darwin9 (powerpc)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556840.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556842.html

Darwin8 (i686, powerpc)
Darwin8 cannot be built with the last release of Xcode for the system (2.5); at least ld64-85.2.1 is needed. The results here were obtained with the cctools/ld64 sources from the Xcode 3.1.4 source release.
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556837.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556838.html

+ two small patches:

(1) for libstdc++

diff --git a/libstdc++-v3/configure.host b/libstdc++-v3/configure.host
index 155a3cdea1b..d16d9519e43 100644
--- a/libstdc++-v3/configure.host
+++ b/libstdc++-v3/configure.host
@@ -240,11 +240,6 @@ case "${host_os}" in
   darwin8 | darwin8.* )
     # For 8+ compatibility is better if not -flat_namespace.
     OPT_LDFLAGS="${OPT_LDFLAGS} -Wl,-single_module"
-    case "${host_cpu}" in
-      i[34567]86 | x86_64)
-        OPTIMIZE_CXXFLAGS="${OPTIMIZE_CXXFLAGS} -fvisibility-inlines-hidden"
-        ;;
-    esac
     os_include_dir="os/bsd/darwin"
     ;;
   darwin*)

(2) for Ada

diff --git a/gcc/ada/adaint.c b/gcc/ada/adaint.c
index 41434655865..74f843f2907 100644
--- a/gcc/ada/adaint.c
+++ b/gcc/ada/adaint.c
@@ -2351,7 +2351,10 @@ __gnat_number_of_cpus (void)
 #if defined (__linux__) || defined (__sun__) || defined (_AIX) \
   || defined (__A
Successful bootstrap and test for GCC9.3.0 on Darwin platforms.
General notes:

* GCC on Darwin depends on the installed “binutils”, typically provided by a version of “Xcode” or “Xcode command line tools”. Unless noted otherwise, the bootstrap sequences here make use of the last available Xcode command line tools and SDK for the platform version.
* Apple gcc-4.2.1 will no longer bootstrap GCC9+; the built stage1 compiler fails self-test with memory management errors. Given that earlier GCC versions produce successful bootstraps on other platforms, I’m not pursuing fixes to the gcc-4.2.1 sources.
* Two bootstrap and test sequences (where possible);
  1) Using a version of GCC+Ada as the bootstrap compiler, allowing a build and test for Ada.
  2) For Darwin11+, using the Apple clang that relates to the installed command line tools.
* In some cases, particularly with the earlier versions, the installed ‘atos’ is not really compatible with the versions of libsanitizer that are built with GCC8. In that case, it’s usually better to install a version of llvm-symbolizer and set ASAN_SYMBOLIZER_PATH=/path/to/llvm-symbolizer.
* I don’t have enough physical hardware to cover all of the versions directly, so some are (macOS hosted) VMs using VirtualBox. In these cases, timeouts are not necessarily significant (noted in the relevant results).

cheers
Iain

---

Darwin19 (macOS 10.15) - xcode 11.4b3 command line tools and SDK was current
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556983.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556984.html

Darwin18 (10.14)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556981.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556982.html

Darwin17 (10.13) - here I have used XC 9.4, to avoid the errors caused by the deprecation of m32 flagged by later tools (the 32b multilib still works fine here).
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556979.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556980.html

Darwin16
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556977.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556978.html

Darwin15
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556975.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556976.html

Darwin14 (VM)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556973.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556974.html

Darwin13 (VM)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556971.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556972.html

Darwin12 (VM)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556969.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556970.html

Darwin11 (VM)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556967.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556968.html

Darwin10 (X86_64)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556966.html

Darwin10 (i686)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556964.html

Darwin9 (i686)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556961.html

Darwin9 (powerpc64)
powerpc64-darwin9 will not bootstrap with the system ld64 (the binary size now exceeds the [buggy] branch island limit in that tool). It is possible to make the bootstrap with an updated ld64 with that limit fixed, and using -Os for the stage1 compiler options. There are several public branches with fixed ld64 versions available - I used a version of https://github.com/iains/darwin-xtools
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556963.html

Darwin9 (powerpc)
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556962.html

Darwin8 (i686, powerpc)
Darwin8 cannot be built with the last release of Xcode for the system (2.5); at least ld64-85.2.1 is needed. The results here were obtained with the cctools/ld64 sources from the Xcode 3.1.4 source release.
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556959.html
https://gcc.gnu.org/pipermail/gcc-testresults/2020-March/556960.html

+ two small patches:

(1) for libstdc++

diff --git a/libstdc++-v3/configure.host b/libstdc++-v3/configure.host
index 155a3cdea1b..d16d9519e43 100644
--- a/libstdc++-v3/configure.host
+++ b/libstdc++-v3/configure.host
@@ -240,11 +240,6 @@ case "${host_os}" in
   darwin8 | darwin8.* )
     # For 8+ compatibility is better if not -flat_namespace.
     OPT_LDFLAGS="${OPT_LDFLAGS} -Wl,-single_module"
-    case "${host_cpu}" in
-      i[34567]86 | x86_64)
-        OPTIMIZE_CXXFLAGS="${OPTIMIZE_CXXFLAGS} -fvisibility-inlines-hidden"
-        ;;
-    esac
     os_include_dir="os/bsd/darwin"
     ;;
   darwin*)

(2) for Ada

diff --git a/gcc/ada/adaint.c b/gcc/ada/adaint.c
index 41434655865..74f843f2907 100644
--- a/gcc/ada/adaint.c
+++ b/gcc/ada/adaint.c
@@ -2351,7 +2351,10 @@ __gnat_number_of_cpus (void)
 #if defined (__linux__) || defined (__sun__) || defined (_AIX) \
   |
Re: -stdlib=libc++?
unlvsur unlvsur via Gcc wrote: I think this would be great to support LLVM’s libc++ by be compatible with -stdlib=libc++ on clang. I have a patch for this, for next stage 1. (we are in stage 4 now, so not the right time for new features). thanks Iain
Re: Versions of Perl on GCC Prerequisites page
Thomas Koenig via Gcc wrote:

Now I remember - it was PR82856 which prompted this change (and I put in the wrong version number :-). Looking back at that PR, the upper level of Perl as a requirement can probably be lifted. I would still prefer a test with --enable-maintainer-mode, to check that the original bug has actually disappeared.

For the record, the minimum version of perl in current use on Darwin is 5.8.6 on Darwin8 (the earliest version that is still buildable for master and other open branches).

thanks
Iain
Function signatures in extern "C".
Hi,

g++.dg/abi/guard3.C has:

extern "C" int __cxa_guard_acquire();

which might not be a suitable declaration, depending on how the ‘extern “C”’ is supposed to affect the function signature generated.

If the extern "C" should make this parse as a “K&R” style function - then the TYPE_ARG_TYPES should be NULL (and the testcase is OK).

However, we are parsing the decl as int __cxa_guard_acquire(void) (i.e. C++ rules on the empty parens), which makes the testcase not OK. This means that the declaration is now misleading (and it’s just luck that expand_call happens to count the length of the TYPE_ARG_TYPES list without looking to see what the types are) - in this case it happens to work out from this luck - since there’s only one arg, so the length of the void args list agrees with what we want.

——

So .. the question is “which is wrong, the test-case or the assignment of the TYPE_ARG_TYPES”?

[we can’t easily diagnose this at this point, but I do have a patch to diagnose the case where we pass a void-list to expand_call and then try to expand a call to the callee with an inappropriate set of parms]

(it’s trivial to fix the test-case as extern "C" int __cxa_guard_acquire(__UINT64_TYPE__ *);, I guess)

thanks
Iain
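(As an aside, for anyone following along, the distinction is easiest to see with a contrived declaration - the names below are made up, not from the testsuite:)

  // C++: empty parens mean "takes no parameters", whatever the language linkage,
  // so both of these end up with a TYPE_ARG_TYPES list containing just 'void':
  extern "C" int f ();
  extern "C" int g (void);

  // C: 'int f ();' is an unprototyped (K&R-style) declaration; the parameter
  // types are unspecified, which GCC represents with a NULL TYPE_ARG_TYPES -
  // that is the parse the testcase would need for its call to be OK as written.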
Re: Function signatures in extern "C".
Jonathan Wakely via Gcc wrote:

On Sun, 6 Sep 2020 at 16:23, Iain Sandoe wrote:

g++.dg/abi/guard3.C has:

extern "C" int __cxa_guard_acquire();

which might not be a suitable declaration, depending on how the ‘extern “C”’ is supposed to affect the function signature generated. If the extern "C" should make this parse as a “K&R” style function - then the TYPE_ARG_TYPES should be NULL (and the testcase is OK). However, we are parsing the decl as int __cxa_guard_acquire(void) (i.e. C++ rules on the empty parens), which makes the testcase not OK.

That is the correct parse. Using extern "C" doesn't mean the code is C, it only affects mangling. It still has to follow C++ rules. In practice you can still link to the definition, because its name is just "__cxa_guard_acquire" irrespective of what parameter list is present in the declaration.

Linking isn’t the problem in this case. The problem is that we arrive at “expand_call” with a function decl that says f(void) .. and a call parameter list containing a pointer type. We happily pass the pointer in the place of the ‘void’ - because the code only counts the number of entries and there’s one - so it happens to work. .. that’s not true in the general case and for all calling conventions. (this is what I mean by it happening to work by luck, below).

This means that the declaration is now misleading (and it’s just luck that expand_call happens to count the length of the TYPE_ARG_TYPES list without looking to see what the types are) - in this case it happens to work out from this luck - since there’s only one arg, so the length of the void args list agrees with what we want.

——

So .. the question is “which is wrong, the test-case or the assignment of the TYPE_ARG_TYPES”?

[we can’t easily diagnose this at this point, but I do have a patch to diagnose the case where we pass a void-list to expand_call and then try to expand a call to the callee with an inappropriate set of parms]

(it’s trivial to fix the test-case as extern "C" int __cxa_guard_acquire(__UINT64_TYPE__ *);, I guess)

But PR 45603 is ice-on-invalid triggered by the incorrect declaration of __cxa_guard_acquire. So the incorrect declaration is what originally reproduced the bug, and "fixing" it would make the test useless.

Ah, OK.

It's probably worth adding a comment about that in the test.

Yes - that would help (will add it to my TODO).

Maybe the test should give a compile-time error and XFAIL, but fixing the declaration doesn't seem right.

I guess (because the code is invalid) there’s not much motivation to make it more robust - e.g. diagnose the mismatch in the call(s) synthesized to __cxa_guard_acquire. It seems we only try to build these function decl(s) once - lazily - so that a wrong one will persist for the whole TU (and we don’t seem to check that the decl matches the itanium ABI - perhaps that’s intentional, tho).

cheers
Iain
Re: Function signatures in extern "C".
Nathan Sidwell wrote: GCC has an extension on machaines with cxx_implicit_extern_c (what used to be !NO_IMPLICIT_EXTERN_C). On such targets we'll treat 'extern "C" void Foo ()' as-if the argument list is variadic. (or something approximating that) perhaps that is confusing things? maybe that’s the underlying reason for failing to diagnose the wrong code. On 9/6/20 4:43 PM, Iain Sandoe wrote: Jonathan Wakely via Gcc wrote: On Sun, 6 Sep 2020 at 16:23, Iain Sandoe wrote: g++.dg/abi/guard3.C has: extern "C" int __cxa_guard_acquire(); Which might not be a suitable declaration, depending on how the ‘extern “C”’ is supposed to affect the function signature generated. IF, the extern C should make this parse as a “K&R” style function - then the TYPE_ARG_TYPES should be NULL (and the testcase is OK). However, we are parsing the decl as int __cxa_guard_acquire(void) (i.e. C++ rules on the empty parens), which makes the testcase not OK. That is the correct parse. Using extern "C" doesn't mean the code is C, it only affects mangling. It still has to follow C++ rules. In practice you can still link to the definition, because its name is just "__cxa_guard_acquire" irrespective of what parameter list is present in the declaration. Linking isn’t the problem in this case. The problem is that we arrive at “expand_call” with a function decl that says f(void) .. and a call parmeter list containing a pointer type. We happily pass the pointer in the place of the ‘void’ - because the code only counts the number of entries and there’s one - so it happens to work. .. that’s not true in the general case and for all calling conventions. that is, “expand_call” does not expect to have to handle the case that the compiler is telling it conflicting information. AFAICT, that’s reasonable, I was unable to find a way to write normal user code [at least, C-family] that the compiler would accept producing this set of conditions (it seems that cases in this category have to be generated by the compiler internally). But PR 45603 is ice-on-invalid triggered by the incorrect declaration of __cxa_guard_acquire. So the incorrect declaration is what originally reproduced the bug, and "fixing" it would make the test useless. Ah OK. So, IIUC we’ve replaced an ICE-on-invalid with an “accepts invalid”, it seems? It's probably worth adding a comment about that in the test. Yes - that would help (will add it to my TODO). Perhaps the PR should be reopened with “accepts invalid”? thanks Iain
Re: Function signatures in extern "C".
Jonathan Wakely via Gcc wrote: On Mon, 7 Sep 2020 at 09:18, Iain Sandoe wrote: Perhaps the PR should be reopened with “accepts invalid”? My impression from the PR is that the reporter was using a different ABI, where the name isn't reserved. Maybe the testcase should only be accepted with -fno-threadsafe-statics or -ffreestanding or something to say "I'm doing things differently". Or we could just say that G++ reserves the Itanium ABI names unconditionally, even if it doesn't need to use them, in which case it would be accepts-invalid. Well, it’s a name in the implementation reserved namespace. The majority of GCC platforms executing this test will cause the compiler to generate a call to the function (and that call will have mismatched params). Not sure how many non-itanium ABI platforms we have at present. We say nothing for "-Wall -Wextra -pedantic" (In the end, I don’t have much of an axe to grind here - this fail came up when I added diagnostic code to expand_call to catch cases like this emitted accidentally by the Fortran FE). it seemed worth commenting at least. cheers Iain
Symbol + Constant output.
Hi Uros,

I was looking into the test fails for the new keylocker-* testcases.

Many are because of missing “_” (which seems to happen more often than not). These I can fix trivially.

But some are because we have:

name+constant(%rip) being emitted on Linux, and
constant+name(%rip) being emitted on Darwin.

——

The reason is that Darwin is always PIC and so outputs

(const:DI (plus:DI (symbol_ref:DI ("h2") [flags 0x2] 0x755e3c60 h2>)
    (const_int 16 [0x10])))

using gcc/i386.c:output_pic_addr_const; Linux outputs the same thing using gcc/final.c:output_addr_const.

For the PLUS case, final.c says:
/* Some assemblers need integer constants to appear last (eg masm).  */

For the PLUS case, i386.c says:
/* Some assemblers need integer constants to appear first.  */

=

So .. I could make a really tedious patch to match the different forms in the keylocker tests for Darwin ..

.. but ISTM that maybe one of those comments is wrong / out of date - and the right thing would be to fix that.

Any insight welcome,
thanks
Iain
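(For context, a minimal source - hypothetical, not one of the keylocker tests - that gets such a symbol+constant operand would be something like the following; the comments describe the behaviour reported above:)

  /* A defined global, matching the "h2" symbol_ref in the RTX quoted above.  */
  char h2[32];

  char
  get16 (void)
  {
    /* The address is h2+16; per the above, it is printed as "h2+16(%rip)"
       via final.c:output_addr_const on Linux, and as "16+_h2(%rip)" via
       i386.c:output_pic_addr_const on Darwin (note the user label prefix).  */
    return h2[16];
  }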
Re: Symbol + Constant output.
Hi,

Hongyu Wang via Gcc wrote:

Maybe those scan-asm regexps are too strict and should be relaxed a bit.

I agree with this; since with -fPIC the code produced would be different, just using symbol + constant may be too strict. I think the scan-assembler could be reduced to

/* { dg-final { scan-assembler "(?:movdqu|movups)\[ \\t\]+\[^\n\]*%xmm1,\[^\n\r]*16" } } */

1. might I suggest using the {} form of quoting these regexes - it makes them much more readable … e.g. more like this:

{(?:movdqu|movups)[\t]+[^\n]*%xmm1,[^\n\r]*16}

2. I think there are some syntax errors in the regexes for these tests (because it’s very hard to check them when using the “” quotes).

"(?:movdqu|movups)\[ \\t\]+\[^\n\]*%xmm1,\[^\n\r]*16"
                                      …^ missing \ (in several places)

3. are you intending to update the tests?

As for the comments on the asm output:

1) it would seem that both comments can’t be correct (since they contradict!)
2) AFAICT, none of the assemblers I use has any issue with either order
3) perhaps there’s no assembler in use that cares any more
4) clang produces symbol+offset for that case on Darwin (i.e. the same as final.c).

thanks
Iain
Re: Symbol + Constant output.
Hi,

Hongyu Wang wrote:

3. are you intending to update the tests?

Yes - so could you tell me what "missing “_”" means? I have some trouble building the darwin target for now.

Darwin uses a USER_LABEL_PREFIX of ‘_’ (there are a small number of targets that do this). So public symbols begin with _.

In the case that you match like:

…. ^\n]*%xmm0[^\n\r]*k1

there’s no need to make any amendment (since the _ is covered by [^\n\r]).

If you need to match 16+k1 … then for targets using USER_LABEL_PREFIX, it would need to be 16+_?k1 (so that it matches _k1 for them and k1 for Linux). OK?

(if you want me to test a potential patch on Darwin, that’s also fine).

As for the comments on the asm output:

1) it would seem that both comments can’t be correct (since they contradict!)
2) AFAICT, none of the assemblers I use has any issue with either order
3) perhaps there’s no assembler in use that cares any more
4) clang produces symbol+offset for that case on Darwin (i.e. the same as final.c).

That means the i386.c part should align with final.c, but I can't make the decision, and I'm not sure if there is more failure in the x86 tests with this change.

Agreed - it would need wide testing, and is perhaps not urgent at this moment, but it would be nice to make things consistent; it helps with maintenance.

Iain
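(To make that concrete, an illustrative directive - the instruction alternatives are taken from the example above, the symbol handling is the point - that should match both the Linux and Darwin forms would look roughly like:)

  /* { dg-final { scan-assembler {(?:movdqu|movups)[ \t]+[^\n]*%xmm1,[ \t]*(?:_?k1\+16|16\+_?k1)} } } */

i.e. allow for the optional user label prefix and for either ordering of the symbol and the constant. As noted in the follow-up below, the tests were in the end simplified to match just the constant offset instead.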
Re: Symbol + Constant output.
Hi,

Hongyu Wang via Gcc wrote:

I've adjusted the testcase and now it only contains a constant offset, since with -fPIC the mov target address does not contain any symbol in the assembler. Could you help to check the attached changes on darwin and see if they all get passed?

LGTM. OK from a Darwin PoV,
thanks
Iain

make check-gcc-c RUNTESTFLAGS=i386.exp=keylocker*

=== gcc Summary for unix/-m64 ===
# of expected passes	120

=== gcc Summary for unix/-m32 ===
# of expected passes	120

=== gcc Summary ===
# of expected passes	240
Re: C++11 code in the gcc 10 branch
FX via Gcc wrote:

but later I am getting further errors:

../../gcc/gcc/config/darwin.c:1357:16: error: no viable conversion from 'poly_uint16' (aka 'poly_int<2, unsigned short>') to 'unsigned int'
  unsigned int modesize = GET_MODE_BITSIZE (mode);
               ^          ~~~
../../gcc/gcc/config/darwin.c:1752:28: error: invalid operands to binary expression ('poly_uint16' (aka 'poly_int<2, unsigned short>') and 'int')
      if (GET_MODE_SIZE (mode) == 8

1. confirmation that the C++11 code in aarch64-builtins.c is indeed a bug, and that a patch for it would be welcome
2. guidance about how to fix that next issue

You are missing a patch from master that converted darwin.c to use the “proper” handling of poly_uint16 instead of the workaround. 7ddee9cd99b, I think, is the one.

Iain
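(For reference, the poly_int-safe idioms look roughly like the following - this is a sketch of the general pattern only, not necessarily the exact content of that commit, and the trailing call is just a placeholder:)

  /* Where the mode is known to have a constant size (true for the modes
     darwin.c handles here), force the poly_uint16 to a constant ...  */
  unsigned int modesize = GET_MODE_BITSIZE (mode).to_constant ();

  /* ... and use the poly_int comparison helpers rather than '=='.  */
  if (known_eq (GET_MODE_SIZE (mode), 8))
    do_something ();  /* placeholder for the existing body */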
Re: GCC 10.2 Released
FX, Martin Liška wrote: On 12/23/20 11:49 AM, FX via Gcc wrote: Hi all, The gcc 10.2 release was 5 months ago today. A lot has happened in the gcc-10 branch since, in particular on aarch64. Could a new release be issued? It would make efforts at maintaining patches on top of the gcc-10 branch easier, in particular in view of the release of aarch64-apple-darwin machines. Hello. I understand your situation, but based on our release schedule, please expect 10.3 to be released at the beginning of March 2021, similarly to what we did for 9.3 and 8.3: GCC 9.3 release (2020-03-12) GCC 8.3 release (2019-02-22) as seen here: https://gcc.gnu.org/develop.html Note that making a release consumes some cycles mainly for release managers. Thanks for understanding, [FAOD I am not disagreeing with what you say, I personally have quite a backlog of backports for 10.x, not to mention fixes to make to coroutines that should also be applied there… ] Nevertheless: We have an unusual situation in that we have: - a major OS release that has incompatible numbering with the preceding ones. - a new architecture to support (which will not be ‘official’ in 10.x even if, by some miracle, it’s ready for 11). So I think it might be possible to help the Darwin ‘downstreams’ by making the equivalent of a “vendor” branches for Darwin - in this case [for open branches] based on some arbitrary point in time (e.g. 1.1.2021) rather than on a GCC dot release. For the closed branches that we want to be able to build on Darwin20, the base can be the last dot release. Those branches could live in users/darwin or vendors/darwin .. (we kinda started discussing that on irc one day) - I guess I don’t have a big axe to grind on which [but it’s probably better in one of those, than under my user or on github]. It would be clear that this is Darwin-specific [branch name amended, for example] (i.e. that test coverage was concentrated on the platform). This would not be something I’d want to make a habit of - as Martin says (even for regular maintainers) release cycles chew a lot of time and resources in wider testing. Open to other suggestions, of course, Iain
Re: about -stdlib=libc++ toggle. what about recognize v1 as libc++ if libstdc++ is installed at the same location with libc++?
unlvsur unlvsur wrote:

The -stdlib= option is an “enabling” change; as things stand, the packaging of the libc++ headers (and libc++ itself) needs intervention by the distribution (e.g. to add a coroutine header, or to install the library on linux). I don’t think ‘automagical’ adding of the option is appropriate (at least not yet).

Iain
Re: C++11 code in the gcc 10 branch
FX wrote:

When are you going to apply your fix that Richard S. approved on the 21st?

When I remember how to set up gcc’s git with write access, and remember how the new ChangeLog entries work. The times where I was a regular contributor were the CVS and SVN times.

I also wanted to ask approval to commit this diff below, fixing aarch64_get_extension_string_for_isa_flags()’s prototype to align it with the actual function definition:

diff --git a/gcc/config/aarch64/driver-aarch64.c b/gcc/config/aarch64/driver-aarch64.c
index 8840a2d9486c..d99834c99896 100644
--- a/gcc/config/aarch64/driver-aarch64.c
+++ b/gcc/config/aarch64/driver-aarch64.c
@@ -27,8 +27,7 @@
 #include "tm.h"
 
 /* Defined in common/config/aarch64/aarch64-common.c.  */
-std::string aarch64_get_extension_string_for_isa_flags (unsigned long,
-                                                        unsigned long);
+std::string aarch64_get_extension_string_for_isa_flags (uint64_t, uint64_t);
 
 struct aarch64_arch_extension
 {

Although I admit that’s almost trivial (and it breaks the build on aarch64-darwin), I’d prefer to be sure and ask. Then I’ll commit the two patches, if you think that’s OK.

If Richard approves the second patch (and you’re stuck for time) - then send me the patch(es) as attachments with the commit credits you want, and I can apply them for you.

(thanks for the fixes)
cheers & happy new year all,
Iain
Re: C++11 code in the gcc 10 branch
FX wrote: If Richard approves the second patch (and you’re stuck for time) - then send me the patch(es) as attachments with the commit credits you want, and I can apply them for you. Both patches only needed on gcc-10, if you can commit that’s great, many thanks. bootstrapped / smoke tested on aarch64-linux-gnu and x86_64-darwin, pushed as r10-9187 and r10-9188. Iain
Re: GCC 10.3 Released (successful bootstraps for Darwin versions)
Richard Biener wrote:

The GNU Compiler Collection version 10.3 has been released.

I believe that this is the best release for Darwin in some time; it includes
a) The first released version to support Darwin20 (macOS 11)
b) Fixes for some long-standing serious bugs affecting older OS versions

---

I have successfully bootstrapped the following configurations for Darwin:

i686-apple-darwin8 :
NOTEs:
1/ this requires at least the toolchain from xcode 3.1.4 to succeed (GCC 7.5 tested)
2/ two small patches for Ada and libstdc++
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681030.html

i686-apple-darwin9 : OK with xcode 3.1.4 or later GCC (5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681031.html

i686-apple-darwin10 : OK with xcode 3.2.6 or later GCC (5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681032.html

powerpc-apple-darwin9 : OK with xcode 3.1.4 or later GCC (5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681033.html

x86_64-apple-darwin10 : OK with xcode 3.2.6 or later GCC (5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681034.html

x86_64-apple-darwin11 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681035.html

x86_64-apple-darwin12 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681036.html

x86_64-apple-darwin13 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681037.html

x86_64-apple-darwin14 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681038.html

x86_64-apple-darwin15 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681039.html

x86_64-apple-darwin16 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681040.html

x86_64-apple-darwin17 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681041.html

x86_64-apple-darwin18 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681042.html

x86_64-apple-darwin19 [AVX512] : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681043.html

x86_64-apple-darwin19 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681044.html

x86_64-apple-darwin20 : OK with xcode clang or GCC (7.5 tested)
https://gcc.gnu.org/pipermail/gcc-testresults/2021-April/681045.html

thanks
Iain
Re: removing toxic emailers
Eric S. Raymond wrote:

Paul Koning via Gcc:

On Apr 14, 2021, at 4:39 PM, Ian Lance Taylor via Gcc wrote:

So we don't get the choice between "everyone is welcome" and "some people are kicked off the list." We get the choice between "some people decline to participate because it is unpleasant" and "some people are kicked off the list." Given the choice of which group of people are going to participate and which group are not, which group do we want?

My answer is "it depends". More precisely, in the past I would have favored those who decline because the environment is unpleasant -- with the implied assumption being that their objections are reasonable. Given the emergence of cancel culture, that assumption is no longer automatically valid.

I concur on both counts. You (the GCC project) are no longer in a situation where any random person saying "your environment is hostile" is a reliable signal of a real problem. Safetyism is being gamed by outsiders for purposes that are not yours and have nothing to do with shipping good code. Complaints need to be discounted accordingly, to a degree that would not have been required before the development of a self-reinforcing culture of complaint and rage-mobbing around 2014.

Responding to Ian’s original statement:

I am one of the people who would not be “here” if the environment was hostile. That is not a theoretical statement - I declined to contribute to one project already because of the hostility of the interactions. Although I love to be paid to work on GCC, the truth is that almost all my contributions are voluntary, and I would not choose to spend my spare time in a conflicted environment, period. For those of us who are ‘freelance’, these lists and the IRC channel are pretty much our workplace; it needs to be civilised (for me, anyway).

Responding in general to this part of the thread:

* The GCC environment is not hostile, and has not been for the 15 or so years I’ve been part of the community.
* We would notice if it became so; I’m not sure about the idea that the wool can be so easily pulled over our eyes.

I confess to being concerned with the equation “code” > “conduct”; it is not so in my professional or personal experience. I have seen an engineering team suffer great losses of performance from the excesses of one (near genius, but very antisocial) member - the balance was not met. Likewise, it has been seen to be a poor balance when there are three gifted individuals in a household but one persecutes the other two (for diagnosed reasons) .. again, balance is not met.

One could see the equation becoming a self-fulfilling prophecy, viz.

* let us say compilers are complex, and any significant input over a length of time will require a reasonably competent engineer.
* reasonably competent engineers with a good social habit are welcome everywhere
* reasonably competent engineers with poor social habit are welcome in few places.
  - those few places will easily be able to demonstrate that their progress is made despite the poor atmosphere, with no way to know that something better was possible.

Responding to the thread in general ..

* Please could we try to seek consensus? - it is disappointing to see people treating this as some kind of point-scoring game, when to those working on the compiler day to day it is far from a game.

Iain
Re: removing toxic emailers
Paul Koning wrote:

On Apr 15, 2021, at 11:17 AM, Iain Sandoe wrote:

...

responding in general to this part of the thread.
* The GCC environment is not hostile, and has not been for the 15 or so years I’ve been part of the community.
* We would notice if it became so, I’m not sure about the idea that the wool can be so easily pulled over our eyes.

responding to the thread in general..
* Please could we try to seek consensus? - it is disappointing to see people treating this as some kind of point-scoring game when to those working on the compiler day to day it is far from a game.

I'm not sure what the consensus is you're looking for.

Let us start from the observations above and try to add in the issues that have arisen in the recent threads - and end with a proposal.

* One could be glib and suggest that discussions about governance and project process should be directed to a different (new) mailing list - but that does not solve the problem(s), it just moves them.
  - (however, it might still be valuable to folks who wish to have an automatic filter for these topics or have no interest in them).

* I think we are all clear about the primary role of the gcc@ and gcc-patches@ lists - primarily technical discussion about current and future projects, and patch review, respectively.
  - we have a history of politely redirecting usage questions to the help list (while often answering them anyway), likewise with the irc channel.
  - I believe we also have a history of encouraging input and discussing the technical issues (reasonably) calmly.
  - to the best of my recollection, I have never seen an idea excluded on any basis other than technical content.

* Without a specific list to process input on governance and project process, this list is a reasonable choice.

———

The observations above, copied from my first email, together with a belief that most of the current and potential contributors to GCC would prefer to function in a constructive environment, lead to the following proposition:

* that, since the lists are generally constructive without additional management (OK, there are occasional heated technical debates), it implies that this community by-and-large is already able to function without heavy-handed moderation.

* It has been postulated that there could be valued technical input from people who have difficulty in interacting in a constructive manner (through no fault of their own).

* no-one else would be making valued input; either they would be a spammer or intentionally acting in a destructive manner.
  - Let us propose that someone capable of working on a complex system such as a compiler would be able to read and act on a set of guidelines.
  - ergo, I propose that we have a set of guidelines to which someone who is being disruptive can be pointed.

* (Probably?) no-one has any issue with a spammer being thrown off the list, for which I guess there is a process already
  - it would be reasonable to expect that genuine contributors (even with difficulties) would make an effort to follow guidelines
  - and that someone who was making no effort to do so is not really any different from a spammer.

Of course, guidelines require debate (but I doubt that the right set would be much different from the obvious for this group).

It seems to me that most of the strife in the last two weeks comes from a few key things:
- attacking the person delivering a message rather than debating the message
- introducing topics spurious and unrelated to the actual debate
- trying to equate the process of this project with party or international Politics.

===

So .. in summary:

1/ I propose that we do have written guidelines, to which someone behaving in a non-constructive manner can be pointed.

2/ if those guidelines *are the consensus* of this group and someone is unable to follow them (given some reasonable chance to amend, as is customary in matters such as employment law here, at least), then they are treated no differently from any other spam.
  * although one might lose some notionally valuable input, the judgement here is that the net benefit of such input is negative.

3/ I would recommend, on the basis of another online community (about music) to which I belong, to suggest that Politics (party or international) and Religion are better discussed in other forums and are exceedingly unlikely to affect a technical decision on the progress of GCC - such discussions almost never end well. (I’d believe that any valid exception to the need to heed some political situation would be readily recognised by the participants here.)

4/ It is likely that we can extract much of the basic guidelines from any other writing on communicating constructively - after all, it is how 99.99% of this list traffic is managed.
Re: removing toxic emailers
Christopher Dimech wrote: Sent: Friday, April 16, 2021 at 7:21 AM From: "Iain Sandoe" To: "GCC Development" Subject: Re: removing toxic emailers Paul Koning wrote: On Apr 15, 2021, at 11:17 AM, Iain Sandoe wrote: ... responding in general to this part of the thread. * The GCC environment is not hostile, and has not been for the 15 or so years I’ve been part of the community. * We would notice if it became so, I’m not sure about the idea that the wool can be so easily pulled over our eyes. responding to the thread in general.. * Please could we try to seek consensus? - it is disappointing to see people treating this as some kind of point-scoring game when to those working on the compiler day to day it is far from a game. I'm not sure what the consensus is you're looking for. Let us start from the observations above and try to add in the issues that have arisen in the recent threads - and end with a proposal * One could be glib and suggest that discussions about governance and project process should be directed to a different (new) mailing list - but that does not solve the problem(s) it just moves them. - (however, it might still be valuable to folks who wish to have an automatic filter for these topics or have no interest in them). * I think we are all clear about the primary role of the gcc@ and gcc-patches@ lists - primarily technical discussion about current and future projects and patch review respectively. - we have a history of politely redirecting usage questions to the help list (while often answering them anyway), likewise with the irc channel. - I believe we also have a history of encouraging input and discussing the technical issues (reasonably) calmly. - to the best of my recollection I have never seen an idea excluded on any basis than technical content. * Without a specific list to process input on governance and project process, this list is a reasonable choice. ——— The observations above, copied from my first email, together with a belief that most of the current and potential contributor to GCC would prefer to function in a constructive environment, lead to the following proposition: * that, since the lists are generally constructive without additional management, (OK. there are occasional heated technical debates), it implies that this community by-and-large is already able to function without heavy-handed moderation. * It has been postulated that there could be valued technical input from people who have difficulty in interacting in a constructive manner (through no fault of their own). * no-one else would be making valued input, either they would be a spammer or intentionally acting in a destructive manner. - Let us propose that someone capable of working on a complex system such as a compiler would be able to read and act on a set of guidelines. - ergo, I propose that we have a set of guidelines to which someone who is being disruptive can be pointed. * (Probably?) no-one has any issue with a spammer being thrown off the list, for which I guess there is a process already - it would be reasonable to expect that genuine contributors (even with difficulties) would make an effort to follow guidelines - and that someone who was making no effort to do so is not really any different from a spammer. Of course, guidelines require debate (but I doubt that the right set would be much different from the obvious for this group). 
is seems to me that most of the strife in the last two weeks comes from a few key things: - attacking the person delivering a message rather than debating the message - introducing topics spurious and unrelated to the actual debate - trying to equate the process of this project with party or international Politics. === So .. in summary: 1/ I propose that we do have written guidelines, to which someone behaving in a non-constructive manner can be pointed. 2/ if those guidelines *are the consensus* of this group and someone is unable to follow them (given some reasonable chance to amend as is customary in matters such as employment law here, at least), then they are treated no differently from any other spam. Proposing the guidelines essentially means that the community accepts the fact that many of us are incapable of navigate everyday problems and dilemmas by making “right” decisions based on the use of good judgment and values rather than sterile sets of rules and conventions that typically disregard the individual, the particular, or the discrete. However, that isn’t what I wrote - what I wrote was the opposite; that history shows that almost everyone communicating on these lists can do so constructively *without* recourse to written guidelines. It is not the general case that has precipitated this discussion but, rather, the exception
Re: [patch] fix _OBJC_Module defined but not used warning
Hi Aldy, On 7 Jun 2015, at 12:37, Aldy Hernandez wrote: > On 06/07/2015 06:19 AM, Andreas Schwab wrote: >> Another fallout: >> >> FAIL: obj-c++.dg/try-catch-5.mm -fgnu-runtime (test for excess errors) >> Excess errors: >> : warning: '_OBJC_Module' defined but not used [-Wunused-variable] > > check_global_declarations is called for more symbols now. All the defined > but not used errors I've seen in development have been legitimate. For > tests, the tests should be fixed. For built-ins such as these, does the > attached fix the problem? > > It is up to the objc maintainers, we can either fix this with the attached > patch, The current patch is OK. > or setting DECL_IN_SYSTEM_HEADER. This seems a better long-term idea; however, I would prefer to go through all the cases where it would be applicable (including for the NeXT runtime) and apply that change as a coherent patch. At the moment dealing with the NeXT stuff is a bit hampered by pr66448. thanks, Iain
Re: [patch, build] Restore bootstrap in building libcc1 on darwin
Hi Rainer, On 4 Dec 2014, at 13:32, Rainer Orth wrote: > FX writes: > >> 10-days ping >> This restores bootstrap on a secondary target, target maintainer is OK with >> it. I think I need build maintainers approval, so please review. > > While in my testing, 64-bit Mac OS X 10.10.1 (x86_64-apple-darwin14.0.0) > now bootstraps, but 32-bit (i386-apple-darwin14.0.0) does not: > > ld: illegal text-relocation to 'anon' in > ../libiberty/pic/libiberty.a(regex.o) from > '_byte_common_op_match_null_string_p' in > ../libiberty/pic/libiberty.a(regex.o) for architecture i386 > collect2: error: ld returned 1 exit status > make[3]: *** [libcc1.la] Error 1 > make[2]: *** [all] Error 2 > make[1]: *** [all-libcc1] Error 2 For {i?86,ppc}-darwin* (i.e. m32 hosts) the PIC libiberty library is being incorrectly built. The default BOOT_CFLAGS are: -O2 -g -mdynamic-no-pic the libiberty pic build appends: -fno-common (and not even -fPIC) [NB -fPIC _won't_ override -mdynamic-no-pic, so that's not a simple way out] This means that the PIC library is being built with non-pic relocs. I have a local hack to allow build to proceed on m32-host-darwin (which I can send to you if you would like it) - however, it's not really a suitable patch for trunk... and I've not had time recently to try and fix this. If you would like to raise a PR for this, I can append the analysis there. cheers Iain
Re: [patch, build] Restore bootstrap in building libcc1 on darwin
On 4 Dec 2014, at 15:24, FX wrote:
>> Can you try adding it as
>>
>> T_CFLAGS += -mdynamic-no-pic
>>
>> in gcc/config/t-darwin instead?
>

-mdynamic-no-pic should be used to build *host* executable stuff for m32 darwin. It is not suitable for building shared libraries (hence the problem with building the PIC version of libiberty) and won't work for the target libraries for similar reasons.

If you want a "quick fix", sure, remove it from the boot cflags - but it's hiding a real issue, which is that the pic build of libiberty does not cater for the possibility that the non-pic flags cannot simply be overridden by the pic ones. Of course, it's possible that darwin is the only affected target - but I'd not want to swear to that.

Iain
Re: [patch, build] Restore bootstrap in building libcc1 on darwin
Hi Jeff, On 5 Dec 2014, at 22:40, Jeff Law wrote: > On 12/05/14 15:34, Dominique Dhumieres wrote: >>> As I've tried to explain, that is IMHO wrong though. >>> If what you are after is the -B stuff too, then perhaps: >>> ... >> >> Sorry but it does not work: > BTW, thanks for working with Jakub on this. We're going to be getting a > Darwin box for Jakub and other folks in the Red Hat team to use when the need > arises to dig into these kind of issues. That will be most welcome, we Darwin folks have only volunteer cycles to work on stuff, and those tend to get used up really quickly. Iain
Re: Regular darwin builds
Hi FX, On 15 Dec 2014, at 21:11, FX wrote: > Hi all, > > I’ve set up daily builds and regtests on a darwin box. The results should > appear directly on gcc-testresults > (https://gcc.gnu.org/ml/gcc-testresults/current/). > This should, in the future, help track down regressions affecting darwin > (PIC-related, in particular!) Great!! If you want me to build a bootstrap compiler including Ada - then let me know (I am sure that a darwin13 4.9 Ada compiler should be suitable for bootstrapping trunk on darwin14). I can make the most-stripped-down possible (c,c++,ada) and upload it wherever we can find some common space. > The hardware is new, the OS is the latest and greatest > (x86_64-apple-darwin14), and will be updated to keep it that way. However, > it’s not very powerful (it’s a Mac Mini). Bootstrap (C, C++, Obj-C, Obj-C++, > Fortran, Java, LTO) takes about 2 hours, regtesting both 32 and 64-bit takes > a bit over 3 hours. My 1-year-old mac mini with 16G of RAM takes (on darwin13) ~ 74mins for all langs including Ada, so memory is possibly the determining factor (unless we have some slow-down with darwin14 :( ). > I plan to schedule it for: > > - daily bootstrap + regtest of trunk > - weekly bootstrap of latest release branch (currently 4.9) > > If you have other ideas, I’m open to suggestions. I wonder if we can cook up an incremental build scheme, that tries to do every commit that doesn't touch config files. Also, if/when I get some ns (hopefully over the holiday period things might calm down enough) I'll push my WIP branches and prototype GAS ports to github (and some of the patches to the list). Dunno if you would consider it worth building the "vendor branch" for 4.9 once in a while. One of my colleagues got his hands on two G5s and I made up some darwin9 boot disks for him - we might well try to resurrect a powerpc-darwin9 build-bot (shared between GCC and LLVM, since we have ports for both to test). My machines are always running at capacity just testing patched branches, so not much use for this kind of stuff. cheers and thanks for the work! Iain
Re: Regular darwin builds
On 16 Dec 2014, at 19:38, Dominique d'Humières wrote: > Looking at your results for gcc 5.0, I see a lot of gcc.dg/ubsan/* failures I > don’t see in my tests. Any idea why? I think that there will be ubsan fails until the library is installed (which implies that the testing is not setting the right DYLD_LIBRARY_PATH). Iain > >> Le 15 déc. 2014 à 22:11, FX a écrit : >> >> Hi all, >> >> I’ve set up daily builds and regtests on a darwin box. The results should >> appear directly on gcc-testresults >> (https://gcc.gnu.org/ml/gcc-testresults/current/). >> This should, in the future, help track down regressions affecting darwin >> (PIC-related, in particular!). >> >> The hardware is new, the OS is the latest and greatest >> (x86_64-apple-darwin14), and will be updated to keep it that way. However, >> it’s not very powerful (it’s a Mac Mini). Bootstrap (C, C++, Obj-C, Obj-C++, >> Fortran, Java, LTO) takes about 2 hours, regtesting both 32 and 64-bit takes >> a bit over 3 hours. >> >> I plan to schedule it for: >> >> - daily bootstrap + regtest of trunk >> - weekly bootstrap of latest release branch (currently 4.9) >> >> If you have other ideas, I’m open to suggestions. >> >> FX >
Re: Regular darwin builds
On 16 Dec 2014, at 20:40, Dominique d'Humières wrote: > >> Another testsuite issue on darwin is that testsuite doesn’t clean up the >> .dSYM directories it generates. This gets really annoying on my autotester :( > > I have a patch for that, but Iain does not like it!-( Hmm .. I like the patch in principle, ... the problem is that it doesn't clean up when one does cross-testing or installed testing - so it needed tweaking to use the right approach to deleting files on the remote/host - we (erm, probably I, in truth) never got around to finding the right recipe. Might I suggest pulling it out of storage - and getting a review, perhaps from Mike who might be able to identify the best place to do the job. Iain
RE: Announcing Iain Sandoe as Objective-C/C++ maintainer
Hello Jeff,

> I'm pleased to announce that Iain Sandoe has been appointed as a maintainer
> for the Objective-C and Objective-C++ front-ends.

Thanks! Let's hope there's time to fit some modernisation in the next stage #1.

Iain

> Iain, please add yourself as a maintainer for those front-ends in the
> MAINTAINERS file.

Index: ChangeLog
===================================================================
--- ChangeLog	(revision 219524)
+++ ChangeLog	(working copy)
@@ -1,3 +1,8 @@
+2015-01-13  Iain Sandoe
+
+	* MAINTAINERS (Language Front Ends Maintainers): Add myself as an
+	objective-c/c++ front end maintainer.
+
 2015-01-13  Marek Polacek
 
 	* MAINTAINERS (Reviewers): Add self as C front end reviewer.

Index: MAINTAINERS
===================================================================
--- MAINTAINERS	(revision 219524)
+++ MAINTAINERS	(working copy)
@@ -160,6 +160,7 @@
 java			Andrew Haley
 java			Tom Tromey
 objective-c/c++		Mike Stump
+objective-c/c++		Iain Sandoe
 Various Maintainers
Re: [patch, build] Restore bootstrap in building libcc1 on darwin
On 26 Jan 2015, at 14:13, Rainer Orth wrote: > FX writes: > >>> The default BOOT_CFLAGS are: -O2 -g -mdynamic-no-pic >>> the libiberty pic build appends: -fno-common (and not even -fPIC) [NB >>> -fPIC _won't_ override -mdynamic-no-pic, so that's not a simple way out] >>> This means that the PIC library is being built with non-pic relocs. >> >> config/mh-darwin says that -mdynamic-no-pic is there because it “speeds >> compiles by 3-5%”. I don’t think we care about speed when the bootstrap >> fails, so can we remove it altogether? > > Darwin/i686 still doesn't bootstrap without this patch, I believe. > Shouldn't it be applied to trunk before GCC 5 ships, rather than leaving > that target broken? I'll try and post a patch to fix it properly this week.. Iain
clarification on the intent of X86_64 psABI vector return.
Hi,

For a processor that supports SSE, but not AVX, the following code:

typedef int __attribute__((mode(QI))) qi;
typedef qi __attribute__((vector_size (32))) v32qi;

v32qi foo (int x)
{
  v32qi y = {'0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f',
             '0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f'};
  return y;
}

produces a warning "warning: AVX vector return without AVX enabled changes the ABI [-Wpsabi]".

So - the question is what the resultant ABI is in the changed case (since __m256 is supported for such processors).

Looking at the psABI v1.0:

* pp24 Returning of Values

The returning of values is done according to the following algorithm:

  • Classify the return type with the classification algorithm.
  …
  • If the class is SSE, the next available vector register of the sequence %xmm0, %xmm1 is used.
  • If the class is SSEUP, the eight byte is returned in the next available eightbyte chunk of the last used vector register.
  ...

* classification algorithm: pp20

  • Arguments of type __m256 are split into four eightbyte chunks. The least significant one belongs to class SSE and all the others to class SSEUP.
  • Arguments of type __m512 are split into eight eightbyte chunks. The least significant one belongs to class SSE and all the others to class SSEUP.

* footnote on pp21

  12 The post merger clean up described later ensures that, for the processors that do not support the __m256 type, if the size of an object is larger than two eightbytes and the first eightbyte is not SSE or any other eightbyte is not SSEUP, it still has class MEMORY. This in turn ensures that for processors that do support the __m256 type, if the size of an object is four eightbytes and the first eightbyte is SSE and all other eightbytes are SSEUP, it can be passed in a register. This also applies to the __m512 type. That is for processors that support the __m512 type, if the size of an object is eight eightbytes and the first eightbyte is SSE and all other eightbytes are SSEUP, it can be passed in a register, otherwise, it will be passed in memory.

---

However: the case where the processor does *not* support __m256 but the first eightbyte *is* SSE and the following eightbytes *are* SSEUP is not clarified.

The intent for SSE seems clear - use a reg.
The intent for the following SSEUP is less clear.

Nevertheless, it seems to imply that the intent is that, for processors with SSE, __m256 (and __m512) returns should be passed in xmm0:1 (:3, maybe).

figure 3.4 pp23 does not clarify xmm* use for vector return at all - only mentioning floating point.

= status

In any event, GCC passes the vec32 return in memory; LLVM, conversely, passes it in xmm0:1 (at least for the versions I've tried), which leads to an ABI discrepancy when GCC is used to build code on systems based on LLVM.

Please could the X86 maintainers clarify the intent (and maybe consider enhancing the footnote classification notes to make things clearer)? - and then we can figure out how to deal with the systems that are already implemented - and how to move forward.

(As an aside, in any event, it seems inefficient to pass through memory when at least xmm0:1 are already set aside for return value use.)

thanks
Iain
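To make the consequence concrete, here is a minimal two-file sketch (the file names, flags and values are illustrative - they are not taken from an existing testcase): if the two translation units below are built for an SSE-only target by compilers that disagree on the return convention and are then linked together, the caller looks for the 32-byte vector in the wrong place.

/* ret.c - build with one compiler, e.g. "gcc -O2 -c ret.c".  */
typedef char v32qi __attribute__ ((vector_size (32)));

v32qi
make_vec (void)
{
  v32qi v = { 1, 2, 3, 4 };	/* remaining elements are zero-initialised.  */
  return v;			/* in memory, or in %xmm0:%xmm1, depending on the compiler.  */
}

/* use.c - build with the other compiler, e.g. "clang -O2 -c use.c", then link both objects.  */
typedef char v32qi __attribute__ ((vector_size (32)));
extern v32qi make_vec (void);

int
main (void)
{
  v32qi v = make_vec ();
  return v[0] == 1 ? 0 : 1;	/* reads garbage (or crashes) if caller and callee disagree.  */
}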
Re: clarification on the intent of X86_64 psABI vector return.
> On 30 Oct 2018, at 13:26, H.J. Lu wrote: > > In Tue, Oct 30, 2018 at 4:28 AM Iain Sandoe wrote: >> > > Please open a bug to keep track. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87812
Re: GCC 7.4 Status Report (2018-11-22), GCC 7.4 RC1 scheduled for next week
> On 22 Nov 2018, at 09:30, Richard Biener wrote: > > > Status > == > > The GCC 7 branch is open for regression and documentation fixes. > > I plan to do a GCC 7.4 release in a few weeks starting with a > first release candidate at the end of next week, likely Nov. 29th. > > Please go through your assigned regression bugs and see which > fixes can be safely backported to the branch and consider smoke > testing the state of your favorite target architecture. In addition to a small number of Darwin-specific patches, I would like to back-port the fix for PR81033 which although it is not mentioned as a regression, is a wrong-code bug on 7.x. The fix was applied to trunk in August, and to 8.x in Oct, with no reported problems. I checked Darwin and Linux with the backport over the weekend with no issues shown. OK? Iain
Re: GCC 7.4 Release Candidate available from gcc.gnu.org
> On 29 Nov 2018, at 22:53, Bill Seurer wrote: > > On 11/29/18 04:24, Richard Biener wrote: >> A release candidate for GCC 7.4 is available from >> ftp://gcc.gnu.org/pub/gcc/snapshots/7.4.0-RC-20181129/ >> and shortly its mirrors. It has been generated from SVN revision 266611. >> I have so far bootstrapped and tested the release candidate on >> x86_64-unknown-linux-gnu. Please test it and report any issues to >> bugzilla. >> If all goes well I'd like to release GCC 7.4 at the end of next week. > > I bootstrapped and tested on powerpc64le-unknown-linux-gnu and > powerpc64-unknown-linux-gnu and all went well. I bootstrapped 266611 on x86_64-darwin10, x86_64-darwin16, powerpc-darwin9 (one small patch needed for Ada) without issue (test results nominal). Iain
Re: Replacing DejaGNU
> On 14 Jan 2019, at 13:53, Rainer Orth wrote:
>
> "MCC CS" writes:
>
>> I've been running the testsuite on my macOS, on which
>> it is especially unbearable. I want to (at least try to)
>
> that problem may well be macOS specific: since at least macOS 10.13
> (maybe even 10.12; cannot currently tell for certain) make -jN check
> times on my Mac mini skyrocketed with between 60 and 80% system time.
> It seems this is due to lock contention on one specific kernel lock, but
> I haven't been able to find out more yet.

this PR mentions the compilation, but it's even more apparent on test.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84257

* Assuming SIP is disabled.

Some testing suggests that each DYLD_LIBRARY_PATH entry adds around 2ms to each exe launch.
So .. when you're doing something that's a lot of work per launch, not much is seen - but when you're doing things with a huge number of exe launches - e.g. configuring or running the test suite - it bites.

A work-around is to remove the RPATH_ENVAR variable setting in the top level Makefile.in (which actually has the same effect as running things with SIP enabled).

=== Possible solution (partial hacks locally, not ready for posting)

My current investigation (targeted at the GCC 10 time frame, even if it is subsequently back-ported) is to replace all use of absolute pathnames in GCC libraries with @rpath/xxx and figure out a way to get the compiler to auto-add the relevant rpaths to exes (so that a fixed installation of GCC behaves the same way as it does currently).

=== DejaGNU on macOS...

DejaGNU / expect are not fantastic on macOS, even given the comments above - it's true. Writing an interpreter/funnel for the testsuite has crossed my mind more than once.

However, I suspect it's a large job, and it might be more worth investing any available effort in debugging the slowness in expect/dejaGNU - especially the lock contention that Rainer mentions.

> There's no such problem on other targets, not even e.g. on Mac OS X 10.7.

indeed.
Iain
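For anyone who wants to put a number on the per-launch cost locally, a rough micro-benchmark along the following lines can be used (this is a sketch of mine, not part of any posted patch; "./trivial" is a placeholder for a locally-built do-nothing executable - using a binary from /usr/bin would mean SIP, if enabled, strips the DYLD_* environment before it can be measured). Run it once with DYLD_LIBRARY_PATH unset and once with several entries set, and compare the averages.

/* launch-cost.c -- estimate the average cost of launching an executable.  */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

int
main (void)
{
  const int runs = 200;
  struct timespec t0, t1;

  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < runs; i++)
    {
      pid_t pid = fork ();
      if (pid == 0)
	{
	  execl ("./trivial", "trivial", (char *) NULL);
	  _exit (127);		/* exec failed.  */
	}
      int status;
      waitpid (pid, &status, 0);
    }
  clock_gettime (CLOCK_MONOTONIC, &t1);

  double total_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
		    + (t1.tv_nsec - t0.tv_nsec) / 1.0e6;
  printf ("%d launches, %.3f ms per launch\n", runs, total_ms / runs);
  return 0;
}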
Re: Replacing DejaGNU
Hey Rainer, > On 15 Jan 2019, at 17:27, Rainer Orth wrote: >>> On 14 Jan 2019, at 13:53, Rainer Orth wrote: >>> >>> "MCC CS" writes: >>> I've been running the testsuite on my macOS, on which it is especially unbearable. I want to (at least try to) >>> >>> that problem may well be macOS specific: since at least macOS 10.13 >>> (maybe even 10.12; cannot currently tell for certain) make -jN check >>> times on my Mac mini skyrocketed with between 60 and 80% system time. >>> It seems this is due to lock contention on one specific kernel lock, but >>> I haven't been able to find out more yet. >> >> this PR mentions the compilation, but it’ even more apparent on test. >> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84257 >> >> * Assuming SIP is disabled. >> >> Some testing suggests that each DYLD_LIBRARY_PATH entry adds around 2ms to >> each exe launch. >> So .. when you’re doing something that’s a lot of work per launch, not much >> is seen - but when you’re doing things with a huge number of exe launches - >> e.g. configuring or running the test suite, it bites. >> >> A work-around is to remove the RPATH_ENVAR variable setting in the top >> level Makefile.in (which actually has the same effect as running things >> with SIP enabled) > > this change alone helped tremendously: a bootstrap on macOS 10.14 on > 20181103 took > 180041.05 real 96489.89 user180864.44 sys > > while the current one was only > >44886.30 real 74101.86 user 36879.75 sys > > However, not unexpectedly quite a number of new failures occur, > e.g. many (all?) plugin tests FAIL with > > cc1: error: cannot load plugin ./selfassign.so > dlopen(./selfassign.so, 10): Symbol not found: __ZdlPvm > Referenced from: ./selfassign.so > Expected in: /usr/lib/libstdc++.6.dylib > in ./selfassign.so > compiler exited with status 1 > > I'll still have to check which are affected this way. I’m afraid that with this (or with SIP enabled) “uninstalled testing” can’t work, the libraries have to be found from their intended installed path, so you have to “make install && make check” ** and remember to delete the install before building the next revision... >> === DejaGNU on macOS... >> >> DejaGNU / expect are not fantastic on macOS, even given the comments above >> - it’s true. Writing an interpreter/funnel for the testsuite has crossed >> my mind more than once. >> >> However, I suspect it’s a large job, and it might be more worth investing >> any available effort in debugging the slowness in expect/dejaGNU - >> especially the lock contention that Rainer mentions. > > Indeed: I found it when trying to investigate the high system time with > lockstat. However, I don't know a way how to relate the lock address > mentioned there to some lock in the darwin sources. Well.. let’s take this offline - or park it in a BZ somewhere, if you can be more specific - would be happy to poke at it a bit : if it’s a genuine OS bug, we can file a radar - but that doesn’t help the system versions out of support. (and there’s enough useful h/w out there that’s tied to 10.11 etc) Iain
Re: -fno-common
> On 28 Jan 2019, at 15:58, Bernhard Schommer wrote:
>
> I would like to know if the handling of the option -fno-common has
> changed between version 7.3 and 8.2 for x86. I tried it with the
> default system version of OpenSUSE and for example:
>
> const int i;
>
> is placed in the .bss section. With a newer self-compiled version 8.2
> the same variable is placed in the section .rodata. I could not find
> any information in the Changelog whether the behavior has changed and
> thus would like to know if there was any change.

If you look in, say, http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf at section 6.7.3 "Type qualifiers", there's a footnote:

  132) The implementation may place a const object that is not volatile in a read-only region of storage. Moreover, the implementation need not allocate storage for such an object if its address is never used.

---

So, I don't think you can make any assumption about whether such items appear in .bss or .rodata (or const on darwin, for example).

HTH,
Iain
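As an illustration (my own sketch, not taken from the original report), all of the following placements are conforming, and the section chosen can legitimately differ between compilers, versions and targets; looking at the object file (e.g. with nm or objdump) is the only way to see what a particular compiler actually did.

/* const-placement.c -- where these objects end up is implementation-defined.  */
const int i;		/* tentative, zero-valued: .bss, a common symbol, or a read-only section.  */
const int j = 42;	/* initialised: typically a read-only data section.  */

const int *
address_of_i (void)
{
  return &i;		/* taking the address forces storage to be allocated.  */
}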
Re: Iain Sandoe appointed Darwin co-maintainer
Hi David, > On 19 Apr 2019, at 20:28, David Edelsohn wrote: > > I am pleased to announce that the GCC Steering Committee has > appointed Iain Sandoe as Darwin co-maintainer. thanks to the SC and folks who’ve commented in public and private! > Iain, please update your listing in the MAINTAINERS file. Done. thanks, Iain
Re: Second GCC 9.1 Release Candidate available from gcc.gnu.org
Hi Jakub,

> On 30 Apr 2019, at 14:12, Jakub Jelinek wrote:
>
> The second release candidate for GCC 9.1 is available from
>
> https://gcc.gnu.org/pub/gcc/snapshots/9.0.1-RC-20190430/
> ftp://gcc.gnu.org/pub/gcc/snapshots/9.0.1-RC-20190430
>
> and shortly its mirrors. It has been generated from SVN revision 270689.

powerpc-apple-darwin9
i686-apple-darwin10
x86_64-apple-darwin10
x86_64-apple-darwin14
x86_64-apple-darwin16

have been bootstrapped and tested using GCC as the bootstrap compiler (incl. Ada).

x86_64-apple-darwin18 has been bootstrapped using xc10.2 command line tools and SDK (no Ada).
x86_64-apple-darwin18 bootstrap using GCC succeeded (incl. Ada), but testing is not complete yet.

test results:
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00103.html
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00104.html
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00106.html
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00107.html
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00108.html
https://gcc.gnu.org/ml/gcc-testresults/2019-05/msg00109.html

thanks
Iain
Re: Documentation for gcc 9.1 changes
> On 4 May 2019, at 03:41, Andrew Roberts wrote: > looking at the changes for configuration in gcc 9.1, I noticed: > > 1) New configure options > > OTOOL/OTOOL_FOR_TARGET: Which I assume from google is the Darwin ldd > replacement It’s actually the Darwin equivalent for many of the facilities of ‘objdump’, and is usually discovered automatically by the configure tool. It’s not relevant to any other target. > GDC_FOR_TARGET: Which with a bit of guess work I assume is the Gnu_D_Compiler > > Is this stuff documented anywhere? > > 2) D language documentation > > Also looking at the D documentation it appears missing in action: > > No manual here: https://gcc.gnu.org/onlinedocs/ > > and the release notes just say: "Support for the D programming language has > been added to GCC, implementing version 2.076 of the language and run-time > library." > > If this stuff is documented elsewhere a link to said documentation would be > useful. > > 3) Stdlibc++ > > Release notes reference parallel algorithms requiring TBB 2018 or newer, > again guess work suggests this is Thread Building Blocks. It would be nice to > explicitly say that, and provide links to implementations. > > How is TBB detected and selected? I didn't see any configure switches > relating to this either in the toplevel configure or stdc++ configure files. > Can it be built in tree etc? > > Also while TBB may not be a prerequisite should it be at least documented on > that page: https://gcc.gnu.org/install/prerequisites.html (or somewhere) > > The TBB release notes (written by Intel) seem to limit things to Intel or > compatible processors, wikipedia suggests a wider range (sparc?, powerpc). Is > ARM supported? Again it would be nice to document what range of systems this > can work on. > > Thanks > > Andrew > >
Re: GCC 9 linker error, missing GOMP_loop_nonmonotonic_dynamic_next
> On 6 May 2019, at 09:02, Jakub Jelinek wrote:
>
> On Mon, May 06, 2019 at 02:41:41PM +0200, FX wrote:
>> Hi gcc and gfortran developers,
>>
>> While testing GCC 9.1.0 before shipping it as part of Homebrew for macOS,
>> we're seeing the following OpenMP-based failure when recompiling several
>> software packages with GCC 9. It includes both C++ and Fortran codes, which
>> were working fine with the exact same setup and GCC 8.3.0.
>>
>> The missing symbols we're seeing are always in this list:
>> _GOMP_loop_nonmonotonic_dynamic_next
>> _GOMP_loop_nonmonotonic_dynamic_start
>> _GOMP_loop_ull_nonmonotonic_guided_next
>> _GOMP_loop_ull_nonmonotonic_guided_start
>
> Those are certainly exported from my GCC 9 libgomp.so.1.0.0 (at least on
> Linux but I don't see how it could not be elsewhere).

I can't, at present, see how it would be different on Darwin.

> So, the most likely explanation would be you are compiling something with
> GCC 9, but linking against GCC 8 or earlier version of libgomp.

Could you file a PR with a reproducer - presumably even a trivial OpenMP program will fail?

thanks
Iain
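For reference, a trivial reproducer would presumably look something like this (my sketch, not the original failing package); as I understand it, with GCC 9 a schedule(dynamic) worksharing loop uses the nonmonotonic libgomp entry points, so building this with "gcc-9 -fopenmp" and then linking or running against an older (GCC 8) libgomp should show exactly these undefined symbols.

/* gomp-repro.c -- minimal OpenMP test for the nonmonotonic loop entry points.  */
#include <stdio.h>

int
main (void)
{
  long sum = 0;
#pragma omp parallel
  {
#pragma omp for schedule(dynamic) reduction(+: sum)
    for (int i = 0; i < 1000; i++)
      sum += i;
  }
  printf ("sum = %ld\n", sum);
  return 0;
}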
Installation question.
Right now, we don't install a "cc" [we install gcc] but we do install "c++" [we also install g++, of course].

Some configure scripts (and one or two places in the testsuite) do try to invoke 'cc', which can lead to inconsistent tools being used if a GCC install is ahead in the PATH of some other install that does provide cc.

Is there a reason for this omission, or is it unintentional?

thanks,
Iain
Re: Installation question.
> On 13 May 2019, at 17:33, Joseph Myers wrote: > > On Sun, 12 May 2019, Segher Boessenkool wrote: > >> "cc" isn't POSIX, since over a decade I think. "c99" is POSIX, and it is >> a shell script calling whatever "gcc" is first in the PATH, on most distros. > > Note that correct semantics for "c99" mean it's not a trivial wrapper; > some option reordering is needed to follow the POSIX rule that -U options > take precedence over -D options regardless of ordering on the command > line; see discussion in bug 40960 regarding support for installing variant > driver programs such as c99 with such differences in how they behave. (I > don't know if any distributions actually have wrappers that deal with > that, e.g. by using different specs in their wrapper. Another such POSIX > issue is that according to POSIX, dlopen et al should be found without > needing to special any -l options, but that could be dealt with by > adjusting the libc.so linker script, on systems using glibc.) Darwin (Xcode release, not GCC) has stand-alone exes for c89 and c99 which appear to implement the semantics you describe. Adding these + “cc” to the Darwin GCC installation is part of what prompted my question. [the code for the Darwin c89/c99 is open-sourced and based on a freeBSD 2002 edition, so I probably don’t need to generate something new]. It seems, from this thread that there’s no specific reason for me _not_ to install a ‘cc’ for Darwin’s GCC installation - at least it will make c++/cc consistent. Iain
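As a sketch of the kind of reordering Joseph describes (entirely hypothetical - this is not the Darwin c89/c99 source nor a complete driver), a wrapper only has to make sure that every -U ends up after any -D, so that the preprocessor's "last one wins" rule gives the POSIX-required precedence to -U:

/* c99-wrap.c -- hypothetical c99 wrapper that reorders -U after -D.
   A real driver would also need to handle "-U name" / "-D name" written
   as two arguments, response files, and so on.  */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  /* "gcc" "-std=c99" + the user's arguments (reordered) + NULL.  */
  char **nargv = malloc ((argc + 3) * sizeof (char *));
  int n = 0;

  if (!nargv)
    return 127;

  nargv[n++] = "gcc";
  nargv[n++] = "-std=c99";

  /* First everything that is not a -U option (this keeps the relative
     order of -D options, -I paths, input files, etc.)...  */
  for (int i = 1; i < argc; i++)
    if (strncmp (argv[i], "-U", 2) != 0)
      nargv[n++] = argv[i];

  /* ... then the -U options, so that they take precedence over any -D
     for the same macro, regardless of the order given by the user.  */
  for (int i = 1; i < argc; i++)
    if (strncmp (argv[i], "-U", 2) == 0)
      nargv[n++] = argv[i];

  nargv[n] = NULL;
  execvp ("gcc", nargv);
  return 127;			/* exec failed.  */
}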
mfentry and Darwin.
Hi Uros, It seems to me that (even if it was working “properly”, which it isn't) ‘-mfentry’ would break ABI on Darwin for both 32 and 64b - which require 16byte stack alignment at call sites. For Darwin, the dynamic loader enforces the requirement when it can and will abort a program that tries to make a DSO linkage with the stack in an incorrect alignment. We previously had a bug against profiling caused by exactly this issue (but when the mcount call was in the post-prologue position). Actually, I’m not sure why it’s not an issue for other 64b platforms that use the psABI (AFAIR, it’s only the 32b case that’s Darwin-specific). Anyway, my current plan is to disable mfentry (for Darwin) - the alternative might be some kind of “almost at the start of the function, but needing some stack alignment change”, I’m interested in if you know of any compelling use-cases that would make it worth finding some work-around instead of disabling. thanks Iain
Re: mfentry and Darwin.
Hi Uros, > On 21 May 2019, at 19:36, Uros Bizjak wrote: > > On Tue, May 21, 2019 at 6:15 PM Iain Sandoe wrote: >> >> It seems to me that (even if it was working “properly”, which it isn't) >> ‘-mfentry’ would break ABI on Darwin for both 32 and 64b - which require >> 16byte stack alignment at call sites. >> >> For Darwin, the dynamic loader enforces the requirement when it can and will >> abort a program that tries to make a DSO linkage with the stack in an >> incorrect alignment. We previously had a bug against profiling caused by >> exactly this issue (but when the mcount call was in the post-prologue >> position). >> >> Actually, I’m not sure why it’s not an issue for other 64b platforms that >> use the psABI (AFAIR, it’s only the 32b case that’s Darwin-specific). > > The __fentry__ in glibc is written as a wrapper around the call to > __mcount_internal, and is written in such a way that it compensates > stack misalignment in a call to __mcount_internal. __fentry__ survives > stack misalignment, since no xmm regs are saved to the stack in the > function. Well, we can’t change Darwin’s libc to do something similar (and anyway the dynamic loader would also need to know that this was a special as well to avoid aborting the exe). ... however we could do a dodge where some shim code was inserted into any TU that used mfentry to redirect the call to an ABI-compliant launchpad… etc. etc. It seems we can’t instrument “exactly at the entry” .. only “pretty close to it”. >> Anyway, my current plan is to disable mfentry (for Darwin) - the alternative >> might be some kind of “almost at the start of the function, but needing some >> stack alignment change”, >> >> I’m interested in if you know of any compelling use-cases that would make it >> worth finding some work-around instead of disabling. > > Unfortunately, not from the top of my head… Well, I can’t either, so for now I’m going to make a patch to disable it (since it’s not fully working in any case) .. if anyone screams that there’s a major reduction in functionality - we can investigate some scheme as mentioned above. thanks Iain
Re: Long compile time of 'insn-extract.c'
Hi Thomas,

> On 18 Jun 2019, at 17:40, Thomas Schwinge wrote:
>
> I normally don't pay too much attention to how GCC builds proceed, but
> here, I was waiting for it to complete... ;-)
>
> Doing a native bootstrap build on x86_64-pc-linux-gnu, with
> '--enable-checking=yes,extra,df,fold,rtl' (is that excessive, maybe?), I
> just noticed that stage 2 (thus actually '-fno-checking') build of
> 'insn-extract.c' took more than 80 min to complete (with 1.3 GiB RAM
> resident, which seems "reasonable").
>
> I know we have some such issues compiling GCC's large generated files,
> but I don't think I noticed that one being so much slower than the
> others. In terms of file size (which doesn't say too much, I
> understand), it's with 432 KiB not exactly small, but not exactly huge
> either: 'insn-automata.c' occupies 11 MiB, and in the '-j16' build on
> "Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz" completed much earlier.
>
> I do notice that 'insn-extract.c' is special in that it contains exactly
> one function, containing one huge 'switch' statement at its top level.
>
> Is this maybe an issue/regression that somebody else also has noticed
> (recently?), and/or worth investigating?

It also has a habit of OOMing on i686-darwin10 when built with --enable-checking=yes,rtl,tree (-O1 is sufficient to suppress that issue tho).

I hadn't paid too much attention to the time taken, just that it breaks bootstrap with that checking recipe.

Iain.
Re: Dropping support of repo files (tlink)
> On 20 Jun 2019, at 15:21, David Edelsohn wrote: > > On Thu, Jun 20, 2019 at 10:05 AM Martin Liška wrote: >> >> Hi. >> >> In order to not buffer stderr output in LTO mode, I would like to remove >> support for repo files (tlink). If I'm correctly it's only used by AIX >> target. Would it be possible to drop that for the future? Is it even >> used? > > AIX currently does not support GCC LTO, but the hope was that GCC > would not do anything to specifically inhibit that ability to > eventually support that feature. AIX currently needs collect2. I > guess that AIX could try to find another mechanism when it adds > support. Darwin needs collect2, but does not use the tlink facility. thanks Iain
Re: Dropping support of repo files (tlink)
> On 21 Jun 2019, at 11:28, Jonathan Wakely wrote: > > On Fri, 21 Jun 2019 at 11:22, Martin Liška wrote: >> >> On 6/20/19 9:53 PM, Richard Biener wrote: >>> On June 20, 2019 5:09:55 PM GMT+02:00, "Martin Liška" >>> wrote: On 6/20/19 4:21 PM, David Edelsohn wrote: > On Thu, Jun 20, 2019 at 10:05 AM Martin Liška wrote: >> >> Hi. >> >> In order to not buffer stderr output in LTO mode, I would like to remove >> support for repo files (tlink). If I'm correctly it's only used by AIX >> target. Would it be possible to drop that for the future? Is it even >> used? > > AIX currently does not support GCC LTO, but the hope was that GCC > would not do anything to specifically inhibit that ability to > eventually support that feature. AIX currently needs collect2. I > guess that AIX could try to find another mechanism when it adds > support. Yes, I'm fine with collect2. I'm more precisely asking about read_report_files that lives in tlink.c. If I understand correctly, it's parsing output of linker and tries to find template implementations in a .rpo files that live on a disk. That's a legacy functionality that I'm targeting to remove. >>> >>> IIRC -frepo also works on Linux? >> >> Heh, you are right ;). Is there are consumer of that infrastructure >> or can we just drop it? > > Anybody using option 2 at > https://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html > > I have no idea if anybody is using that, but we should at least > deprecate it instead of just dropping a documented option without > warning. I should have been clearer about Darwin: collect2 is required because it wraps the calling of lto-wrapper and ld. FWIW Darwin also passes all the “-frepo” testcases, however, I’m not aware of anyone actually using case #2 from Jonathan’s post. So, AFAIK the tlink capability isn’t required for modern C++ on Darwin; but, maybe deprecation is a safer step. Iain
Re: Dropping support of repo files (tlink)
> On 21 Jun 2019, at 11:40, Martin Liška wrote: > > On 6/21/19 12:34 PM, Iain Sandoe wrote: >> >>> On 21 Jun 2019, at 11:28, Jonathan Wakely wrote: >>> >>> On Fri, 21 Jun 2019 at 11:22, Martin Liška wrote: >>>> >>>> On 6/20/19 9:53 PM, Richard Biener wrote: >>>>> On June 20, 2019 5:09:55 PM GMT+02:00, "Martin Liška" >>>>> wrote: >>>>>> On 6/20/19 4:21 PM, David Edelsohn wrote: >>>>>>> On Thu, Jun 20, 2019 at 10:05 AM Martin Liška wrote: >>>>>>>> >>>>>>>> Hi. >>>>>>>> >>>>>>>> In order to not buffer stderr output in LTO mode, I would like to >>>>>> remove >>>>>>>> support for repo files (tlink). If I'm correctly it's only used by >>>>>> AIX >>>>>>>> target. Would it be possible to drop that for the future? Is it even >>>>>>>> used? >>>>>>> >>>>>>> AIX currently does not support GCC LTO, but the hope was that GCC >>>>>>> would not do anything to specifically inhibit that ability to >>>>>>> eventually support that feature. AIX currently needs collect2. I >>>>>>> guess that AIX could try to find another mechanism when it adds >>>>>>> support. >>>>>> >>>>>> Yes, I'm fine with collect2. I'm more precisely asking about >>>>>> read_report_files >>>>>> that lives in tlink.c. If I understand correctly, it's parsing output >>>>>> of linker >>>>>> and tries to find template implementations in a .rpo files that live on >>>>>> a disk. >>>>>> That's a legacy functionality that I'm targeting to remove. >>>>> >>>>> IIRC -frepo also works on Linux? >>>> >>>> Heh, you are right ;). Is there are consumer of that infrastructure >>>> or can we just drop it? >>> >>> Anybody using option 2 at >>> https://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html >>> >>> I have no idea if anybody is using that, but we should at least >>> deprecate it instead of just dropping a documented option without >>> warning. >> >> I should have been clearer about Darwin: >> >> collect2 is required because it wraps the calling of lto-wrapper and ld. >> >> FWIW Darwin also passes all the “-frepo” testcases, however, I’m not aware >> of anyone actually >> using case #2 from Jonathan’s post. >> >> So, AFAIK the tlink capability isn’t required for modern C++ on Darwin; but, >> maybe deprecation is a >> safer step. > > Thank you for the information. > > Yes, I would be fine to deprecate that for GCC 10.1 “thinking aloud” - would it work to deprecate (or even disallow immediately if no-one is using it) LTO + frepo? (so that non-LTO frepo continues as long as anyone needs it) Iain
Re: Dropping support of repo files (tlink)
> On 21 Jun 2019, at 13:49, Jan Hubicka wrote: > > I should have been clearer about Darwin: > > collect2 is required because it wraps the calling of lto-wrapper and ld. > > FWIW Darwin also passes all the “-frepo” testcases, however, I’m not > aware of anyone actually > using case #2 from Jonathan’s post. > > So, AFAIK the tlink capability isn’t required for modern C++ on Darwin; > but, maybe deprecation is a > safer step. Thank you for the information. Yes, I would be fine to deprecate that for GCC 10.1 >>> >>> “thinking aloud” - would it work to deprecate (or even disallow immediately >>> if no-one is using it) LTO + frepo? >>> (so that non-LTO frepo continues as long as anyone needs it) >> >> Does that even work? ;) It would need an intermediate compile-step to >> instantiate templates to >> another LTO object. > > It is iterating the linker executions so it definitly makes no sense. > One problem is that in the collect2 at the time of deciding whether > to open temporary file for the linker output (which is the problem > Martin tries to solve) we do not know yet whether we will do -flto > (because that will be decided by linker plugin later) or repo > (because it works by looking for .repo files only after the link-step > failed). for “non ELF” (at least for Darwin) we decide by scanning the object files using simple object - if any LTO section is seen, we spawn lto-wrapper, else we just proceed with a normal link. That used to be horribly inefficient (it was a new “nm & grep " process per object)- but we fixed that during 9 - still too heavy? > I suppose we could block -frepo when -flto is on the command line that > is most common case for LTO builds arriving to bit odd situation that at > link-time -flto will effect nothing but colors of diagnostics > (I would really like to have this solved for gcc 10 in some way) the point is - if it can’t work, then it may as well be disallowed - and then the LTO path knows it doesn’t need to do the tlink stuff. (and tlink can remain for non-LTO frepo) Iain
[testsuite] What's the expected behaviour of dg-require-effective-target shared?
Hi Christophe,

we've been looking at some cases where Darwin tests fail or pass unexpectedly depending on options. It came as a surprise to see it failing a test for shared support (since it has always supported shared libs).

It's a long time ago, but in r216117 you added this to target-supports:

# Return 1 if -shared is supported, as in no warnings or errors
# emitted, 0 otherwise.

proc check_effective_target_shared { } {
    # Note that M68K has a multilib that supports -fpic but not
    # -fPIC, so we need to check both.  We test with a program that
    # requires GOT references.
    return [check_no_compiler_messages shared executable {
	extern int foo (void); extern int bar;
	int baz (void) { return foo () + bar; }
    } "-shared -fpic"]
}

The thing is that this is testing two things:

1) if the target consumes -shared -fpic without warning
2) assuming that those cause a shared lib to be made, it also tests that the target will allow a link of that to complete with undefined symbols.

So Darwin *does* support "-shared -fpic" and is very happy to make shared libraries. However, it doesn't (by default) allow undefined symbols in the link.

So my question is really about the intent of the test:

if the intent is to see if the target supports shared libs, then we should arrange for Darwin to pass - either by hardwiring it (since all Darwin versions do support shared) or by adding suitable options to suppress the error.

if the intent is to check that the target supports linking a shared lib with undefined external symbols, then perhaps we need a different test for "just supports shared libs".

===

(note, also the comment doesn't match what's actually done, but that's prob a cut & pasto).

thanks
Iain
Re: [PATCH] Add .gnu.lto_.meta section.
> On 24 Jun 2019, at 14:31, Martin Liška wrote: > > On 6/24/19 2:44 PM, Richard Biener wrote: >> On Mon, Jun 24, 2019 at 2:12 PM Martin Liška wrote: >>> >>> On 6/24/19 2:02 PM, Richard Biener wrote: On Fri, Jun 21, 2019 at 4:01 PM Martin Liška wrote: > > On 6/21/19 2:57 PM, Jan Hubicka wrote: >> This looks like good step (and please stream it in host independent >> way). I suppose all these issues can be done one-by-one. > > So there's a working patch for that. However one will see following errors > when using an older compiler or older LTO bytecode: > > $ gcc main9.o -flto > lto1: fatal error: bytecode stream in file ‘main9.o’ generated with LTO > version -25480.4493 instead of the expected 9.0 > > $ gcc main.o > lto1: internal compiler error: compressed stream: data error This is because of your change to bitfields or because with the old scheme the header with the version is compressed (is it?). >>> >>> Because currently also the header is compressed. >> >> That was it, yeah :/ Stupid decisions in the past. >> >> I guess we have to bite the bullet and do this kind of incompatible >> change, accepting >> the odd error message above. >> I'd simply avoid any layout changes in the version check range. >>> >>> Well, then we have to find out how to distinguish between compression >>> algorithms. >>> > To be honest, I would prefer the new .gnu.lto_.meta section. > Richi why is that so ugly? Because it's a change in the wrong direction and doesn't solve the issue we already have (cannot determine if a section is compressed or not). >>> >>> That's not true, the .gnu.lto_.meta section will be always uncompressed and >>> we can >>> also backport changes to older compiler that can read it and print a proper >>> error >>> message about LTO bytecode version mismatch. >> >> We can always backport changes, yes, but I don't see why we have to. > > I'm fine with the backward compatibility break. But we should also consider > lto-plugin.c > that is parsing following 2 sections: > >91 #define LTO_SECTION_PREFIX ".gnu.lto_.symtab" >92 #define LTO_SECTION_PREFIX_LEN (sizeof (LTO_SECTION_PREFIX) - 1) >93 #define OFFLOAD_SECTION ".gnu.offload_lto_.opts" >94 #define OFFLOAD_SECTION_LEN (sizeof (OFFLOAD_SECTION) - 1) > >> ELF section overhead is quite big if you have lots of small functions. >>> >>> My patch is actually shrinking space as I'm suggesting to add _one_ extra >>> ELF section >>> and remove the section header from all other LTO sections. That will save >>> space >>> for all function sections. >> >> But we want the header there to at least say if the section is >> compressed or not. >> The fact that we have so many ELF section means we have the redundant version >> info everywhere. >> >> We should have a single .gnu.lto_ section (and also get rid of those >> __gnu_lto_v1 and __gnu_lto_slim COMMON symbols - checking for >> existence of a symbol is more expensive compared to existence >> of a section). > > I like removal of the 2 aforementioned sections. To be honest I would > recommend to > add a new .gnu.lto_.meta section. We can use it instead of __gnu_lto_v1 and > we can > have a flag there instead of __gnu_lto_slim. As a second step, I'm willing to > concatenate all > > LTO_section_function_body, > LTO_section_static_initializer > > sections into a single one. That will require an index that will have to be > created. I can discuss > that with Honza as he suggested using something smarter than function names. 
I already implemented a scheme (using three sections: INDEX, NAMES, PAYLOAD) for Mach-O - since it doesn’t have unlimited section count - it works - and hardly rocket science ;) - if one were to import the tabular portion of that at the start of a section and then the variable portion as a trailer … it could all be a single section. iain > Martin > >> >> Richard. >> >>> Martin >>> Richard. > > Martin
Re: [PATCH] Deprecate -frepo option.
> On 27 Jun 2019, at 19:21, Jan Hubicka wrote: > >> >> It's useful on targets without COMDAT support. Are there any such >> that we care about at this point? >> >> If the problem is the combination with LTO, why not just prohibit that? > > The problem is that at the collect2 time we want to decide whether to > hold stderr/stdout of the linker. The issue is that we do not know yet > if we are going to LTO (because that is decided by linker plugin) or > whether we do repo files (because we look for those only after linker > failure). you could pre-scan for LTO, as is done for the collect2+non-plugin linker. (similar to the comment below, that would take some time, but probably small c.f the actual link). Iain > We can look for repo files first but that could take some time > especially for large programs (probably not too bad compared to actual > linking later) or we may require -frepo to be passed to collect2. > > Honza >> >> Jason
Re: For which gcc release is going to be foreseen the support for the Coroutines TS extension?
Hello Sebastian, > On 26 Jul 2019, at 10:19, Florian Weimer wrote: > > * Sebastian Huber: >> On 06/06/2018 08:33, Florian Weimer wrote: >>> On 06/04/2018 07:36 PM, Jonathan Wakely wrote: >>>> On 4 June 2018 at 18:32, Marco Ippolito wrote: >>>>> Hi all, >>>>> >>>>> clang and VS2017 already support the Coroutines TS extensions. >>>>> For which gcc release is going to be foreseen the support for the >>>>> Coroutines TS extension? >>>> >>>> This has been discussed recently, search the mailing list. >>>> >>>> It will be supported after somebody implements it. >>> >>> If it is in fact implementable on top of the GNU ABI. Some variants >>> of coroutines are not. >> >> it seems C++20 will contain coroutines. Are there already some plans >> to support them in GCC? > > There is <http://gcc.gnu.org/wiki/cxx-coroutines>. Iain Sandoe is > working on it. The hope is to have enough of the implementation complete to post before the end of stage1, and for potential inclusion in GCC10 (if there’s enough review/amendment time). Still plenty to do before then tho! It’s also intended to present the background and status at the Cauldron. >> I ask this so that I can plan my work to >> support it for RTEMS. For example, are there plans to build them on >> top of ucontext? > > C++ coroutines are stackless. I don't think any new low-level run-time > support will be needed. correct, C++20 coroutines and threading mechanisms are orthogonal facilities; one can use (IS C++20) coroutines on top of a threaded system or in a single-threaded environment. Two places I see them as being a go-to facility in embedded systems are: * co-operative multi-tasking UIs on single-threaded platforms. * async I/O completion by continuations, rather than callbacks. Of course, there are likely to be many of the shared cases with non-embedded - there can be quite surprising implementations where coros can be used to hide memory access latency, for example. cheers Iain
Re: GCC 9.2 Release Candidate available from gcc.gnu.org
> On 5 Aug 2019, at 17:30, Bill Seurer wrote: > > On 8/5/19 8:16 AM, Jakub Jelinek wrote: >> The first release candidate for GCC 9.2 is available from >> https://gcc.gnu.org/pub/gcc/snapshots/9.2.0-RC-20190805/ >> ftp://gcc.gnu.org/pub/gcc/snapshots/9.2.0-RC-20190805 >> and shortly its mirrors. It has been generated from SVN revision 274111. >> I have so far bootstrapped and tested the release candidate on >> x86_64-linux and i686-linux. Please test it and report any issues to >> bugzilla. >> If all goes well, I'd like to release 9.2 on Monday, August 12th. > > I bootstrapped and tested powerpc64 BE on power 7 and power 8 and powerpc64 > LE on power 8 and power 9 and all looks well. I bootstrapped and tested 274111 on i686,powerpc-darwin9, x86_64-darwin16,17,18 (posted results), notable is that for the first time (ever?) powerpc-darwin9 reports 0 Ada fails. i686-darwin10 and x86_64-darwin15 bootstrapped, but testing is not yet complete. Iain
Successful builds of GCC 9.2 on Darwin
Successful builds have been made on:

i686-darwin{9,10}, powerpc-darwin9
x86_64-darwin{10,11,12,13,14,15,16,17,18}

bootstrapped with GCC (including Ada); test results are from
https://gcc.gnu.org/ml/gcc-testresults/2019-08/msg01662.html to
https://gcc.gnu.org/ml/gcc-testresults/2019-08/msg01673.html

and x86_64-darwin18 bootstrapped with clang (no Ada); test results:
https://gcc.gnu.org/ml/gcc-testresults/2019-08/msg01674.html

NOTES

1. *-darwin8 is currently not working with any live branch or trunk; there's a hope to address this before 9.3.
2. anything earlier than darwin8 is going to require considerable work to build current branches, since the native installed tools do not meet the minimum requirements for a GCC build.
3. some of the build/tests were performed on VMs; these are noted, and test timeouts should be regarded as "not significant" there.

Iain
Re: For which gcc release is going to be foreseen the support for the Coroutines TS extension?
> On 20 Aug 2019, at 17:15, Florian Weimer wrote: > > * Richard Biener: > >> On August 20, 2019 5:19:33 PM GMT+02:00, Nathan Sidwell >> wrote: >>> On 7/26/19 6:03 AM, Iain Sandoe wrote: >>>> Hello Sebastian, >>>> >>>>> On 26 Jul 2019, at 10:19, Florian Weimer wrote: >>> >>>>> C++ coroutines are stackless. I don't think any new low-level >>> run-time >>>>> support will be needed. >>>> >>>> correct, C++20 coroutines and threading mechanisms are orthogonal >>>> facilities; one can use (IS C++20) coroutines on top of a threaded >>> system >>>> or in a single-threaded environment. >>>> >>>> Two places I see them as being a go-to facility in embedded systems >>> are: >>>> * co-operative multi-tasking UIs on single-threaded platforms. >>>> * async I/O completion by continuations, rather than callbacks. >>> >>> There are cases where the overhead of threads is too expensive. For >>> instance hiding (cache-missing) load latencies by doing other work >>> while >>> waiting -- a context switch at that point is far too expensive. >> >> But are coroutines so much lower latency (and a context switch does >> not involve cache misses on its own?). For doing useful work in this >> context CPU designers invented SMT... > > I think the idea is that you don't have to worry about synchronizing > multiple threads to reap the benefits from hardware parallelism. For > hiding memory access latency, that could be important because the > synchronization overhead could easily eat up any potential benefits. It seems to me that this is another “tool” in the “toolbox”, and as such will find use in more cases than the performance example given above. Some uses might have more to do with code clarity than performance, per se. * e.g. any code that’s written in terms of a state machine, or as a large body of callbacks, might prove to be a candidate for clearer representation as coroutines. * There’s a potentialy large reduction in held state for cases where the processing is organised as a series of transformations (as the original motivating example was a compiler where it was inconvenient to produce and consume that state between passes). * There’s clearly an industry quest for a lighter weight representation than threads and this is being driven (amongst others) by some of the largest users of massively parallel systems - so SMT is not meeting all their needs. Representatives from this group are saying that coroutines are part of the solution that they want (and backing that up by sponsoring the language development work and several compiler implementations). I don’t have visibility of their specific internal use-cases, of course, unfortunately. * Avoiding recasting problems into a form suitable for overt multi-threading * I expect (and already have one such person reporting bugs against my branch) that the embedded (no threads) space will find this a clearer way to implement some functionality (co-operative multitasking user interfaces springs to mind as noted in a previous post). I suppose that in the end we will not know how this tool is going to be used until it gets wider exposure (although there are some production uses of the LLVM implementation, it’s likely that people won’t commit heavily until it’s in the standard). 0.02GBP only .. Iain
Re: gcc vs clang for non-power-2 atomic structures
Hi Jim,

> On 23 Aug 2019, at 00:56, Jim Wilson wrote:
>
> We got a change request for the RISC-V psABI to define the atomic
> structure size and alignment. And looking at this, it turned out that
> gcc and clang are implementing this differently. Consider this
> testcase
>
> rohan:2274$ cat tmp.c
> #include <stdio.h>
> struct s { int a; int b; int c;};
> int
> main(void)
> {
>   printf("size=%ld align=%ld\n", sizeof (struct s), _Alignof(struct s));
>   printf("size=%ld align=%ld\n", sizeof (_Atomic (struct s)),
>          _Alignof(_Atomic (struct s)));
>   return 0;
> }
> rohan:2275$ gcc tmp.c
> rohan:2276$ ./a.out
> size=12 align=4
> size=12 align=4
> rohan:2277$ clang tmp.c
> rohan:2278$ ./a.out
> size=12 align=4
> size=16 align=16
> rohan:2279$
>
> This is with an x86 compiler.

A search for _Atomic in the latest (x86_64) psABI document shows no results.

> I get the same result with a RISC-V
> compiler. This is an ABI incompatibility between gcc and clang. gcc
> has code in build_qualified_type in tree.c that sets alignment for
> power-of-2 structs to the same size integer alignment, but we don't
> change alignment for non-power-of-2 structs. Clang is padding the
> size of non-power-of-2 structs to the next power-of-2 and giving them
> that alignment.

If the psABI makes no statement about what _Atomic should do, AFAIK the compiler is free to do something different (for the same entity) from the non-_Atomic version.

e.g. from 6.2.5 of n2176 (C18 draft):

  27 Further, there is the _Atomic qualifier. The presence of the _Atomic qualifier designates an atomic type. The size, representation, and alignment of an atomic type need not be the same as those of the corresponding unqualified type. Therefore, this Standard explicitly uses the phrase "atomic, qualified or unqualified type" whenever the atomic version of a type is permitted along with the other qualified versions of a type. The phrase "qualified or unqualified type", without specific mention of atomic, does not include the atomic types.

So the hole (in both cases) to be plugged is to add specification for _Atomic variants to the psABI…

… of course, it makes sense for the psABI maintainers to discuss with the compiler "vendors" what makes the best choice.

(and it's probably a significant piece of work to figure out all the interactions with __attribute__ modifiers)

> Unfortunately, I don't know who to contact on the clang side, but we
> need to have a discussion here, and we probably need to fix one of the
> compilers to match the other one, as we should not have ABI
> incompatibilities like this between gcc and clang.

The equivalent of "MAINTAINERS" in the LLVM sources for backends is "llvm/CODE_OWNERS.TXT", which says that Alex Bradbury is the code owner for the RISC-V backend.

HTH,
Iain

> The original RISC-V bug report is at
>    https://github.com/riscv/riscv-elf-psabi-doc/pull/112
> There is a pointer to a gist with a larger testcase with RISC-V results.
>
> Jim
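To show the other half of the rule mentioned above (GCC bumping power-of-two-sized structs to the matching integer alignment), here is a small extension of the testcase; it is my sketch, and the expectations in the final paragraph are what the behaviour described in this thread implies for x86-64, not something verified on every target.

/* tmp2.c -- contrast a power-of-two-sized struct with a non-power-of-two one.  */
#include <stdio.h>

struct p2  { int a; int b; };		/* 8 bytes  */
struct np2 { int a; int b; int c; };	/* 12 bytes */

int
main (void)
{
  printf ("p2:  size=%zu align=%zu  atomic: size=%zu align=%zu\n",
	  sizeof (struct p2), _Alignof (struct p2),
	  sizeof (_Atomic (struct p2)), _Alignof (_Atomic (struct p2)));
  printf ("np2: size=%zu align=%zu  atomic: size=%zu align=%zu\n",
	  sizeof (struct np2), _Alignof (struct np2),
	  sizeof (_Atomic (struct np2)), _Alignof (_Atomic (struct np2)));
  return 0;
}

With the behaviour described in the thread, GCC would report the atomic 8-byte struct with alignment 8 but leave the 12-byte one at 4/12, while clang pads the 12-byte one to 16/16 - exactly the kind of divergence the psABI would need to pin down.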
Re: gcc vs clang for non-power-2 atomic structures
> On 23 Aug 2019, at 10:35, Jonathan Wakely wrote: > > On Fri, 23 Aug 2019 at 08:21, Iain Sandoe wrote: >> >> Hi Jim, >> >>> On 23 Aug 2019, at 00:56, Jim Wilson wrote: >>> >>> We got a change request for the RISC-V psABI to define the atomic >>> structure size and alignment. And looking at this, it turned out that >>> gcc and clang are implementing this differently. Consider this >>> testcase >>> >>> rohan:2274$ cat tmp.c >>> #include >>> struct s { int a; int b; int c;}; >>> int >>> main(void) >>> { >>> printf("size=%ld align=%ld\n", sizeof (struct s), _Alignof(struct s)); >>> printf("size=%ld align=%ld\n", sizeof (_Atomic (struct s)), >>> _Alignof(_Atomic (struct s))); >>> return 0; >>> } >>> rohan:2275$ gcc tmp.c >>> rohan:2276$ ./a.out >>> size=12 align=4 >>> size=12 align=4 >>> rohan:2277$ clang tmp.c >>> rohan:2278$ ./a.out >>> size=12 align=4 >>> size=16 align=16 >>> rohan:2279$ >>> >>> This is with an x86 compiler. >> >> A search for _Atomic in the latest (x86_64) psABI document shows no results. > > See https://groups.google.com/forum/#!topic/ia32-abi/Tlu6Hs-ohPY and > the various GCC bugs linked to from that thread. > > This is a big can of worms, and GCC needs fixing (but probably not > until the ABI group decide something). I agree about the size of the can of worms. However, there doesn’t seem to be any actual issue filed on: https://github.com/hjl-tools/x86-psABI Would it help if someone did? >>> I get the same result with a RISC-V >>> compiler. This is an ABI incompatibility between gcc and clang. gcc >>> has code in build_qualified_type in tree.c that sets alignment for >>> power-of-2 structs to the same size integer alignment, but we don't >>> change alignment for non-power-of-2 structs. Clang is padding the >>> size of non-power-of-2 structs to the next power-of-2 and giving them >>> that alignment. >> >> If the psABI makes no statement about what _Atomic should do, AFAIK >> the compiler is free to do something different (for the same entity) from >> the non-_Atomic version. > > Right, but GCC and Clang should agree. So it needs to be in the psABI. absolutely, it’s the psABI that’s lacking here - the compilers (as commented by Richard Smith in a referenced thread) should not be making ABI up… >> e.g from 6.2.5 of n2176 (C18 draft) >> >>• 27 Further, there is the _Atomic qualifier. The presence of the >> _Atomic qualifier designates an atomic type. The size, representation, and >> alignment of an atomic type need not be the same as those of the >> corresponding unqualified type. Therefore, this Standard explicitly uses the >> phrase “atomic, qualified or unqualified type” whenever the atomic version >> of a type is permitted along with the other qualified versions of a type. >> The phrase “qualified or unqualified type”, without specific mention of >> atomic, does not include the atomic types. >> >> So the hole (in both cases) to be plugged is to add specification for _Atomic >> variants to the psABI…. >> >> … of course, it makes sense for the psABI maintainers to discuss with >> the compiler “vendors” what makes the best choice. >> >> (and it’s probably a significant piece of work to figure out all the >> interactions >> with __attribute__ modifiers) >> >>> Unfortunately, I don't know who to contact on the clang side, but we >>> need to have a discussion here, and we probably need to fix one of the >>> compilers to match the other one, as we should not have ABI >>> incompatibilities like this between gcc and clang. 
>> >> The equivalent of “MAINTAINERS” in the LLVM sources for backends is >> “llvm/CODE_OWNERS.TXT” which says that Alex Bradbury is the code >> owner for the RISC-V backend. > > Tim Northover and JF Bastien at Apple should probably be involved too. > > IMO GCC is broken, and the longer we take to fix it the more painful > it will be for users as there will be more code affected by the > change. I suspect that (even if this is not a solution chosen elsewhere), for Darwin, there will have to be a target hook to control this since I can’t change the ABI retrospectively for the systems already released. That is, GCC is emitting broken code on those platforms anyway (since the platform ABI is determined by what clang/llvm produces). Iain
Re: gcc vs clang for non-power-2 atomic structures
Hi Joseph, > On 23 Aug 2019, at 17:14, Joseph Myers wrote: > > On Fri, 23 Aug 2019, Iain Sandoe wrote: > >> absolutely, it’s the psABI that’s lacking here - the compilers (as commented >> by Richard Smith in a referenced thread) should not be making ABI up… > > With over 50 target architectures supported in GCC, most of which probably > don't have anyone maintaining a psABI for them, you don't get support for > new language features that require an ABI without making some reasonable > default choice that allows the features to work everywhere and then > letting architecture maintainers liaise with ABI maintainers in the case > where such exist. yes. That’s perfectly reasonable However, it’s more than a little disappointing that X86, for which I would hope that the psABI _was_ considered supported, remains silent on the issue so long after it arose (I guess the interested parties with $ need to sponsor some work to update it). > (ABIs for atomics have the further tricky issue that there can be multiple > choices for how to map the memory model onto a given architecture; see > <https://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html>. So it's not > just a matter of type sizes and alignment.) Indeed, I have tangled a bit with that trying to adapt libatomic to be better behaved on Darwin. > but where psABIs > specify something we do of course need to follow it (and the choice may be > OS-specific, not just processor-specific, where the ABI is defined by the > default compiler for some OS). agreed .. it seems highly likely for X86 as things stand - since there’s a bunch of things already out there with different ABIs baked in. > > Note: it's likely some front-end code, and stdatomic.h, might have to > change to handle the possibility of atomic types being larger than > non-atomic, as those end up using type-generic atomic load / store > built-in functions, and those certainly expect pointers to arguments of > the same size (when one argument is the atomic type and one non-atomic). It seems to me that whatever might be chosen for the definitive psABI / platform (i.e. arch + OS + version) going forward, we will need to support what has been emitted in the past. So a recommendation for suitable FE hooks (and preferably a way to make the C11 atomic match the std::atomic, even if this is “only” a QoI issue), would be worth addressing. thanks Iain > > -- > Joseph S. Myers > jos...@codesourcery.com
Re: GCC wwwdocs move to git done
Jonathan Wakely wrote: On Wed, 9 Oct 2019 at 01:28, Joseph Myers wrote: I've done the move of GCC wwwdocs to git (using the previously posted and discussed scripts), including setting up the post-receive hook to do the same things previously covered by the old CVS hooks, and minimal updates to the web pages dealing with the CVS setup for wwwdocs. Thanks, Joseph. +1 I would like to be able to preview changes to the website by using it from a local webserver. I realise that individual pages can be viewed in a browser / validated by uploading - but it would be nice to check connectivity etc. At the moment, I can’t identify the “mhc” program that is used in preparing the text (and too many unrelated hits from searches). Is there any other “gotcha” known to prevent doing this? thanks iain
Re: GCC wwwdocs move to git done
Christopher Faylor wrote: On Wed, Oct 09, 2019 at 01:25:30PM +0100, Iain Sandoe wrote: Jonathan Wakely wrote: On Wed, 9 Oct 2019 at 01:28, Joseph Myers wrote: I've done the move of GCC wwwdocs to git (using the previously posted and discussed scripts), including setting up the post-receive hook to do the same things previously covered by the old CVS hooks, and minimal updates to the web pages dealing with the CVS setup for wwwdocs. Thanks, Joseph. +1 I would like to be able to preview changes to the website by using it from a local webserver. I realise that individual pages can be viewed in a browser / validated by uploading - but it would be nice to check connectivity etc. At the moment, I can’t identify the “mhc” program that is used in preparing the text (and too many unrelated hits from searches). I think it's the "metahtml" processor: https://ftp.gnu.org/gnu/metahtml/ The binary is 20 years old and, somehow, the source code used to build it seems to have disappeared. Yeah - https://www.gnu.org/software/metahtml/ but the cvs co seems to hang on a lock :( something to investigate later... Iain
Re: GCC selftest improvements
Jeff Law wrote: > On 10/28/19 2:27 PM, Segher Boessenkool wrote: >> On Mon, Oct 28, 2019 at 01:40:03PM -0600, Jeff Law wrote: >>> On 10/25/19 6:01 PM, Gabriel Dos Reis wrote: Jason, Jonathan - is the situation on the terrain really that dire that C++11 (or C++14) isn't at all available for platforms that GCC is bootstrapped from? >>> The argument that I'd make is that's relatively uncommon (I know, I know >>> AIX) that bootstrapping in those environments may well require first >>> building something like gcc-9. >>> >>> I'd really like to see us move to C++11 or beyond. Sadly, I don't think >>> we have any good mechanism for making this kind of technical decision >>> when there isn't consensus. >> >> Which GCC version will be required to work as bootstrap compiler? Will >> 4.8.5 be enough? > I'd say gcc-9. What would we gain by making it 4.8 or anything else > that old? We’d have to use something older than 9 on earl(ier) Darwin since 9 will not bootstrap with the system-provided tools. ISTM that using a well-baked stable closed branch would be reasonable? (so 7.5 or earlier, assuming that the decision is made after 7.5 rolls) Iain
Re: Commit messages and the move to git
Segher Boessenkool wrote: > On Tue, Nov 05, 2019 at 11:07:05AM +, Jonathan Wakely wrote: >> On Mon, 4 Nov 2019 at 17:42, Joseph Myers wrote: >>> I've been using git-style commit messages in GCC for the past five years. >> >> I think I only started four years ago :-) > > I am r210190 Wed May 7 22:00:58 2014 + > Joseph is r214526 Tue Aug 26 17:06:31 2014 + > You are r218698 Sat Dec 13 20:44:06 2014 + > > Anyone else playing? ;-) I am - but probably for much less time than the others … I think I started around the time Joseph observed it would be a good idea to make the eventual commit messages nicer. Iain > > > Segher
[Darwin, testsuite, committed] Fix framework-1.c on later Darwin.
The test works by checking that a known framework path is accessible when the '-F' option is given. We need to find a framework path that exists across a range of Darwin versions, and is parseable by GCC. This adjusts the test to use a header path that exists and is parseable from Darwin9 through Darwin19. tested with Darwin9 through Darwin19 SDKs on x86-64-darwin16, applied to mainline, thanks Iain gcc/testsuite/ChangeLog: 2019-11-06 Iain Sandoe * gcc.dg/framework-1.c: Adjust test header path. diff --git a/gcc/testsuite/gcc.dg/framework-1.c b/gcc/testsuite/gcc.dg/framework-1.c index 7e68683..de4adc3 100644 --- a/gcc/testsuite/gcc.dg/framework-1.c +++ b/gcc/testsuite/gcc.dg/framework-1.c @@ -1,4 +1,4 @@ /* { dg-do compile { target *-*-darwin* } } */ /* { dg-options "-F." } */ -#include +#include
Re: Proposal for the transition timetable for the move to GIT
Richard Biener wrote: On Fri, Jan 10, 2020 at 10:49 AM Richard Earnshaw (lists) wrote: On 10/01/2020 07:33, Maxim Kuvyrkov wrote: On Jan 9, 2020, at 5:38 AM, Segher Boessenkool wrote: On Wed, Jan 08, 2020 at 11:34:32PM +, Joseph Myers wrote: As noted on overseers, once Saturday's DATESTAMP update has run at 00:16 UTC on Saturday, I intend to add a README.MOVED_TO_GIT file on SVN trunk and change the SVN hooks to make SVN readonly, then disable gccadmin's cron jobs that build snapshots and update online documentation until they are ready to run with the git repository. Once the existing git mirror has picked up the last changes I'll make that read-only and disable that cron job as well, and start the conversion process with a view to having the converted repository in place this weekend (it could either be made writable as soon as I think it's ready, or left read-only until people have had time to do any final checks on Monday). Before then, I'll work on hooks, documentation and maintainer-scripts updates. Where and when and by who was it decided to use this conversion? Joseph, please point to message on gcc@ mailing list that expresses consensus of GCC community to use reposurgeon conversion. Otherwise, it is not appropriate to substitute one's opinion for community consensus. I've gone back through this thread (if I've missed, or misrepresented, anybody who's expressed an opinion I apologize now). Segher Boessenkool "If Joseph and Richard agree a candidate is good, then I will agree as well. All that can be left is nit-picking, and that is not worth it anyway:" Jeff Law "When Richard and I spoke we generally agreed that we felt a reposurgeon conversion, if it could be made to work was the preferred solution, followed by Maxim's approach and lastly the existing git-svn mirror." Richard Earnshaw (lists) FWIW, I now support using reposurgeon for the final conversion. And, of course, I'm taking Joseph's opinion as read :-) So I don't see any clear dissent and most folks just want to get this done. Just to chime in I also just want to get it done (well, I can handle SVN as well :P). I trust Joseph, too, but then from my POV anything not worse than the current mirror works for me. Thanks to Maxim anyway for all the work - without that we'd not switch in 10 other years... Btw, "consensus" among the quiet doesn't usually work and "consensus" among the most vocal isn't really "consensus". I think GCC (and FOSS) works by giving power to those who actually do the work. Doesn't make it easier when there are two, of course ;) Thanks to all those who’ve put (a lot of) effort into doing this work and those who’ve challenged and tested the conversions; for my part, I am also happy to take Joseph’s recommendation. One minor nit (and accepted that this might be too late). Mail commit messages like this: [gcc-reposurgeon-8(refs/users/jsm28/heads/test-branch)] Test git hooks interaction with Bugzilla. seem to have a title stretched by redundant information; at least “users/jsm28/test-branch” would seem to contain all the necessary information. Will commits in the user namespace appear on the mailing list in the end? thanks again Iain
Re: What is the status of macOS PowerPC support?
> On 26 Jan 2017, at 10:58, Jonathan Wakely wrote: > > On 25 January 2017 at 22:30, Segher Boessenkool wrote: >> On Wed, Jan 25, 2017 at 04:36:13PM +0100, FX wrote: >>> I am trying to determine what is the status of the powerpc-apple-darwin >>> target for GCC. The last released version of GCC for which a successful >>> build is reported is 4.9.1 >>> (https://gcc.gnu.org/ml/gcc-testresults/2014-07/msg02093.html), and the >>> last gcc-testresults post I could find was in April 2015 >>> (https://gcc.gnu.org/ml/gcc-testresults/2015-04/msg01438.html), for the GCC >>> 5 branch. >>> >>> Do GCC 5, GCC 6 and current trunk support powerpc-apple-darwin? The target >>> code is still there, apparently, and the compiler is not on the “obsolete” >>> list. >> >> It is actively being worked on (the latest commit is just over a month >> old it seems). It mostly works, too. It is in better shape than many >> other targets, I would say. > > Less than a month even: > https://gcc.gnu.org/ml/gcc-patches/2017-01/msg00553.html OK I have taken a while to reply to this, so that I could put some factual input. * The Darwin port(s) [x86 and ppc] are primarily maintained on a volunteer basis, which means progress is slow and sporadic. * For my part, I tend to prioritise fixes that will work across the whole Darwin range (which, in practical terms, means powerpc,i686-darwin9 … x86_64-darwin16). [Corresponding to OS X 10.5 => 10.12] * I usually test : x86_64-darwin1x (currently 15), powerpc-darwin9, i686-darwin10 (and sometimes x86_64-darwin10). * With lots of help from Segher and Bill (Schmidt), I do try to keep the PowerPC port afloat and actually 5.x/6.x is not in bad shape - it’s possible to build powerpc-darwin9 toolchains capable of building ~ 120 OSS projects including some of the more demanding ones (e.g. LLVM). 5.4.0: https://gcc.gnu.org/ml/gcc-testresults/2017-01/msg02969.html 6.3.0: https://gcc.gnu.org/ml/gcc-testresults/2017-01/msg02970.html trunk: * There are a small number of bootstrap fixes needed to trunk (attached to the test-results, for interest). * Sadly, trunk seems to have regressed for powerpc-darwin9 significantly in the last few weeks, I had clean Ada tests (at least for revision 242913, late November, and likely later), but don’t have time to track this down right now. https://gcc.gnu.org/ml/gcc-testresults/2017-01/msg02971.html Hopefully that helps clarify where we’re at presently, Iain
Re: Heads-Up: early LTO debug to land, breaking Mach-O / [X]COFF
Hi Richard, > On 12 May 2017, at 10:24, Richard Biener wrote: > > > This is a heads-up that I am in the process of implementing the last > of Jasons review comments on the dwarf2out parts of early LTO debug > support. I hope to post final patches early next week after thoroughly > re-testing everything. > > Note that Mach-O and [X]COFF support in the simple-object machinery > is still missing for the early LTO debug feature so I am going to > break LTOing with DWARF debuginfo on Darwin and Windows (CCing > maintainers). Mach-O support has been worked on a bit by Iain > and myself but the simple-object piece is still missing. Still on my TODO, and intending to do it for Mach-O - but rather short of cycles (if non-LTO is unaffected at least we have some breathing space). > A workaround is to use stabs on these targets with LTO. stabs isn’t going to work (well, if at all) on modern Darwin... > DWARF part: https://gcc.gnu.org/ml/gcc-patches/2016-11/msg01023.html > simple-object part: > https://gcc.gnu.org/ml/gcc-patches/2016-10/msg01733.html > > both still apply with some fuzz. I have a branch somewhere, will rebase - I’ve been getting stuff up to speed this week, Iain
Announcement : An AArch64 (Arm64) Darwin port is planned for GCC12
Folks, As many of you know, Apple has now released an AArch64-based version of macOS and desktop/laptop platforms using the ‘M1’ chip to support it. This is in addition to the existing iOS mobile platforms (but shares some of their constraints). There is considerable interest in the user-base for a GCC port (starting with https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96168) - and, of great kudos to the gfortran team, one of the main drivers is folks using Fortran. Fortunately, I was able to obtain access to one of the DTKs, courtesy of the OSS folks, and using that managed to draft an initial attempt at the port last year (however, nowhere near ready for presentation in GCC11). Nevertheless (as an aside) despite being a prototype, the port is in use by many via homebrew, macports or self-builds - which has shaken out some of the fixable bugs. The work done in the prototype identified three issues that could not be coded around without work on generic parts of the compiler. I am very happy to say that two of our colleagues, Andrew Burgess and Maxim Blinov (both from embecosm) have joined me in drafting a postable version of the port and we are seeking sponsorship to finish this in the GCC12 timeframe. Maxim has a lightning talk on the GNU tools track at LPC (right after the steering committee session) that will focus on the two generic issues that we’re tackling (1 and 2 below). Here is a short summary of the issues and proposed solutions (detailed discussion of any of the parts below would better be in new threads). - 1. GCC’s default model for nested functions uses a trampoline on the stack, requiring an executable stack. Executable stack is prohibited by the security model for Arm64 macOS. — We cannot punt on this because, in addition to the GCC extension to C to provide nested functions, the facility is used by Fortran (and Ada, of course), and many real-world examples fail without it (as reported to the prototype issues tracker). — the prototype has a hacked implementation of the descriptor-based solution proposed some time ago for Ada (that uses a reserved bit in the address). This is, of course, completely unacceptable for the final port - and does not work when there are callbacks to system functions. Andrew Burgess is pursuing a solution which is essentially based on our reasoning of the problem and the discussion in the descriptor thread here: https://gcc.gnu.org/legacy-ml/gcc-patches/2018-12/msg00066.html The mechanism would, of course, be opt-in (but is expected to be potentially useful to other OSs where the security model would require a non-executable stack). The summary is to allocate a memory area to contain the trampolines and to allocate (and free) these with the nesting of function pointer uses. It is allowed for Arm64 macOS to have such a section of memory (granted by permissions on the executable). Such an area cannot be both writable and executable at the same time, so we have to consider the implications of switching when an allocation or free occurs. This means modifying the nested function code to wrap allocations of trampolines in a cleanup. The current design also uses builtin functions to implement the actual management of the trampoline page(s); these would be part of libgcc. A note in passing: Apple’s implementation of libFFI and libObjC makes use of a similar technique - but the trampoline areas are placed in a code-signed SO (so that its authenticity is determined at load-time).
This isn’t a suitable mechanism for GCC, since it would involve somehow getting a codesigned SO distributed with the OS. We are also assured that JIT code (which is more-or-less what this is) will be allowed for the foreseeable future in macOS. 2. The darwinpcs (variant of the AAPCS64) has a lowering for function arguments that places them differently for ‘normal’ and ‘variadic’ calls. — the prototype has an outrageous hack to allow it to function. Maxim Blinov is working on a proper solution to this, thus: Many ports go to quite some lengths to track their register and stack use via the cumulative args mechanism. However, when the lowering is done to RTL for calls, the code there assumes that the layout of a stack-placed argument will be the same for named and unnamed cases. For the darwinpcs, named arguments are passed naturally-aligned on the stack (with necessary padding) - but unnamed arguments are passed word-aligned. The current proposed solution is to extend the use of the cumulative args mechanism to provide callbacks that allow the computed layout in the cum. args to be used when placing arguments on the stack. 3. GCC's current PCH model requires that we load the compiler executable at the same address each time; this is prohibited by the security model for Arm64 macOS, which does not allow non-PIE executables. — at present, there is no solution proposed for this and we will initially, at le
Re: libgfortran.so SONAME and powerpc64le-linux ABI changes
> On 8 Oct 2021, at 07:35, Thomas Koenig via Fortran > wrote: > > > On 07.10.21 17:33, Jakub Jelinek wrote: >>> It will also be a compatibility issue if users have code compiled on a LE >>> system with GCC 11 and earlier with KIND=16, it will not link with GCC 12. >> libgfortran ABI changed multiple times in the past already, e.g. the >> so.1 -> so.2 transition in 4.2 >> so.2 -> so.3 transition in 4.3 >> so.3 -> so.4 transition in 7 >> so.4 -> so.5 transition in 8 >> and users have coped. > > Yes, and it has always been a hassle for users, and we've been > criticized for it. > > This is currently a change which brings users on non-POWER-systems > (the vast majority) all pain and no gain. If this cannot be > avoided, I would at least try to fit in as much of other improvements > as there are possible. If one wanted to prioritize library SO name stability - then, perhaps, the approach Jonathan mentioned has been used for libstdc++ (add new symbols for ieee128 with a different mangling to the existing r/c_16 ..) would be preferable (the FE then has to choose the relevant symbol/ mangling depending on target). .. perhaps I missed where that idea was already ruled out (in which case sorry for the noise). Iain > > There's a PR for it somewhere, but I can think of three areas, none > of the small, and all require an ABI change: > > a) Get PDTs right (Paul?) > b) Make file descriptors conform to the C interop version > c) Remove the run-time parsing of I/O arguments and > replace them with a bit field. > > What I mean by the last one is that > > WRITE (unit,'(A)',ADVANCE="NO") > > we currently parse the "NO" at runtime, for every statement > execution. What we could be doing instead is to have > > dt_parm.0.advance = __gfortran_evaluate_yesno ("NO") > > where the latter function can be simplified at compile-time. > > We should strive to break the ABI as few times as possible. > > Best regards > > Thomas
Re: libgfortran.so SONAME and powerpc64le-linux ABI changes
Hi Thomas, recognising that this is complex - the intent here is to see if there are ways to partition the problem (where the pain falls does depend on the choices made). perhaps: *A library (interface, name) *B compiler internals *C user-facing changes > On 8 Oct 2021, at 17:26, Thomas Koenig wrote: > >> If one wanted to prioritize library SO name stability - then, perhaps, the >> approach Jonathan mentioned has been used for libstdc++ (add new >> symbols for ieee128 with a different mangling to the existing r/c_16 ..) >> would be preferable (the FE then has to choose the relevant symbol/ >> mangling depending on target). (A) the points here ^^ are: 1/ the SO name could be left as it is 2/ a target that defaulted to QP routines would still (perhaps under some command line flag be able to use the older implementation). I think both of those could be very helpful to end-users… > That's not all that would have to be changed. > Consider > > write (*,*) 1.0_16 > end program > > which is translated (using -fdump-tree-original) to > > >_gfortran_st_write (&dt_parm.0); >{ > static real(kind=16) C.3873 = 1.0e+0; > > _gfortran_transfer_real128_write (&dt_parm.0, &C.3873, 16); >} >_gfortran_st_write_done (&dt_parm.0); > > so we actually pass a separate kind number as well (why, I'm not sure). > We would have to go through libgfortran with a fine comb to find all > the occurrences. Probably some m4 hackery in iparm.m4 and ifunction.m4. > So, doable from the library side, if some work. (B) This is the second area of interest, the fact that changes in the compiler internals would be needed - and those take the time of the volunteers to implement (believe me, I am painfully aware of how that pressure falls). > Things get interesting for user code, calling a routine compiled > for double double with newer IEEE QP will result in breakage. That would not happen with the proposal above, since the library would have different entry points for the two formats. > We cannot use the KIND number to differentiate, because we must > assume that people have used KIND=16 and selected_real_kind(30) > interchangably, and we certainly do not want to nail people to > the old double double precision on hardware for which IEEE QP > is available. you don’t *have* to use the KIND number to differentiate to the library or the compiler (although some alternate, more flexible, token would have to be invented). (C) It’s the mapping between that internal token and the user’s view of the world that needs to be defined in terms of what the combination of platform and command line flags implies to the treatment of KIND=NN and selected_real_kind(). > So, KIND=15 for IEEE QP is out. (C) I must confess this kind of change is where things seem very tricky to me. changing how the language represents things seems to be something that would benefit from agreement between compiler vendors > It's not an easy problem, unfortunately. no. it is not. Iain
Re: libgfortran.so SONAME and powerpc64le-linux ABI changes
> On 8 Oct 2021, at 23:55, Thomas Koenig via Gcc wrote: > > > Hi Iain, > >>> Things get interesting for user code, calling a routine compiled >>> for double double with newer IEEE QP will result in breakage. >> That would not happen with the proposal above, since the library would >> have different entry points for the two formats. > > I meant the case where the user writes, with an old, KIND=16 is double > double compiler, > > subroutine foo(a) >real(kind=16) :: a >a = a + 1._16 > end subroutine foo > > and puts it in a library or an old object file, and in new code with an > IEEE QP compiler calls that with > > real(kind=16) :: a > a = 2._16 > call foo(a) > print *,a > > this will result in silent generation of garbage values, since Fortran > does not mangle the function name based on it types. For both cases, the > subroutine will be called foo_ (or MOD..._foo). hmm, well I thought about that case, but … isn’t this “pilot error”? if one compiles different parts of a project with incompatible command line options… … or, say, compile with -mavx512 and then try to run code on hardware without such a vector unit? Getting wrong answers silently can likely be done with other command line option mismatches. Iain > There is no choice - we need to make object code compiled by the user > incompatible between the old and the new format on the systems where > we make the switch. > > This is starting to look like a can of worms from Pandora's box, > if you pardon my mixed metaphors. > > Best regards > > Thomas
Re: libgfortran.so SONAME and powerpc64le-linux ABI changes
> On 9 Oct 2021, at 10:11, Thomas Koenig wrote: > > > On 09.10.21 01:18, Iain Sandoe wrote: >>> I meant the case where the user writes, with an old, KIND=16 is double >>> double compiler, >>> >>> subroutine foo(a) >>>real(kind=16) :: a >>>a = a + 1._16 >>> end subroutine foo >>> >>> and puts it in a library or an old object file, and in new code with an >>> IEEE QP compiler calls that with >>> >>> real(kind=16) :: a >>> a = 2._16 >>> call foo(a) >>> print *,a >>> >>> this will result in silent generation of garbage values, since Fortran >>> does not mangle the function name based on it types. For both cases, the >>> subroutine will be called foo_ (or MOD..._foo). >> hmm, well I thought about that case, but … isn’t this “pilot error”? >> if one compiles different parts of a project with incompatible command line >> options… >> … or, say, compile with -mavx512 and then try to run code on hardware without >> such a vector unit? >> Getting wrong answers silently can likely be done with other command line >> option mismatches. > > Again, it depends. > > What I was thinking about what a scenario where we do not change the > SONAME on POWER and rely on name mangling to get to the correct version > of a libgfortran library function. That could work, but it would not > work for user procedures. What I’m missing is why it has to. IF the user wants to use old (or not-owned) code compiled for double-double, then she must select a command-line option to use that on Power(New). Else, the user recompiles all the code in her project to use the new shiny QP. I doubt there’s a way for this to proceed in a way that a user of Power (New) can avoid having to think it through - a new library SO name won’t help them with the interop with their own (or not owned) code. > I have thought of mangling the name of all user Fortran procedures > which contain a reference to an IEEE QP in their argument list, like > _foo%QP, but that would fall down for C interop. So, no luck there. agreed, I did the same thought exercise. > So, a new SONAME at least on POWER is mandatory, I think. > > The question is still if we can avoid a new SONAME for >99% of our users > for no gain at all for them. Is there a possibility of aliasing the > SONAME somehow (grasping at straws here)? > > Best regards > > Thomas
Help with an ABI peculiarity
Hi Folks, In the aarch64 Darwin ABI we have an unusual (OK, several unusual) feature of the calling convention. When an argument is passed *in a register* and it is integral and less than SI it is promoted (with appropriate signedness) to SI. This applies only when the function parm is named. When the same argument would be placed on the stack (i.e. we ran out of registers) - it occupies its natural size, and is naturally aligned (so, for instance, 3 QI values could be passed in 3 registers - promoted to SI .. or packed into three adjacent bytes on the stack).. The key is that we need to know that the argument will be placed in a register before we decide whether to promote it. (similarly, the promotion is not done in the callee for the in-register case). I am trying to figure out where to implement this. * the code that (in regular cases) decides on such promotions is called _before_ we call target’s function_arg. * OVERRIDE_ABI_FORMAT seems to be called too early (we don’t have enough information on the function - to decide to set the PARM passed-as type). I’ve experimented with various schemes - specifically that tm.function_arg can alter the mode of the register in the appropriate cases, and then calls.c can act on the case that the mode has been changed by that callback. It seems probable that this approach can be made non-invasive - but... ... if someone can point me at a better solution - I’m interested. thanks Iain
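(To make the rule above concrete, a pair of hypothetical declarations; these are illustration only, not taken from any ABI document, and assume the eight integer argument registers mentioned later in the thread.)

  /* All three 'char' arguments fit in integer registers, so the caller
     promotes each one (with its signedness) to SImode.  */
  extern void three_in_regs (char a, char b, char c);

  /* Here the eight 'long' arguments use up the integer registers, so the
     three 'char' arguments overflow to the stack, where each occupies a
     single, naturally-aligned byte with no promotion.  */
  extern void three_on_stack (long r0, long r1, long r2, long r3,
                              long r4, long r5, long r6, long r7,
                              char a, char b, char c);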
Re: Help with an ABI peculiarity
> On 10 Jan 2022, at 10:46, Richard Sandiford wrote: > > Iain Sandoe writes: >> Hi Folks, >> >> In the aarch64 Darwin ABI we have an unusual (OK, several unusual) feature >> of the calling convention. >> >> When an argument is passed *in a register* and it is integral and less than >> SI it is promoted (with appropriate signedness) to SI. This applies when >> the function parm is named only. >> >> When the same argument would be placed on the stack (i.e. we ran out of >> registers) - it occupies its natural size, and is naturally aligned (so, for >> instance, 3 QI values could be passed as 3 registers - promoted to SI .. or >> packed into three adjacent bytes on the stack).. >> >> The key is that we need to know that the argument will be placed in a >> register before we decide whether to promote it. >> (similarly, the promotion is not done in the callee for the in-register >> case). >> >> I am trying to figure out where to implement this. >> >> * the code that (in regular cases) decides on such promotions is called >> _before_ we call target’s function_arg. >> >> * OVERRIDE_ABI_FORMAT seems to be called too early (we don’t have enough >> information on the function - to decide to set the PARM passed-as type). >> >> I’ve experimented with various schemes - specifically that tm.function_arg >> can alter the mode of the register in the appropriate cases, and then >> calls.c can act on the case that the mode has been changed by that callback. >> >> It seems probable that this approach can be made non-invasive - but... >> ... if someone can point me at a better solution - I’m interested. > > I agree there doesn't seem to be an out-of-the-box way of doing this. > I'm not sure about having two different ways of specifying promotion > though. (For one thing, it should be possible to query promotion > without generating “garbage” rtl.) In this case, it does not appear to be possible to do that without the cumulative args info .. so your next point is the logical design. > An alternative might be to make promote_function_arg a “proper” > ABI hook, taking a cumulative_args_t and a function_arg_info. > Perhaps the return case should become a separate hook at the > same time. > > That would probably require more extensive changes than just > updating the call sites, and I haven't really checked how much > work it would be, but hopefully it wouldn't be too bad. > > The new hook would still be called before function_arg, but that > should no longer be a problem, since the new hook arguments would > give the target the information it needs to decide whether the > argument is passed in registers. Yeah, this was my next port of call (I have looked at it ~10 times and then decided “not today, maybe there’s a simpler way”). thanks Iain
Re: Help with an ABI peculiarity
Hi Florian, > On 10 Jan 2022, at 08:38, Florian Weimer wrote: > > * Jeff Law via Gcc: > >> Most targets these days use registers for parameter passing and >> obviously we can run out of registers on all of them. The key >> property is the size/alignment of the argument differs depending on if >> it's pass in a register (get promoted) or passed in memory (not >> promoted). I'm not immediately aware of another ABI with that >> feature. Though I haven't really gone looking. > > I think what AArch64 Darwin does is not compatible with a GCC extension > that allows calling functions defined with a prototype without it (for > pre-ISO-C compatibility). AFAIU the implementation: In the case that a call is built and no prototype is available, the assumption is that all parms are named. The promotion is then done according to the C promotion rules. [for the number of args that can be passed in int regs] the callee will happen to observe the same rules in this case. It will, however, break once we overflow the number of int regs.. :/ The case that is fundamentally broken from scratch is of a variadic function called without a prototype - since the aarch64-darwin ABI places unnamed parms differently. So the absence of a prototype causes us to place all args as if they were named. -Wmissing-prototypes and -Wstrict-prototypes would wisely be promoted to errors for this platform (the ABI is obviously not up for change, since it’s already on millions of devices). > Given that, anyone defining an ABI in > parallel with a GCC implementation probably has paused, reconsidered > what they were doing, My guess is that this step was omitted - i.e. the port was designed in the LLVM framework. I can raise a query with the ABI owners, I guess. > and adjusted the ABI for K&R compatibility. FWIW, we bootstrap successfully including the K&R code in intl/ Given we have 8 int regs available, probably many calls will work .. As of now, I must assume that what is broken by the cases above will remain broken, and I just need to find a way to implement the cases that will work (i.e. when proper prototypes are available) thanks Iain
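(A sketch of the "variadic function called without a prototype" case described above; the code is made up purely to show where the breakage comes from.)

  /* Defined elsewhere as a variadic function:
     int fmt_all (const char *fmt, ...);  */

  /* No prototype is visible here, so the caller lays out every argument
     as if it were named.  Under an ABI that places named and unnamed
     arguments differently, the callee looks for the unnamed arguments in
     a different place, and the call silently goes wrong once the two
     layouts diverge.  */
  int fmt_all ();

  int caller (void)
  {
    return fmt_all ("%d %f", 1, 2.0);
  }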
Re: Many analyzer failures on non-Linux system (x86_64-apple-darwin)
Hi FX, > On 15 Jan 2022, at 14:19, FX via Gcc wrote: > >> The purpose of these asm tests is to verify that the analyzer doesn't >> get confused by various inline assembler directives used in the source >> of the Linux kernel. So in theory they ought to work on any host, with >> a gcc configured for a suitable target. >> >> These tests are marked with "dg-do assemble" directives, which I'd >> hoped would mean it uses -S for the tests (to make a .s file), but >> looking at a log locally, it appears to be using -c (to make a .o >> file), so maybe that's what's going wrong for you as well? > > The tests even compiled with -S still fail: I think the test should be “dg-do compile” to stop at assembler output … .. the stuff below indicates it is still trying to assemble the .s file. > > spawn -ignore SIGHUP /Users/fx/ibin/gcc/xgcc -B/Users/fx/ibin/gcc/ > exceptions_enabled42475.cc -fdiagnostics-plain-output -S -o excep > tions_enabled42475.s > FAIL: gcc.dg/analyzer/torture/asm-x86-linux-cpuid-paravirt-1.c -O1 (test > for excess errors) > Excess errors: > /Users/fx/gcc/gcc/testsuite/gcc.dg/analyzer/torture/asm-x86-linux-cpuid-paravirt-1.c:27:3: > warning: 'asm' operand 6 probably does not match constraints > /Users/fx/gcc/gcc/testsuite/gcc.dg/analyzer/torture/asm-x86-linux-cpuid-paravirt-1.c:27:3: > error: impossible constraint in 'asm' > > It’s the same for the other four. > > > gcc.dg/analyzer/asm-x86-lp64-1.c is slightly different, there it’s an > assembler error: > > > /var/folders/_8/7ft0tbns6_l87s21n4s_1sc8gn/T//cc4b3ybm.s:160:20: > error:unexpected token in '.section' directive >.pushsection .text > ^ > /var/folders/_8/7ft0tbns6_l87s21n4s_1sc8gn/T//cc4b3ybm.s:162:2: error: > unknown directive >.type add_asm, @function >^ > /var/folders/_8/7ft0tbns6_l87s21n4s_1sc8gn/T//cc4b3ybm.s:167:13: error: > .popsection without corresponding .pushsection >.popsection These ^^ are ELF-isms***, so will not work on Darwin (but if the test does not need to assemble, then that is academic). ## Builtin-related failures Those four cases fail: gcc.dg/analyzer/data-model-1.c gcc.dg/analyzer/pr103526.c gcc.dg/analyzer/taint-size-1.c gcc.dg/analyzer/write-to-string-literal-1.c >> >> Can you file a bug about this and attach the preprocessed source from >> the test (using -E). > > Done, it is https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104042 I made a comment about how we might work around this for Darwin - but OTOH, perhaps they should work with _FORTIFY_SOURCE != 0 Iain *** no particular reason why Darwin could not have push/pop section but that’s not implemented in either cctools or LLVM-based assemblers for mach-o at present.
Re: Help with an ABI peculiarity
Hi Richard, > On 20 Jan 2022, at 22:32, Richard Sandiford wrote: > > Iain Sandoe writes: >>> On 10 Jan 2022, at 10:46, Richard Sandiford wrote: >>> An alternative might be to make promote_function_arg a “proper” >>> ABI hook, taking a cumulative_args_t and a function_arg_info. >>> Perhaps the return case should become a separate hook at the >>> same time. >>> >>> That would probably require more extensive changes than just >>> updating the call sites, and I haven't really checked how much >>> work it would be, but hopefully it wouldn't be too bad. >>> >>> The new hook would still be called before function_arg, but that >>> should no longer be a problem, since the new hook arguments would >>> give the target the information it needs to decide whether the >>> argument is passed in registers. >> >> Yeah, this was my next port of call (I have looked at it ~10 times and then >> decided “not today, maybe there’s a simpler way”). … and I did not have a chance to look at this in the meantime … > BTW, finally catching up on old email, I see this is essentially also > the approach that Maxim was taking with the TARGET_FUNCTION_ARG_BOUNDARY > patches. What's the situation with those? I have the patches plus amendments to make use of their new functionality on the development branch, which is actually in pretty good shape (not much difference in testsuite results from other Darwin sub-ports). Maxim and I need to discuss amending the TARGET_FUNCTION_ARG_BOUNDARY changes to account for Richard (B)’s comments. Likewise, I need to tweak the support for heap allocation of nested function trampolines to account for review comments. As always, it’s a question of fitting everything in… thanks Iain
Re: -stdlib=libc++?
Hi Shivam, > On 2 Apr 2022, at 06:57, Shivam Gupta wrote: > > I saw your last year's mail for the same topic on the GCC mailing list > -https://gcc.gnu.org/pipermail/gcc/2020-March/000230.html. The patch was applied to GCC-11 (so is available on the GCC-11 branch and will be on GCC-12 when that is released). > > I tried today but this option is still not available. The option has to be configured when the compiler is built; that also means that you have to install (and point the configure to) a suitable set of libc++ headers from the LLVM project (e.g. there is a set here: https://github.com/iains/llvm-project/tree/9.0.1-gcc-stdlib). Generally, GCC is very compatible with the libc++ headers (the changes I made on that branch were mostly to deal with being in std:: for GCC and std::experimental:: for LLVM-9). For LLVM libc++ earlier than 9 there is a missing symbol that GCC uses - but that can be worked around too. There have been some changes in more recent (in particular, LLVM-14/main) libc++ that should make it more compatible. Of course, you should pick a version of the libc++ headers that matches the version used on your system (9 was used for quite a long time, but recent xcode headers are newer). Given that this involves cross-project sources and choosing a suitable set, probably it is a job for the distributions (e.g. homebrew, macports etc) to arrange or, for self-built compilers, following the general comments above. FWIW, I have used this to build quite a few OSS projects on a number of Darwin versions (hence the comment about GCC being very compatible with libc++). thanks, Iain.
Re: -stdlib=libc++?
Hi Shivam, > On 2 Apr 2022, at 17:48, Shivam Gupta wrote: > > May I ask why we need to specify --with-gxx-libcxx-include-dir= at > compile/configure time of GCC? The libc++ headers are not part of a base system install (on Darwin they are part of either Xcode or Command Line Tools installations). On other platforms, they will be an optional install. It seems unhelpful to enable an option that will not work (without knowing where to find the headers, -stdlib=libc++ cannot work). For GCC, the default is to use -stdlib=libstdc++, and that is part of the compiler’s install so that it can be located without extra configuration, and it does not require the -stdlib option to work. > While in clang equivalent, -stdlib= doesn't require so. libc++ is the default for clang and is part of the standard compiler distribution (so it can be located without additional configuration). OTOH, I believe that you will find that to make -stdlib=libstdc++ work will generally require some cmake values to point to the GCC installation (On macOS/Darwin there is a default that points to the old apple-gcc-4.2.1 installation [for Darwin11-16], but that is not necessarily the GCC version you would be using there, either). In summary, since neither compiler “knows” where to find the other, some configuration is required in the general case to find the non-native C++ runtime. Iain
Re: [modules] Preprocessing requires compiled header unit modules
> On 22 Apr 2022, at 15:08, Boris Kolpackov wrote: > > Ben Boeckel writes: > >> On Thu, Apr 21, 2022 at 06:05:52 +0200, Boris Kolpackov wrote: >> >>> I don't think it is. A header unit (unlike a named module) may export >>> macros which could affect further dependencies. Consider: >>> >>> import "header-unit.hpp"; // May or may not export macro FOO. >>> >>> #ifdef FOO >>> import "header-unit2.hpp"; >>> #endif >> >> I agree that the header needs to be *found*, but scanning cannot require >> a pre-existing BMI for that header. > > Well, if scanning cannot require a pre-existing BMI but a pre-existing > BMI is required to get accurate dependency information, then something > has to give. > > You hint at a potential solution in your subsequent email: > >> Can't it just read the header as if it wasn't imported? AFAIU, that's >> what GCC did in Jan 2019. I understand that CPP state is probably not >> easy, but something to consider. > > The problem with this approach is that a header import and a header > include have subtly different semantics around macros. In particular, > the header import does not "see" macros defined by the importer while > the header include does. Here is an example: > > // file: header-unit.hpp > // > #ifdef BAR > #define FOO > #endif > > // file: importer.cpp > // > #define BAR > import "header-unit.hpp"; // Should not "see" BAR. > //#include "header-unit.hpp" // Should "see" BAR. > > #ifdef FOO > import "header-unit2.hpp"; > #endif > > In this example, if you treat import of header-unit.hpp as > include, you will get incorrect dependency information. > > So to make this work correctly we will need to re-create the > macro isolation semantics of import for include. > > Even if we manage to do this, there are some implications I > am not sure we will like: the isolated macros will contain > inclusion guards, which means we will keep re-scanning the > same files potentially many many time. Here is an example, > assume each header-unitN.hpp includes or imports <functional>: > > // file: importer.cpp > // > import <functional>; // Defined _GLIBCXX_FUNCTIONAL include > > import "header-unit1.hpp"; // Ignores _GLIBCXX_FUNCTIONAL > import "header-unit2.hpp"; // Ditto. > import "header-unit3.hpp"; // Ditto. > import "header-unit4.hpp"; // Ditto. The standard has the concept of an “importable header” which is implementation-defined. We could choose that only headers that are self-contained (i.e. unaffected by external defines) are “importable” (thus the remaining headers would not be eligible for include-translation). That would mean that we could rely on processing any import by processing the header it is created from? Perhaps that is too great a restriction and we need to be more clever…. @ben, in relation to an earlier question: https://eel.is/c++draft/cpp.import#note-4 says that predefined macro names are not introduced by #define and that the implementation is encouraged not to treat them as if they were. IIUC, that means that -D/U (and preamble ones) are not emitted into the macro stream - however it might well be the case that they *are* part of the module identifying hash (and preserved as part of the captured command line). Iain
Re: GCC 9.5 Release Candidate available
Hi > On 20 May 2022, at 09:02, Richard Biener via Gcc wrote: > The first release candidate for GCC 9.5 is available from > > https://sourceware.org/pub/gcc/snapshots/9.5.0-RC-20220520/ > > and shortly its mirrors. It has been generated from git commit > 1bc79c506205b6a5db82897340bdebaaf7ada934. > > I have so far bootstrapped and tested the release candidate > on x86_64-suse-linux. I have bootstrapped (using GCC5.4 on darwin9 and GCC7.5 elsewhere) r9-10192 on: i686-darwin9,17 powerpc-darwin9 x86_64-darwin10 to 21. As expected (since I was not able to find enough time to do the backports), although bootstrap succeeds on darwin21 (macOS 12) the resulting compiler is not really usable. I will have to provide a darwin branch with the necessary changes. One observation outside of this: Several of the testsuite runs hung with cc1 spinning in reload for pr88414. I was not really able to correlate exactly with CPU / tuning chosen. Pretty sure this is a regression - I do not recall the testsuite hanging (for anything other than D) for years. thanks Iain
Re: GCC 9.5 Release Candidate available
Hi Richard, > On 23 May 2022, at 07:27, Richard Biener wrote: > > On Sun, 22 May 2022, Iain Sandoe wrote: > >> Hi >> >>> On 20 May 2022, at 09:02, Richard Biener via Gcc wrote: >> >>> The first release candidate for GCC 9.5 is available from >>> >>> https://sourceware.org/pub/gcc/snapshots/9.5.0-RC-20220520/ >>> >>> and shortly its mirrors. It has been generated from git commit >>> 1bc79c506205b6a5db82897340bdebaaf7ada934. >>> >>> I have so far bootstrapped and tested the release candidate >>> on x86_64-suse-linux. >> >> I have bootstrapped (using GCC5.4 on darwin9 and GCC7.5 elsewhere) r9-10192 >> on: >> i686-darwin9,17 >> powerpc-darwin9 >> x86_64-darwin10 to 21. >> >> As, expected (since I was not able to find enough time to do the backports), >> although bootstrap succeeds on darwin21 (macOS 12) the resulting compiler >> is not really usable. I will have to provide a darwin branch with the >> necessary >> changes. >> >> One observation outside of this: >> >> Several of the testsuite runs hung with cc1 spinning in reload for pr88414. >> I was not really able to correlate exactly with CPU / tuning chosen. >> >> Pretty sure this is a regression - I do not recall the testsuite hanging >> (for anything >> other than D) for years. > > That's gcc.target/i386/pr88414.c? > > Can you be more specific on the target the issue occurs on (so one > can maybe try with a cross?). Bisecting would be most helpful of > course, if it's some of the recent backports reversion would be > most appropriate at this point. several cases, here’s the most modern with an m32 multilib: Configured with: /src-local/gcc-git-9/configure --prefix=/opt/iains/x86_64-apple-darwin17/gcc-9-wip --build=x86_64-apple-darwin17 --with-sysroot=/Library//Developer/CommandLineTools/SDKs/MacOSX.sdk --with-as=/XC/9.4/usr/bin/as --with-ld=/XC/9.4/usr/bin/ld --enable-languages=all CC=x86_64-apple-darwin17-gcc CXX=x86_64-apple-darwin17-g++ log info: [ note the m64 multilib, also emits the diagnostic, but it does not appear to spin. ] /scratch/10-13-his/gcc-9-wip/gcc/xgcc -B/scratch/10-13-his/gcc-9-wip/gcc/ /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c -m32 -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers -fdiagnostics-color=never -O1 -ftrapv -S -o pr88414.s /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c: In function 'foo': /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c:15:7: error: 'asm' operand has impossible constraints got a INT signal, interrupted by user // (manually killed it) Iain
Re: GCC 9.5 Release Candidate available
> On 23 May 2022, at 07:50, Iain Sandoe wrote: > > Hi Richard, > >> On 23 May 2022, at 07:27, Richard Biener wrote: >> >> On Sun, 22 May 2022, Iain Sandoe wrote: >> >>> Hi >>> >>>> On 20 May 2022, at 09:02, Richard Biener via Gcc wrote: >>> >>>> The first release candidate for GCC 9.5 is available from >>>> >>>> https://sourceware.org/pub/gcc/snapshots/9.5.0-RC-20220520/ >>>> >>>> and shortly its mirrors. It has been generated from git commit >>>> 1bc79c506205b6a5db82897340bdebaaf7ada934. >>>> >>>> I have so far bootstrapped and tested the release candidate >>>> on x86_64-suse-linux. >>> >>> I have bootstrapped (using GCC5.4 on darwin9 and GCC7.5 elsewhere) r9-10192 >>> on: >>> i686-darwin9,17 >>> powerpc-darwin9 >>> x86_64-darwin10 to 21. >>> >>> As, expected (since I was not able to find enough time to do the backports), >>> although bootstrap succeeds on darwin21 (macOS 12) the resulting compiler >>> is not really usable. I will have to provide a darwin branch with the >>> necessary >>> changes. >>> >>> One observation outside of this: >>> >>> Several of the testsuite runs hung with cc1 spinning in reload for pr88414. >>> I was not really able to correlate exactly with CPU / tuning chosen. >>> >>> Pretty sure this is a regression - I do not recall the testsuite hanging >>> (for anything >>> other than D) for years. >> >> That's gcc.target/i386/pr88414.c? >> >> Can you be more specific on the target the issue occurs on (so one >> can maybe try with a cross?). Bisecting would be most helpful of >> course, if it's some of the recent backports reversion would be >> most appropriate at this point. > > several cases, here’s the most modern with an m32 multilib: > > Configured with: /src-local/gcc-git-9/configure > --prefix=/opt/iains/x86_64-apple-darwin17/gcc-9-wip > --build=x86_64-apple-darwin17 > --with-sysroot=/Library//Developer/CommandLineTools/SDKs/MacOSX.sdk > --with-as=/XC/9.4/usr/bin/as --with-ld=/XC/9.4/usr/bin/ld > --enable-languages=all CC=x86_64-apple-darwin17-gcc > CXX=x86_64-apple-darwin17-g++ > > log info: > > [ note the m64 multilib, also emits the diagnostic, but it does not appear to > spin. ] > > /scratch/10-13-his/gcc-9-wip/gcc/xgcc -B/scratch/10-13-his/gcc-9-wip/gcc/ > /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c -m32 > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers > -fdiagnostics-color=never -O1 -ftrapv -S -o pr88414.s > /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c: In function > 'foo': > /src-local/gcc-git-9/gcc/testsuite/gcc.target/i386/pr88414.c:15:7: error: > 'asm' operand has impossible constraints > got a INT signal, interrupted by user // (manually killed it) Hmm, this is not a regression - (although the problem is real) - I also see it on 9.4 and 9.3 at least, apologies for the noise. Iain
specs question
Hi. My ‘downstream’ have a situation in which they make use of a directory outside of the configured GCC installation - and symlink from there to libraries in the actual install tree. e.g. /foo/bar/lib: libgfortran.dylib -> /gcc/install/path/lib/libgfortran.dylib Now I want to find a way for them to add an embedded runpath that references /foo/bar/lib. I could add a configure option that does exactly this job - but then I’d have to back port that to every GCC version they are still supporting (not, perhaps, the end of the world but much better avoided). So I was looking at using --with-specs= to add a link-time spec for this: --with-specs='%{!nodefaultrpaths:%{!r:%:version-compare(>= 10.5 mmacosx_version_min= -Wl,-rpath,/foo/bar/lib)}}}' Which works fine, except for PCH jobs, which it breaks because the presence of an option claimed by the linker causes a link job to be created, even though one is not required (similar issues have been seen before). There is this: %{,S:X} substitutes X, if processing a file which will use spec S. so I could then do: --with-specs='%{,???:%{!nodefaultrpaths:%{!r:%:version-compare(>= 10.5 mmacosx_version_min= -Wl,-rpath,/foo/bar/lib)' but, unfortunately, I cannot seem to figure out what ??? should be [I tried ‘l’ (link_spec) ‘link_command’ (*link_command)] …JFTR also tried %{!.h: %{!,c-header: —— any insight would be welcome, usually I muddle through with specs, but this one has me stumped. thanks Iain
Re: GCC 10.4 Release Candidate available from gcc.gnu.org
Hi Jakub, > On 21 Jun 2022, at 12:33, Jakub Jelinek via Gcc wrote: > > The first release candidate for GCC 10.4 is available from > > https://gcc.gnu.org/pub/gcc/snapshots/10.4.0-RC-20220621/ > ftp://gcc.gnu.org/pub/gcc/snapshots/10.4.0-RC-20220621/ > > and shortly its mirrors. It has been generated from git commit > r10-10862-g3c390f4ad27c3d79fd1817276a6d3217fd9bfb51. > > I have so far bootstrapped and tested the release candidate on > x86_64-linux. Please test it and report any issues to bugzilla. > > If all goes well, I'd like to release 10.4 on Tuesday, June 28th. > I bootstrapped and tested on i686, powerpc and x86_64 Darwin from darwin9 to darwin21 (archs as appropriate) - all supported languages - no new issues seen. I also checked that bootstrap worked with Apple gcc-4.2.1 on darwin9, 10 and with clang 3.4 on darwin12 Iain
jit and cross-compilers (use and configuration).
Hi Dave, folks, It seems to me that it is plausible that one could use the JIT in a heterogenous system, e.g. an x86_64-linux-host with some kind of co-processor which is supported as a GCC target (and therefore can be loaded with jit-d code) … but I’m not aware of anyone actually doing this? .. is that use case even reasonable given the current implementation? (I guess there are invocations of the assembler etc. .. I’m not sure if these would work as currently implemented) It’s mildly inconvenient that the build for cross compilers generally fails for me on Darwin (reason 1 below) since I tend to configure by default with —enable-languages=all (and most Darwin platform versions default to host_shared). So I’d like to see what the best way forward is ….. In the short-term there are some issues with the configuration for cross-compilers… 1) the values queried in gcc/jit/Make-lang.in relate to the ‘ld’ that is used for $target not the one used for $host. - this means that if we are on a $host with an non-binutils-ld and building a cross-compiler for a $target that *does* use binutils-ld, the configuration selects flags that do not work so that the build fails. - of course, things might fail more subtly in the case that there were two *different* binutils ld instances. 2) the testsuite obviously does not work. So .. one possibility is to disable jit for cross-compilers, (patch attached) .. … another is to find a way to fix the configuration to pick up suitable values for $host (although I’m not sure how much of that we have readily available, since usually libtool is doing that work). thoughts? cheers Iain 0001-configure-Disable-jit-for-cross-compilers.patch Description: Binary data
[ping] Re: jit and cross-compilers (use and configuration).
Hi Dave, Note: this does cause a build break for cross compilers with —enable-languages=all (if the linkers for host and target have different command line options used in the build) (it is not a serious break, one can exclude jit by manually listing all the other languages) - nevertheless, it would be good to establish if there is a meaningful use-case for libgccjit in a cross-compiler, and if so fix the configuration - or (if no meaningful use-case) exclude it as per the patch. thanks Iain > On 26 Jun 2022, at 14:06, Iain Sandoe wrote: > > Hi Dave, folks, > > It seems to me that it is plausible that one could use the JIT in a > heterogenous system, e.g. an x86_64-linux-host with some kind of co-processor > which is supported as a GCC target (and therefore can be loaded with jit-d > code) … but I’m not aware of anyone actually doing this? > > .. is that use case even reasonable given the current implementation? > (I guess there are invocations of the assembler etc. .. I’m not sure if these > would work as currently implemented) > > > > It’s mildly inconvenient that the build for cross compilers generally fails > for me on Darwin (reason 1 below) since I tend to configure by default with > —enable-languages=all (and most Darwin platform versions default to > host_shared). So I’d like to see what the best way forward is ….. > > > > In the short-term there are some issues with the configuration for > cross-compilers… > > 1) the values queried in gcc/jit/Make-lang.in relate to the ‘ld’ that is used > for $target not the one used for $host. > > - this means that if we are on a $host with an non-binutils-ld and building a > cross-compiler for a $target that *does* use binutils-ld, the configuration > selects flags that do not work so that the build fails. > - of course, things might fail more subtly in the case that there were two > *different* binutils ld instances. > > 2) the testsuite obviously does not work. > > So .. one possibility is to disable jit for cross-compilers, (patch attached) > .. > > … another is to find a way to fix the configuration to pick up suitable > values for $host (although I’m not sure how much of that we have readily > available, since usually libtool is doing that work). > > thoughts? > cheers > Iain > > <0001-configure-Disable-jit-for-cross-compilers.patch>
An odd case with structure field alignment.
Hi, I am clearly missing something here … can someone point out where it is? https://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Variable-Attributes.html#Variable%20Attributes in the discussion of applying this to structure fields: "The aligned attribute can only increase the alignment; but you can decrease it by specifying packed as well." Consider: struct odd { int * __attribute__((aligned(2))) a; char c; }; I would expect, given reading of the information on the aligned attribute, that the under-alignment of a would be ignored (since there is no packed attribute on either the field or the struct). However, on x86_64, powerpc64 linux and x86_64, powerpc Darwin, I see that the size of the struct is sizeof(pointer) + 2 and the alignment is 2. OTOH: struct OK { int __attribute__((aligned(2))) a; char c; }; behaves as expected (the under-alignment is ignored, silently). as does this… struct maybe { int *a __attribute__((aligned(2))); char c; }; * the type of the pointer does not seem to be relevant (i.e. AFAICT the behaviour is the same for char * etc.) Is there some special rule about pointers that I have not found ? [it’s making an ABI mismatch with clang, which treats the int * as expected from the documentation quoted above] cheers Iain
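(For reference, the kind of one-file check that can be used to compare the two compilers; this is an illustration put together from the structs quoted above, and the numbers it prints are the target-dependent ones being discussed.)

  #include <stdio.h>

  struct odd   { int * __attribute__((aligned(2))) a; char c; };
  struct OK    { int   __attribute__((aligned(2))) a; char c; };
  struct maybe { int *a __attribute__((aligned(2))); char c; };

  int main (void)
  {
    /* Compare what each compiler does with the attribute in the three
       positions discussed above.  */
    printf ("odd  : size %zu, align %zu\n",
            sizeof (struct odd),   _Alignof (struct odd));
    printf ("OK   : size %zu, align %zu\n",
            sizeof (struct OK),    _Alignof (struct OK));
    printf ("maybe: size %zu, align %zu\n",
            sizeof (struct maybe), _Alignof (struct maybe));
    return 0;
  }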
Re: An odd case with structure field alignment.
> On 5 Sep 2022, at 09:53, Richard Biener via Gcc wrote: > > On Sun, Sep 4, 2022 at 3:33 PM Iain Sandoe wrote: >> >> Hi, >> >> I am clearly missing something here … can someone point out where it is? >> >> https://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Variable-Attributes.html#Variable%20Attributes >> in the discussion of applying this to structure fields: >> >> "The aligned attribute can only increase the alignment; but you can decrease >> it by specifying packed as well." >> >> Consider: >> >> struct odd { >> int * __attribute__((aligned(2))) a; > > I think this applies the attribute to the type. That was what I wondered - but it does not seem to apply the under-alignment to a non-pointer type ... > For non-aggregate > types 'packed' is ignored. So the above > is equivalent to > > typedef int *A __attribute__((aligned(2))); > > struct odd { > A a; > char c; > }; Which (for the record) works as expected on both compilers. > >> char c; >> }; >> >> I would expect, given reading of the information on the aligned attribute, >> that the under-alignment of a would be ignored (since there is no packed >> attribute on either the field or the struct). >> >> However, on x86_64, powerpc64 linux and x86_64, powerpc Darwin, I see that >> the size of the struct is sizeof(pointer) + 2 and the alignment is 2. >> >> OTOH: >> >> struct OK { >> int __attribute__((aligned(2))) a; >> char c; >> }; However, this does _not_ treat the same sequence as “typedef int A __attribute__((aligned(2)))” >> behaves as expected (the under-alignment is ignored, silently). >> >> as does this… >> >> struct maybe { >> int *a __attribute__((aligned(2))); >> char c; >> }; > > Where for both of these cases the attribute applies to the FIELD_DECL. > The documentation refers to > alignment of fields, not the alignment of types. sure, but I can’t at the moment see a consistent rule to file a bug about. > At least that's my understanding of this issue. > > IIRC clang has issues when matching GCC attribute parsing rules, esp. > when applied to pointer types. probably; when I looked at the decls produced there seemed to be no way to tell the position of the attribute in the decl (so to decide if it’s a type attr or a field attr). … possibly that means poking at the parser too… attributes in aggregates are fun, for sure .. Iain > > Richard. > >> * the type of the pointer does not seem to be relevant (i.e. AFAICT the >> behaviour is the same for char * etc.) >> >> Is there some special rule about pointers that I have not found ? >> >> [it’s making an ABI mismatch with clang, which treats the int * as expected >> from the documentation quoted above] >> >> cheers >> Iain
Re: Please, really, make `-masm=intel` the default for x86
> On 25 Nov 2022, at 09:11, LIU Hao via Gcc wrote: > > On 2022/11/25 16:50, Marc Glisse wrote: >> On Fri, 25 Nov 2022, LIU Hao via Gcc wrote: >>> I am a Windows developer and I have been writing x86 and amd64 assembly for >>> more than ten years. One annoying thing about GCC is that, for x86, if I >>> need to write a piece of inline assembly then I have to do it twice: once in >>> AT&T syntax and once in Intel syntax. >> The doc for -masm=dialect says: >> Darwin does not support ‘intel’. >> Assuming that's still true, and even with Mac Intel going away, it doesn't >> help. > > Did you mean 'Darwin' instead of 'macOS'? > > The first-class C and C++ compiler for macOS is Clang anyway; even the thing named 'gcc' is effectively Clang. Darwin, OS X, PureDarwin and macOS (Intel) all default to AT&T (as does clang on the platform - I am not even sure that Intel syntax is supported by clang for macOS/Darwin either) .. so we can be 100% sure that any Intel syntax support is (at least almost) completely untested. Note that GCC's gfortran (and Ada) remain the tools of choice on macOS (and they also need the assembler). It would be pretty difficult to change the default; if GCC changed, I’d only end up patching the Darwin port to default to AT&T .. perhaps a solution for the Windows port is to patch it locally to default to Intel? Iain > > > -- > Best regards, > LIU Hao >
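An aside (my sketch, not from the thread): GCC's documented "asm dialect alternatives" allow one template to carry both forms, with the AT&T spelling before the '|' and the Intel spelling after it, so the same source builds under either -masm= setting; it does not remove the duplication, but it keeps both forms in one place:

/* Increment x; the braces select the AT&T or Intel operand form
   depending on the -masm= dialect in effect.  */
static inline unsigned int add_one (unsigned int x)
{
  __asm__ ("add{l $1, %0| %0, 1}" : "+r" (x));
  return x;
}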
Re: Configuring GCC 10.3 on PPC Mac OS X 10.4.11/Tiger for build reveals problems when removing relics
Hi Pete, > On 25 Nov 2022, at 10:36, Peter Dyballa via Gcc wrote: > On Mac OS X/macOS configure scripts leave conftest.dSYM subdirectories > behind, created by dsymutil: > > checking for build system preprocessor... rm: conftest.dSYM: is a > directory > checking for build system executable suffix... rm: conftest.dSYM: is a > directory > checking whether build system compiler is ANSI... rm: conftest.dSYM: is > a directory > checking for build system compiler math library... rm: conftest.dSYM: > is a directory > > Building GCC 10.3 with MacPorts the configure scripts produce 178 such > reports (and more than 11,000 checking lines without complaint). (The > relation is worse when building smaller software packages.) I agree it’s an irritation (although not a show-stopper, so other things are higher priority on my list right now). > Is it possible to replace the simple "rm" with "rm -r", at least on darwin, > the macOS/Mac OS X? Or create a special macro to be used when dsymutil gets > involved? It is likely to be possible where the configure tests can be modified in the GCC sources. The best course of action is to take them one by one, see where the configure source comes from, and modify the rm at that source (hopefully conditionally on *-*-darwin*) to deal with this. If the source of the problem is primarily libtool.m4, then we do make local modifications to it; but now that libtool is maintained again, we should look into how much we can sync with upstream. I’m happy to review patches (if they are macOS/darwin-specific, then I can even approve them). Iain > > -- > > Greetings > > Pete > > Every instructor assumes that you have nothing else to do except study for > that instructor's course. > – Fourth Law of Applied Terror >
Re: Can't build Ada
Hi Paul, > On 25 Nov 2022, at 20:08, Paul Koning via Gcc wrote: > >> On Nov 25, 2022, at 3:03 PM, Andrew Pinski wrote: >> >> On Fri, Nov 25, 2022 at 11:59 AM Paul Koning via Gcc wrote: >>> >>> I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be >>> precise) to build Ada, starting with the latest (2020) release of Gnat from >>> Adacore. >> >> Are you building a cross compiler or a native compiler? >> If you are building a cross, you need to bootstrap a native compiler first. > > I'm not sure. The installed Gnat is x86_64-darwin; I want to build > aarch64-darwin. You are building a cross, then. > But in any case, how does that relate to the error messages I got? They > don't seem to have anything to do with missing compilers, but rather with the > use of language features too new for the available (downloadable) Gnat. Building a cross GNAT requires that the build compiler be built from the same sources as the cross - so, as Andrew says, you need to bootstrap the current sources on x86_64 and then use that compiler to build the cross to aarch64. I’m not sure exactly where this constraint is documented .. but, nevertheless, it is a constraint. FWIW: I have not done this for a few weeks (using my arm64 prototype branch) but it was working fine then. Iain
Re: Can't build Ada
Hi Paul, > On 25 Nov 2022, at 20:13, Andrew Pinski via Gcc wrote: > > On Fri, Nov 25, 2022 at 12:08 PM Paul Koning wrote: >> >>> On Nov 25, 2022, at 3:03 PM, Andrew Pinski wrote: >>> >>> On Fri, Nov 25, 2022 at 11:59 AM Paul Koning via Gcc >>> wrote: I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be precise) to build Ada, starting with the latest (2020) release of Gnat from Adacore. >>> >>> Are you building a cross compiler or a native compiler? >>> If you are building a cross, you need to bootstrap a native compiler first. >> >> I'm not sure. The installed Gnat is x86_64-darwin; I want to build >> aarch64-darwin. > > You have to build a x86_64-darwin compiler first with the same sources > as you are building for aarch64-darwin. So .. 1/ If you are on arm64 Darwin, the first step is to bootstrap the compiler using Rosetta 2 and the available x86_64 gnat. 2/ If you are on x86_64 Darwin, the first step is to bootstrap the compiler using the available x86_64 gnat. Then you can build a cross to aarch64 using that just-built compiler, and then a native cross (target==host!=build) using that, which will give you a usable native compiler for arm64 .. (2 is what I was doing all the way through the development - until I recently got an arm64 machine).. I know that the Rosetta 2 bootstrap worked a few days ago … BTW: the final step, the “native cross”, can be a bit tricky in terms of the configure line - since some configure steps cannot (in general) run the tools on the “foreign” host - so you might need to specify the linker version (we don’t yet have a --with-ld64=NN.MM option, but there is code that cares about the version of ld64). >> But in any case, how does that relate to the error messages I got? They >> don't seem to have anything to do with missing compilers, but rather with >> the use of language features too new for the available (downloadable) Gnat. > > From https://gcc.gnu.org/install/prerequisites.html: > "In order to build a cross compiler, it is strongly recommended to > install the new compiler as native first, and then use it to build the > cross compiler. Other native compiler versions may work but this is > not guaranteed and *will typically fail with hard to understand > compilation errors during the build." > > I added the emphasis but yes this is all documented correctly. Thanks for the reminder! cheers Iain
Re: Can't build Ada
Hi Paul, > On 26 Nov 2022, at 15:48, Paul Koning via Gcc wrote: >> On Nov 25, 2022, at 3:46 PM, Iain Sandoe wrote: >> >>> On 25 Nov 2022, at 20:13, Andrew Pinski via Gcc wrote: >>> >>> On Fri, Nov 25, 2022 at 12:08 PM Paul Koning wrote: >>>> >>>>> On Nov 25, 2022, at 3:03 PM, Andrew Pinski wrote: >>>>> >>>>> On Fri, Nov 25, 2022 at 11:59 AM Paul Koning via Gcc >>>>> wrote: >>>>>> >>>>>> I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be >>>>>> precise) to build Ada, starting with the latest (2020) release of Gnat >>>>>> from Adacore. >>>>> >>>>> Are you building a cross compiler or a native compiler? >>>>> If you are building a cross, you need to bootstrap a native compiler >>>>> first. >>>> >>>> I'm not sure. The installed Gnat is x86_64-darwin; I want to build >>>> aarch64-darwin. >>> >>> You have to build a x86_64-darwin compiler first with the same sources >>> as you are building for aarch64-darwin. >> >> So .. >> 1/ if you are on arm64 Darwin, >> - the first step is to bootstrap the compiler using Rosetta 2 and the >> available x86_64 gnat. >> >> 2/ if you are on x86_64 Darwin… >> - the first step is to bootstrap the compiler using the available x86-64 >> gnat. > > Thanks all. > > I tried that (#1) and got the same failure. The trouble seems to be that the > current sources have Ada2020 constructs in them and the available Gnat > doesn't support that version. The commit that introduces these (or some of > them at least) is 91d68769419b from Feb 4, 2022. I am part way through the exercise on both macOS 11 (X86) and 12 (Arm64). Note, however, that I am using gcc-7.5 as the bootstrap compiler, not gcc-5.1. You might find problems unless you actually start a Rosetta 2 shell - so “arch -x86_64 bash” - and then go from there (this seems to ensure that sub-processes are started as x86_64). (With this, bootstrap succeeded for both the x86_64 Rosetta 2 build and the rebased Arm64 branch native build - r13-4309-g309e2d95e3b9.) I will push the rebased arm64 branch when testing is done. > So I'm guessing I'll have to do this in two parts, first build a newer but > not-latest Gnat from a release that doesn't include the problematic > constructs, then follow that by using the intermediate to build the current > sources. > > I wonder if this incompatibility was intentional. If not it would be good > for the Ada maintainers to fix these and ensure that the current code can > still be built with the most recent public release of Gnat. Conversely, if > it is intentional, the documentation should be updated to explain how to > build the current code. The current statement (https://gcc.gnu.org/install/prerequisites.html) is: "GNAT: In order to build GNAT, the Ada compiler, you need a working GNAT compiler (GCC version 5.1 or later)." So, if 5.1 is not working, then perhaps a PR is in order. cheers Iain
Re: Can't build Ada
> On 26 Nov 2022, at 16:42, Arnaud Charlet wrote: > > >>> The current statement (https://gcc.gnu.org/install/prerequisites.html) is: >>> >>> GNAT >>> In order to build GNAT, the Ada compiler, you need a working GNAT compiler >>> (GCC version 5.1 or later). >>> >>> so, if 5.1 is not working, then perhaps a PR is in order. >> >> I will do that, if the "shell in Rosetta" thing doesn't cure the problem. > > You won’t need to, the version of gnat you are using is recent enough, you > need to follow Ian’s instructions to the letter. The Ada 2022 code is a red > herring and is only problematic when you build a cross with a non matching > native, not when building a native compiler. One additional question/point - which branch are you trying to build the cross from? I am sure it will not work from upstream master. Unfortunately, owing to lack of free time, aarch64-darwin is not yet completely ready to upstream, so folks are using the development branch here: https://github.com/iains/gcc-darwin-arm64 (which I will update later, based on the master version mentioned earlier, if testing goes OK). Iain.