On 13.01.2018 00:07, Joseph Myers wrote:
> On Fri, 12 Jan 2018, Jeff Law wrote:
>
>> I was going to suggest deprecation for gcc-8 given how badly it was
>> broken in gcc-7 and the lack of maintenance on the target.
>
> While we're considering deprecations, what happened to the idea of setting
> a timescale by which cc0 targets need to be converted away from cc0 or be
> removed?

I still don't quite get why cc0 is considered to be so bad. Just because the big targets don't use it doesn't make it useless.

Yes, I know that CCmode can represent the condition code. But the mere fact that it can doesn't make CCmode superior, or cc0 inferior or bad. Having different representations for the same thing also has obvious upsides (think of the different representations used in maths or physics), and in the present case one has the choice between an explicit RTL representation and an implicit (w.r.t. RTL) one.
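Just to make the two representations concrete, here is a minimal machine-description sketch; the operand constraints, the "cp" mnemonic and the REG_CC register name are purely illustrative and not taken from any existing port:

;; cc0 style: the comparison sets the implicit (cc0); the conditional
;; branch pattern then matches [(cc0) (const_int 0)] and has to follow
;; the compare directly.
(define_insn "*cmpqi_cc0"
  [(set (cc0)
        (compare (match_operand:QI 0 "register_operand" "r")
                 (match_operand:QI 1 "register_operand" "r")))]
  ""
  "cp %0,%1")

;; CCmode style: the condition code lives in an explicit hard register
;; (REG_CC is a made-up constant here), so compare and branch are linked
;; by an ordinary def/use of reg:CC and could in principle be scheduled
;; apart -- which buys nothing on a target without scheduling.
(define_insn "*cmpqi_ccmode"
  [(set (reg:CC REG_CC)
        (compare:CC (match_operand:QI 0 "register_operand" "r")
                    (match_operand:QI 1 "register_operand" "r")))]
  ""
  "cp %0,%1")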

The target I am familiar with is avr, and for that particular target, I cannot see a single benefit from moving away from cc0:

- cc0 does a good job and has always done a good job in the past. In the years I have contributed to avr, there hasn't been a single cc0 flaw (the few minor cc0-related issues were all avr back-end issues). From my experience, if some middle-end bits are flawed for a what-the-dickens-is-xxx target, then that target is left behind and has to hack the back end to work around the middle-end flaw (as with reload or all the other FIXMEs you'll find). I wouldn't expect anyone to fix CCmode shortcomings if some of these targets had trouble with it. And IIUC one of the posts above, m32c is being considered for removal because of reload bugs.

- No improvements to the generated code are expected: avr won't benefit from separating comparisons from branches because there is no scheduling.

- Likewise for insn splitting: if the avr port could benefit more from splitting, then such splitting would have to be performed before register allocation -- which is not supported, as LRA is restricted to cases that don't clobber CC (which is not possible for avr), and reload will also go away soon IIUC. Hence any CCmode implementation for avr will only kick in after RA, even if reload is still in use.

- Currently the best instruction sequence is chosen while the asm is printed out. Often there is a multitude of different sequences that could perform the same task, and the best one (usually w.r.t. length) is picked ad hoc. The cc0 setting and the insn length computation follow that choice, i.e. there are implicit assertions tying together the printed code, the insn lengths and the cc0 effect. All of this lives in a single place, cf. avr_out_plus and all its callees like avr_out_plus_1. Usually it is not possible to describe the conditions for specific sequences at the RTL or constraint level (one would have to filter on the cartesian product over all constraints, which is not possible because it is not allowed in insn conditions).
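To illustrate what "in a single place" means, here is a rough sketch (not the real avr pattern; out_add_best is a hypothetical output function, and the "cc" attribute values are only examples of how a cc0 port records the condition-code effect per alternative):

;; Sketch only: the output function picks the shortest sequence for the
;; actual operands at asm-output time.  The insn length is obtained by
;; running the same function in a "counting" mode, and the "cc"
;; attribute records the flags effect of whatever it emits, so printed
;; code, length and cc0 effect are all tied to this one define_insn.
(define_insn "*addhi3_sketch"
  [(set (match_operand:HI 0 "register_operand" "=r,r")
        (plus:HI (match_operand:HI 1 "register_operand" "0,0")
                 (match_operand:HI 2 "nonmemory_operand" "r,i")))]
  ""
  {
    return out_add_best (insn, operands);
  }
  [(set_attr "cc" "set_czn,clobber")])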

Switching to CCmode would basically mean throwing the avr back end into the dust bin and starting from scratch, at least for considerable parts of the .c and .md files. Even if switching to a clobbers-all solution is feasible, it will result in a performance drop, and fixing that will likely be no easy task (if it is possible with reasonable effort at all) and will greatly destabilize the back end.
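For reference, "clobbers-all" means giving essentially every flag-changing pattern an explicit clobber of the CC register, roughly like this (again only a sketch with made-up names):

;; Every insn that changes the flags carries a parallel clobber of the
;; CC register, so nothing can be kept in the flags across it.
(define_insn "*addqi3_clobber_cc"
  [(set (match_operand:QI 0 "register_operand" "=r")
        (plus:QI (match_operand:QI 1 "register_operand" "0")
                 (match_operand:QI 2 "register_operand" "r")))
   (clobber (reg:CC REG_CC))]
  ""
  "add %0,%2")

The performance drop then comes from the compare insns that the cc0 machinery currently removes in final but that would survive such a conversion.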

Actually, LLVM has an evolving avr back end. It's far from mature and is less stable and less optimizing than GCC's. But instead of putting work into a dead horse (at least w.r.t. avr) like GCC, with its strong tendencies towards self-destruction (which removing cc0 is, IMO), it seems much more reasonable to switch to an infrastructure that is friendlier to such architectures and not in decline.

In my opinion, one of the great upsides of GCC is that it supports so many targets, with many deeply embedded, mature targets amongst them. With that background, it may very well make sense to have considerably different approaches to the same problem. And most of the cc0 code is in final and in a few parts that keep comparisons and branches together, so kicking out cc0 doesn't seem to bring a maintenance boost, either...

Johann
