On Mar 20, 2007, at 8:13 PM, Simon Brenner wrote:
Wow, lots of comments there, Mike ;-)
I could say a lot more... I thought I'd let you drag any other details you wanted out of me. :-)
My idea was to initially just check for any changes that aren't obviously safe, and later in the project try to determine the most important kinds of changes to handle intelligently. That is, a change would be considered dangerous until proven safe. But I guess I might find that almost nothing at all can be done without that fine-grained checking.
We went the other way: assumed safe, then tested out things we knew were dangerous and put in code to handle them, then built projects and found things that needed more work.
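The two opposite policies can be sketched as predicates over a change's kind. This is purely illustrative; the kind names and sets below are made up and are not from the compile server code, which classified changes at a much finer grain.

```python
# Hypothetical change kinds. A real compiler would derive these from the
# edited declarations, not from simple labels.
SAFE_KINDS = {"comment", "whitespace"}                              # whitelisted
DANGEROUS_KINDS = {"macro_redefinition", "inline_function_change"}  # blacklisted

def must_recompile_conservative(change_kind):
    """Simon's policy: a change is dangerous until proven safe."""
    return change_kind not in SAFE_KINDS

def must_recompile_optimistic(change_kind):
    """The compile server's policy: a change is safe until proven dangerous."""
    return change_kind in DANGEROUS_KINDS
```

Note the difference on an unclassified kind: the conservative policy recompiles, the optimistic one does not, which is why the optimistic approach needs testing to flush out the dangerous cases.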
Would it be possible to calculate the dependencies from the tree of a function?
Yes, if you annotate the tree with all the information necessary to do this.
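A minimal sketch of what that annotation buys you: walk the function's tree and record every declaration the body references, so a later change to any of those declarations invalidates the cached function. The `Node` class and its field names are invented for illustration; gcc's actual trees are far richer.

```python
class Node:
    """A toy tree node; real gcc trees carry much more annotation."""
    def __init__(self, kind, name=None, children=()):
        self.kind = kind          # e.g. "call", "var_ref", "constant"
        self.name = name          # referenced declaration, if any
        self.children = list(children)

def dependencies(tree):
    """Collect the names of all declarations referenced under `tree`."""
    deps = set()
    stack = [tree]
    while stack:
        node = stack.pop()
        if node.kind in ("call", "var_ref", "type_ref") and node.name:
            deps.add(node.name)
        stack.extend(node.children)
    return deps
```

For a body like `printf(x)`, represented as `Node("call", "printf", [Node("var_ref", "x")])`, this yields the dependency set `{"printf", "x"}`.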
In the worst case, you'd just keep on going naively, crash, and let the build script retry a non-incremental compile.
We did this as well, restart the server on crash and then retry.
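The fallback both sides describe can be sketched as a small wrapper: try the incremental path first, and on any failure retry with an ordinary one-shot compile. The `gcc-server` command and its flag are hypothetical; the `run` parameter exists so the logic can be exercised without real binaries.

```python
import subprocess

def compile_with_fallback(source, run=subprocess.call):
    # Try the (hypothetical) compile server first.
    status = run(["gcc-server", "--incremental", source])
    if status == 0:
        return status
    # Server crashed or otherwise failed: retry non-incrementally.
    return run(["gcc", "-c", source])
```

In a real build script the second attempt might also restart the server so subsequent compiles can go back to the fast path.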
Which of these are only of importance when doing code generation?
Of that list: RTX constant pools, debugging information, and exception tables, I'd expect.
How good did you manage to get the compile server in terms of fidelity?
One medium C++ app worked for one port, though that might have been around 2% of all the features. For example, machine ports have to be able to replay state changes; I did some of those for one OS, one target. gcc has many such configurations; if there are 50, then with that one port I hardly did 1/50 of all the work.
I'm happy to forego the extra gains
That would avoid one problem we faced.
Trying to do all these things at once is probably a contributing factor to the compile server not being in the mainline.
Agreed.
Anyways, how far did the compile server go?
One medium C++ project (Finder_FE) on one platform.
What was left to do when development stopped?
I think it boils down to just more language fidelity. Other things could be done, like distributing to other hosts, ensuring we don't run out of RAM or VM, and replay support for all the rest of the ports, but I don't think these would have needed to be done before the first checkin to mainline with the feature off by default.
And why did development stop?
I'd guess that if you asked three people, you'd get three answers. :-) I think it boils down to death by branch. Just too much work left to do, no way to deliver a release-quality compiler without spending the time doing all the rest of the work. If we had chosen the design point of keeping perfect language fidelity at all times, and could have checked in to mainline as the work was developed without having to spend three weeks for each 5-line change, we might have been able to eke out a compiler that was significantly faster by now.
We're still interested in advanced technologies to get a compiler that is 2-4x faster. I'd like to see buy-in for how we get that and what direction we should all move in, set and agreed upon for the project. People can work toward it as they can, and hopefully in the end we'd get where we want to go.