David Kastrup <d...@gnu.org> writes:

> Han-Wen Nienhuys <hanw...@gmail.com> writes:
>
>> I tried what happens if one concats all texi/tely files together, and
>> runs lp-book on them.
>>
>> The result is 25 minutes of purely CPU-bound grinding (this is with
>> Guile 2.2).  Then building the remaining docs takes about 15 minutes.
>> In this last phase, there is some inefficiency: we process the
>> documents per language directory, but for each directory there is a
>> bunch of small files, and a humongous notation manual.  We can't move
>> on to the next directory until the notation manual PDF file finishes.
>
> Where would the point be in moving on to the next directory when
> CPU_COUNT processors are already working on the notation manual?
>
> The startup time of .scm files is not a lot of trouble with respect to
> the user time spent, since LilyPond forks into separate processes
> _after_ having already started up.  While it starts up under the
> control of LilyPond-book, it does so with a single CPU: I'll admit
> that.  But if we are going to end up paying for CPU time, I prefer
> that we don't have independent processes all starting up on their own
> CPU.  That would take about the same real time but considerably more
> user time.
>
>> In order to fix this, we would have to reorganize the build system so
>> it builds everything out of one directory.
>
> Or basically have a jobserver structure for LilyPond-book.  We don't
> need to flatten the directory structure, just the organisation of the
> workload.
>
> That will buy us smaller real time.  It will not buy us significantly
> smaller billable user time.
>
> So its main payoff would be if we keep doing the bulk of our
> test/integration work on private computers rather than on virtualised
> CPUs.
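A minimal sketch of what "flattening the organisation of the workload" could look like: instead of one lilypond-book run per language directory, put every document from every directory into a single shared pool, scheduling the expensive ones first so small files from other directories fill the idle CPUs while a notation manual is still grinding.  This is an illustration only; `DOCS`, `build_doc` and the cost numbers are hypothetical, not the actual build-system names.

```python
from multiprocessing import Pool
import os

# Hypothetical (language, document, relative cost) jobs from all
# language directories, merged into one list.
DOCS = [
    ("en", "notation.tely", 900),
    ("de", "notation.tely", 900),
    ("en", "usage.tely", 50),
    ("de", "usage.tely", 50),
]

def build_doc(job):
    lang, name, cost = job
    # Placeholder for invoking lilypond-book/LilyPond on one document.
    return (lang, name)

def build_all(jobs):
    # Schedule long-running documents first; the pool then keeps all
    # CPUs busy with the small files instead of waiting per directory.
    jobs = sorted(jobs, key=lambda j: -j[2])
    with Pool(os.cpu_count()) as pool:
        return pool.map(build_doc, jobs)
```

As the message notes, this mainly shrinks real time, not billable user time: the same CPU work is done, just with less idle waiting at directory boundaries.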
Actually, keeping a pre-initialized LilyPond process around which forks
off worker processes on demand that can immediately do actual work
would likely make Johannes Feulner (Scorio) happy.  That would be more
a LilyPond job server than a LilyPond-book job server, but it's not
like you couldn't funnel one through the other.

-- 
David Kastrup
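The pre-forking idea above can be sketched roughly as follows, assuming a POSIX `fork()`: the parent pays the .scm startup cost once, then forks an already-initialized child per incoming job.  `expensive_startup` and `run_job` are hypothetical stand-ins, not real LilyPond entry points.

```python
import os

def expensive_startup():
    # Stand-in for loading and compiling all the .scm init files once.
    return {"initialized": True}

def run_job(state, job):
    # Stand-in for typesetting one document using the warm state.
    return f"done: {job}"

def serve(jobs):
    state = expensive_startup()   # paid once, before any fork
    pids = []
    for job in jobs:
        pid = os.fork()
        if pid == 0:
            # The child inherits the warm interpreter state and can
            # begin real work immediately, with no per-job startup.
            run_job(state, job)
            os._exit(0)
        pids.append(pid)
    for pid in pids:
        os.waitpid(pid, 0)
    return len(pids)
```

A LilyPond-book job server could then funnel its document queue through such a fork server instead of spawning cold LilyPond processes.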