On 21 Dec 2012, at 12:17, David Kastrup <d...@gnu.org> wrote:

> "m...@mikesolomon.org" <m...@mikesolomon.org> writes:
>
>> I agree that moment addition and subtraction is a royal pain. If I
>> tackled this, which I might, the solution would be more general in the
>> form of what I call "stream filters."
>
> I'd strongly recommend against that: the only reliable way to establish
> timing is actual iteration, and that's costly (well, one can abort it
> once time 0 is over, but it still needs to get set up). Short of that,
> we'll just move the problem into more obscurity and get new surprises
> when the pseudo-timekeeping gets things wrong.
>
> Now we actually have this sort of "squeeze grace material in before the
> time starts properly" mechanism already, with grace-fixup chains and
> stuff. Prepending _another_ such mechanism because the existing
> mechanism fails to work reliably seems like a step towards making it
> much harder to actually fix things.
I agree that it'll take more time, but I don't think there's anything pseudo about the timekeeping. I've used filters like this in many, many projects, and the first pass attaches a property called timing-info to every event, identifying the time point at which it falls in the piece. Subsequent iterations then don't have to figure out timing at all. This is also the type of info that'd be necessary for a MusicXML export.

My goal is not to prepend an extra mechanism because the current one fails. My goal is to come up with a systematic way to automate choices before information gets passed to the engravers. The most problematic engravers, in my opinion, are currently the ones that try to do this type of automation task: Auto_beam, Completion_note, and Completion_rest. Because they cannot peek ahead in time and go backwards, they either make poor engraving choices or issue events later in time, which causes problems for the identification of cross-staff grobs.

The solution is to come up with a stable Scheme object - MusicFilter (or whatever) - and chain these things together, passing the input stream through them before the engraver stage. I don't think coming up with a general and robust implementation of music stream filters would make it harder to fix things, nor would it move the problem into more obscurity. On the contrary, I think it'd make things easier and more open. For example, if these Filter objects could issue warnings and point to the lines in the code where problems cropped up, that'd make it easier to find and fix problems.
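To make the idea concrete, here's a minimal sketch of the chaining I have in mind - in Python rather than LilyPond's actual Scheme internals, and with hypothetical names (NoteEvent, attach_timing, split_at_barlines) invented purely for illustration. The point is only the shape: a first filter annotates every event with its time point, and later filters can then make decisions (here, Completion_note-style barline splitting) without ever having to iterate backwards:

```python
from dataclasses import dataclass, replace
from fractions import Fraction
from typing import Optional

@dataclass(frozen=True)
class NoteEvent:
    pitch: str
    duration: Fraction                 # duration in whole notes
    start: Optional[Fraction] = None   # filled in by the timing filter

def attach_timing(events):
    """First filter pass: tag every event with the time point at
    which it falls in the piece (the 'timing-info' idea)."""
    now = Fraction(0)
    for ev in events:
        yield replace(ev, start=now)
        now += ev.duration

def split_at_barlines(events, measure=Fraction(1)):
    """A later filter relies on timing-info already being present:
    split any note that crosses a barline, Completion_note-style."""
    for ev in events:
        start, dur = ev.start, ev.duration
        while dur > 0:
            room = measure - (start % measure)  # time left in this bar
            piece = min(dur, room)
            yield replace(ev, start=start, duration=piece)
            start += piece
            dur -= piece

# Chain the filters: timing first, then decisions that depend on it.
stream = [NoteEvent("c", Fraction(3, 4)), NoteEvent("d", Fraction(1, 2))]
out = list(split_at_barlines(attach_timing(stream)))
# The d crosses the barline at time 1 and comes out as two pieces.
```

Because the timing is computed once, up front, the splitting filter never needs to look ahead or go backwards in the stream - which is exactly the property the current engravers lack.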
I'll conclude with a variety of tasks that could be done at this stage:

--) Auto beaming
--) Making decisions about splitting notes and rests at barlines
--) Style sheets for how notes should be dotted or tied over beats
--) Typesetting only parts of the music
--) Moving keys, time signatures and the like before or after grace moments

In general, anything that requires automation should be done before the engraver stage. Coming up with a stable and reliable way to do this is, in my opinion, a step forwards for LilyPond.

Cheers,
MS