On 8/24/11 6:00 PM, "Kieren MacMillan" <kieren_macmil...@sympatico.ca> wrote:
> Hi Carl,
>
> My question is this: In what format is the final, typeset music stream such
> that extracting the music information only would be massively easier than
> extracting the music and layout information?

I don't believe there *is* a final, typeset music stream.  There is an input
.ly code stream, which is converted to a stream-event stream.  The
stream-event stream generates a set of grobs.  The grobs generate stencils.
The stencils are printed on the page.

IIUC, grobs have information about their cause, but stencils do not.  And
there is not a one-to-one correspondence between stencils and music events.
For example, a chord made of three dotted quarter notes will generate three
note-head stencils, one stem stencil, and one dots stencil.  But as I read
it, the XML would require three note objects, each having its own dot
attribute.  And the only layout information for the dot is whether the dot
should be above or below the staff line.

Perhaps it's possible to merge these two distinct views.  But I think that
Reinhold is exactly right, and that the only way to do it extensibly is with
XML performers that will take stream events and convert them to XML.  But
how do we synchronize the performers and the engravers (which are setting
things up to make the layout decisions)?  That's the part I don't see right
now.

Thanks,

Carl
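
To make the mismatch concrete, here is a minimal sketch, not Reinhold's
actual proposal, of a listener-style exporter written as a Scheme engraver.
It assumes the make-engraver/listeners Scheme interface available in newer
LilyPond releases, and the <note/> element and its attributes are invented
for illustration (the printed values are just the raw Scheme pitch and
duration objects, not proper XML text).  Run on the dotted chord described
above, it prints three <note/> elements, one per note event, even though
the engraved result is three note heads sharing one stem, plus the dots.

\version "2.24.0"

%% Sketch only: a Scheme engraver that listens to note events and prints
%% a made-up <note/> element for each, ignoring layout entirely.
\layout {
  \context {
    \Voice
    \consists
    #(make-engraver
      (listeners
       ((note-event engraver event)
        (format #t "<note pitch=\"~a\" duration=\"~a\"/>\n"
                (ly:event-property event 'pitch)
                (ly:event-property event 'duration)))))
  }
}

%% One chord of three dotted quarter notes: three note events in the
%% stream, but three note heads, one stem, and dots in the engraving.
{ <c' e' g'>4. }

The synchronization question remains, though: this listener just runs
alongside the engravers, and nothing here coordinates its output with the
layout decisions they are making.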