> Thanks for your reply. Regarding scheme code: it seems to me that the 
> simplest solution would be to modify event-listener.ly so that instead of 
> storing point-and-click data (row and column within the lily file), it 
> instead stores the pixel coordinate X, Y of the note’s (or rest’s) glyph 
> within the output PNG image. I have tried that, but so far my efforts are not 
> successful, due to my weak understanding of the language and LilyPond’s data 
> architecture.
>
> I would happily accept a “cookbook” solution for the above, as this would 
> spare me countless hours of possibly futile effort.

Any reason not to reuse the code I linked inside ly2video as a cookbook 
solution? Anything implementing this sort of functionality is basically bound 
to use the same kind of technique. Note that you cannot extend 
event-listener.ly to output coordinates: event listeners in engravers run 
during the translation phase, when LilyPond converts the music into a network 
of graphical objects (grobs), long before those objects are placed on the 
page. It may help to read 
[this](https://extending-lilypond.gitlab.io/en/extending/intro.html#overview-of-lilypond-s-inner-workings-and-how-you-might-hook-in-them) 
if you want to understand the different phases of compilation. What you want 
are grob callbacks, which is what that code does; a minimal sketch of the idea 
follows.
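
To make that concrete, here is a minimal, untested sketch (not the ly2video 
code itself): it hooks NoteHead.after-line-breaking and prints each note 
head's offset relative to its System. Keep in mind these are staff-space 
coordinates, not PNG pixels, and the vertical layout may not be completely 
final at this point.

```lilypond
%% Minimal sketch, untested -- not the ly2video code itself.
%% Hook every NoteHead grob and print where it ended up on its System.
%% Caveats: these are staff-space coordinates relative to the System,
%% not PNG pixels; converting to pixels additionally needs the page
%% geometry and the -dresolution you render the PNG with.
#(define (print-notehead-coords grob)
   (let* ((system (ly:grob-system grob))
          (x (ly:grob-relative-coordinate grob system X))
          (y (ly:grob-relative-coordinate grob system Y))
          (cause (ly:grob-property grob 'cause)))
     (format #t "NoteHead: x=~a y=~a (staff spaces, relative to System)~a~%"
             x y
             (if (ly:stream-event? cause)
                 (format #f ", pitch ~a" (ly:event-property cause 'pitch))
                 ""))))

\layout {
  \context {
    \Voice
    \override NoteHead.after-line-breaking = #print-notehead-coords
  }
}
```

From there, mapping staff spaces to PNG pixels is just arithmetic once you 
know the paper margins, staff size and the output resolution.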


> Re MusicXML, can you recommend a FOSS MIDI to MusicXML converter that runs on 
> Windows?

Have you tried importing the MIDI into MuseScore and then exporting MusicXML? 
Last time I looked, that seemed to be the most accurate FOSS option.
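
If you would rather script it than click through the GUI, MuseScore can also 
batch-convert with its -o/--export-to option; roughly like this on Windows 
(the executable name and install path depend on your MuseScore version, so 
treat this as a sketch):

```
"C:\Program Files\MuseScore 4\bin\MuseScore4.exe" -o output.musicxml input.mid
```

The export format is chosen from the extension of the -o filename.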
