Werner LEMBERG <w...@gnu.org> writes:

> I was quite impressed seeing the following video.
>
> https://www.facebook.com/verge/videos/975966512439692/
>
> If writing music like this *really* works, I could imagine that even I
> would use a computer instead of writing on paper...
Well, the video benefits a lot from its cutting: the average scene lasts only a few seconds and mostly shows a single element. It's obvious that a significant amount of menu selection goes on between the scenes, and since we don't get a live presentation, it's anybody's guess just how high the actual recognition rate is (as with anything based on handwriting, speech, or music recognition). People working with MIDI input methods for LilyPond, or with any of the other recognition technologies mentioned, know that _correcting_ a 90% result may easily take as much time as entering the material manually in the first place.

Now this StaffPad actually looks like a good fit for the _correction_ pass: the graphical input method would seem to make jumping between problematic locations and adding articulations (very hard to determine from MIDI input) reasonably straightforward, and navigation is probably the most time-consuming element of a correction pass. The initial entry pass, on the other hand, is still likely done best linearly with an actual instrument.

What we haven't seen in the video, however, is correction of overall mistiming, and that's likely the most troublesome part of cleaning up results derived from actual MIDI input. Music derived from scanned sheet music is likely a better starting point for a graphical correction pass.

--
David Kastrup
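To put illustrative numbers on the correction-vs-entry claim above (all figures are assumptions for a back-of-envelope sketch, not measurements):

    1000 notes entered by hand at ~2 s per note           ~ 33 min
    100 misrecognized notes (a 90% recognition rate),
    each taking ~20 s to locate, correct and recheck      ~ 33 min

With a per-error cost around ten times the per-note entry cost, a 10% error rate already consumes the entire saving; that is the break-even point the claim alludes to.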
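As for mistiming: any quantizing importer has to map imprecise performance timing onto exact note values, and small deviations come out as fussy dotted rhythms, rests and ties. A made-up example of what output from a tool like midi2ly can look like, next to the intended reading (the exact durations here are invented for illustration):

    % hypothetical importer output: a slightly short first note and a
    % late entry turn into double dots, rests and a tie
    \relative c' { c4.. r16 d8. r16 e8~ e8 }

    % the passage the player actually meant
    \relative c' { c2 d4 e4 }

Fixing this graphically still means rewriting the rhythm of the whole passage rather than nudging a single note, which is what makes mistiming the troublesome case.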