Absolutely! At the moment I am setting up for a couple of projects. One is a collection of guitar music (hence the other thread; it's not going to be super large, but I want it to look as beautiful as I can). The other is more of a scale-out kind of thing: I'm seeing if I can help put together a biggish collection of jazz sheets, like a Real Book, based on the OpenBook corpus. I have a template based on some of Abraham Lee's work, and I'm working on infrastructure to assemble the collection as flexibly as possible, plus the apparatus to generate (possibly in TeX through lilypond-book?) things like author lists, genre lists and such. Lots for me to learn. Atm I'm stuck trying to use \bookpart from inside a Scheme procedure, and it's not going too hot, I must say.
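For concreteness, this is roughly the shape of what I'm attempting (a sketch only, and precisely the part that isn't working for me yet): build the \bookpart inside the embedded-LilyPond syntax #{ ... #} from a Scheme function and splice it back in at top level. The name makePart, the title, and the snippet of music are all made up for illustration:

```lilypond
% Sketch only (LilyPond 2.20+ style; older versions also need
% explicit parser/location arguments). makePart takes a title
% string and a music expression and returns a \bookpart built
% inside the embedded-LilyPond syntax #{ ... #}, with $ splicing
% the Scheme values back into the LilyPond input.
makePart =
#(define-scheme-function (title music) (string? ly:music?)
   #{
     \bookpart {
       \header { title = $title }
       \score { $music }
     }
   #})

music = { c'4 d' e' f' }

% Intended usage at top level -- this is the step that may need
% adjusting, depending on how toplevel book parts are handled:
\makePart "Some Tune" \music
```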
Once I'm done with these, if there's still interest, I could see if I can help with this stuff. I like parsey things, fwiw. The idea of parsing Lisp came from imagining you could scrape the .scm source files to build a database of callables and their signatures, then use that to guide the highlighting of examples found in the docs. I wasn't aware of your other script.

L

On Mon, 21 Feb 2022, 19:57 Jean Abou Samra, <j...@abou-samra.fr> wrote:

> On 21/02/2022 at 19:17, Luca Fascione wrote:
> > I haven't worked with Texinfo markup before, however it occurs to me
> > that Lisp is regular enough that, with some effort, one could hope to
> > scrape out a majority of the function definitions
> > and then use such a database to touch up the help source?
>
> Not sure I understand the link with Scheme/Lisp, but
> if you want such an autogenerated database, you can grab
> this script:
>
> https://github.com/pygments/pygments/blob/master/external/lilypond-builtins-generator.ly
>
> It's the source of the lists of builtins in Pygments.
> (I hope to integrate some form of it in core LilyPond
> at some point so other tools like Frescobaldi could use
> it as well, but I have been too busy lately.)
>
> > Like if you imagine a strategy like this:
> > - scrape out what you can with a script (targeting to find 90% or so
> >   of what's there)
> > - add a hand-curated exception list (which mops up the rest)
> > - use this stuff to find and 'parse' the contents of the help so that
> >   you can then transform it into something else;
> >   this could give you some 90-95% of the source revised
> > - mop up the result again by hand
> >
> > If this were a one-off affair, it could be a way to go.
> > It sounds more painful than it often ends up being; the key is to
> > find a good balance between how robust your scrapers are and how much
> > manual effort it takes to go back and mop things up.
> > I know the docs for LilyPond are a huge set, and I'm not sure how
> > translations are implemented.
> > I'm not suggesting now is a good time to do this, however if one
> > were to consider such a thing, this seems like it could be a way to do it,
> > purely because Lisp-y things are easy to parse, which makes them
> > relatively robust for detecting decorations such as @var{}.
>
> Unlike its fellow extension language, LilyPond is not
> easy to parse *at all* (just glance at lily/parser.yy),
> but it is true that Texinfo is easy to parse. I'm not sure
> how robust such a script could be; only experience can tell.
>
> Well, and we're all volunteers here. Feel free to work on
> it :-) (Especially since it's a task that can be done with
> little prior knowledge of the code base.)
>
> Cheers,
> Jean
>
> > I've used pygmentize in other projects and it can look quite
> > beautiful, once you get it going.
> > I like how it's able to provide a unified look to a number of
> > different languages, making the final result
> > look consistent while making it clear which language is which.
> > (I've done a fair bit of LaTeX over the years.)
> >
> > Luca
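To make the scraping idea concrete, a first pass could look something like this in Guile Scheme (a sketch only, untested against the real tree; the file path is just an example): read each top-level form from an .scm file and keep the heads of define forms as (name . formals) signatures. The hand-curated exception list would then cover whatever the reader chokes on, plus anything defined through macros the scanner doesn't know about.

```scheme
;; Sketch: collect (name . formals) heads of top-level define
;; forms from a Scheme source file, as a first pass toward a
;; database of callables and their signatures.
(define (scrape-defines port)
  (let loop ((form (read port)) (acc '()))
    (if (eof-object? form)
        (reverse acc)
        (loop (read port)
              ;; Keep only (define (name args ...) ...) shapes;
              ;; plain (define name value) bindings are skipped.
              (if (and (pair? form)
                       (memq (car form) '(define define* define-public))
                       (pair? (cadr form)))
                  (cons (cadr form) acc)
                  acc)))))

;; Example usage (path for illustration only):
;; (call-with-input-file "scm/music-functions.scm" scrape-defines)
;; => ((name1 arg1 arg2) (name2 . args) ...)
```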