Re: LilyPond 2.23.6 released
Hi Thomas,

maybe this can be handy: the `moreutils` package has a utility called `ts` that prepends a timestamp to each line of output. If you pipe the output of your compilation into it, you get timing information quite easily. Here's an example:

% ls | ts -s "%H:%M:%.S"
00:00:00.13 _config
00:00:00.000546 myfavoritethings_ob-cm.pdf
00:00:00.000567 myfavoritethings_ob-em.pdf
00:00:00.000580 openbook_v1-1.pdf
00:00:00.000591 openbook_v1.ly
00:00:00.000602 openbook_v1.pdf
00:00:00.000613 src

Here I'm using "-s" to mean "time relative to the start time", and "%.S" in the format to mean "print subsecond timestamps". (As your runs are sub-minute, you can probably use "%M:%.S" if you feel like saving some 0's on the left.)

You can capture stderr as well by adding `2>&1` before the pipe, like this: "... 2>&1 | ts ..."

The moreutils package is available in most Linux distributions but doesn't appear to be there by default on macOS. Our friends on Stack Exchange indicate that with Homebrew you'd use `brew install moreutils` to get the package (I just tried this, it worked fine for me); on MacPorts I think it's something like `sudo port install moreutils` (but I don't use MacPorts, so I haven't tested this one).

HTH,
Luca

On Sun, Feb 20, 2022 at 9:26 AM Thomas Scharkowski < t.scharkow...@t-online.de> wrote:
> Quick answer: I can notice the speed difference in the terminal output
> from the very start.
> I will try the diff later.
> Thomas
>
> > Am 19.02.2022 um 20:12 schrieb Jean Abou Samra :
> >
> > Le 19/02/2022 à 16:41, Thomas Scharkowski a écrit :
> >> One more test:
> >> I installed both versions on a 2013 iMac intel Core i5.
> >>
> >> 24,3“ MacPorts version
> >> 51,4“ gitlab version
> >
> > OK, bear with me. Can you please take the following steps
> > and report results? It is really important for us to identify
> > what is causing this slowdown.
> > First, does it look like one step of the compilation in particular
> > is slow (for example, it takes time during "Preprocessing graphical
> > objects"), or do all steps take more time? Pro tip: run lilypond with
> > --verbose to have a more fine-grained view of the process.
> >
> > Second, edit the file lily.scm (should be under
> > /path/to/lilypond-2.23.6/share/lilypond/2.23.6/scm/lily)
> > to apply the following diff:
> >
> > @@ -844,8 +844,11 @@ PIDs or the number of the process."
> >
> > +(use-modules (statprof))
> > +
> >  (define-public (lilypond-main files)
> >    "Entry point for LilyPond."
> > +  (statprof (lambda ()
> >    (eval-string (ly:command-line-code))
> >    (if (ly:get-option 'help)
> >        (begin (ly:option-usage)
> > @@ -927,7 +930,7 @@ PIDs or the number of the process."
> >        (ly:exit 1 #f))
> >      (begin
> >        (ly:exit 0 #f)
> > -
> > +))
> >  (define-public (session-start-record)
> >    (for-each (lambda (v)
> >      ;; import all public session variables natively into parser
> >
> > Namely, add "(use-modules (statprof))" before the definition of
> > lilypond-main, "(statprof (lambda ()" after the line "Entry point
> > for LilyPond", and "))" after the function. This makes LilyPond
> > run under a profiling tool and report output at the end of the
> > compilation. Now run once with GUILE_AUTO_COMPILE=1, to recompile
> > lily.scm. The second run will be clean (all files byte-compiled);
> > can you report the big list of timings that is printed at the end?
> >
> > Thanks in advance,
> > Jean
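For readers who want to see what the `ts -s` output above means without installing moreutils: the core of it is just "elapsed time since start, prepended to each line". Here is a minimal Python sketch of that idea (an illustrative stand-in, not the real tool; `ts_s` is a made-up name, and only a "%M:%.S"-style stamp is modeled):

```python
import time

def ts_s(lines, clock=time.monotonic):
    """Prepend an elapsed-time stamp, roughly like `ts -s "%M:%.S"`."""
    start = clock()
    for line in lines:
        # divmod splits the elapsed seconds into whole minutes + remainder
        minutes, seconds = divmod(clock() - start, 60)
        yield f"{int(minutes):02d}:{seconds:09.6f} {line}"
```

In spirit, piping through it is `for line in ts_s(sys.stdin): print(line, end="")`; the real `ts` additionally understands the full strftime-style format strings shown above.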
Re: Setting up classical guitar fingerings
So... would anybody be able to lend a hand here please?

Many thanks
Luca

On Sat, Feb 12, 2022 at 7:49 PM Luca Fascione wrote:
> Hello,
> sorry for the double-post, I'm unsure whether this should go to -user or -devel.
>
> I'm looking for some guidance to set up fingering on classical guitar sheets.
>
> I am attaching a simple piece of music, with two engraving sets (measures 1-5 and 6-10), one "as-is" from lilypond, the other using some trickery involving one-note chords, purely to show a sample of the result I'm after (an approximation of it), vs what I get at the moment.
>
> Measures 1-5 in the source look like what I intend to type, but the result has a number of engraving defects I don't understand (you can see the beams don't avoid the fingerings, nor are they located correctly wrt the accidentals; the second beat of measure 5 illustrates this well). I'm not super in love with measure 10 either, but if I understand the docs correctly, the issue there is that the 'offset' correction is applied post-layout, and so naturally it won't back-affect the placement of the beams.
>
> I have made several other experiments, I'm just not wanting to waste people's time. But setting Fingering.side-axis = #X seems somewhat promising, although it seems unable to find any usable Y data about the parents, and smashes all numbers onto the B line, as well as not dealing with accidentals.
>
> I have an engraving project in front of me, for which I'm more than happy to put in the time to contribute the code for a proper solution myself, and I really don't want to make poor use of time from folks busy with other work, but I feel I'll need some level of guidance as to what to do. For context, I can do C++ and I can manage guile ok (I'm a software engineer for work, mostly working in the field of computer graphics).
>
> I was looking into this problem several years ago also, and Han-Wen Nienhuys at the time suggested I should use a positioning callback attached to the Fingering grobs, but I couldn't find a way to do such a thing (in particular I can't find what property to use for this). So far I've traced the Fingering system to being an instance of the Articulations/Scripts system, but that's as far as I got.
>
> It seems to me what's needed would be to decide where the heads go, then the accidentals, at this stage deal with the fingering, and only then would there be enough bboxes to reason about the beaming (this is the skyline concept, I think). In reasoning about how Articulations are engraved, it's possible the order of events for fingering would be different from the order of events for other articulations (which I think are laid out after beams are in place, if I am not mistaken), warranting a bigger change, but I have no idea where that is located/managed.
>
> Many thanks for your time,
> Luca
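The layered placement sketched above (heads, then accidentals, then fingerings, and only then beams) is essentially what a skyline enables: each new box is stacked on the running maximum of the tops of the boxes already placed. A toy model of that idea (this is not LilyPond's actual Skyline class, just an illustration with made-up names; y grows upward, segments are (x_start, x_end, top)):

```python
def skyline_top(segments, x0, x1):
    """Highest top among skyline segments overlapping the interval [x0, x1]."""
    tops = [top for (a, b, top) in segments if a < x1 and b > x0]
    return max(tops, default=0.0)

def add_box(segments, x0, x1, height):
    """Stack a box of the given height onto the skyline; return its bottom y."""
    y = skyline_top(segments, x0, x1)
    segments.append((x0, x1, y + height))
    return y
```

With note heads and accidentals already in `segments`, placing a fingering with `add_box` raises the skyline, so anything placed afterwards (a beam, in this thread's scenario) would be pushed further out.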
Re: Setting up classical guitar fingerings
Thanks Jean,

I thought a somewhat more complete example of the configurations I'm looking at would help give a sense of the scope of the problem, and also that the solution would be an easy "do this" or "look here" kind of answer. My concern with a tiny example is that it risks creating a rather longer chain of emails about "what about this case, and that case", which can then turn into a "why didn't you tell me before you needed all these things" kind of discussion. I'd have thought that is also poor use of people's time, you see. That being said, I really appreciate you spending the time to do this, and I'll try to keep my examples shorter in my future questions. And please, just tell me and I'll do the legwork of trimming things down; I completely agree with you that your time can and should be put to better use than deleting my source.

Going back to the example itself, there are two questions in flight here:

a) I'm looking for a way to get the fingerings where I want them without using one-note-chord tricks

b) I'm also looking to position fingerings a bit more finely than what the second piece shows (as you can see, while some fingerings look passable, others are not well placed, for various reasons): referring to the longer example in the pdf, in general it's clear that the beaming layout is not taking the fingerings into account; see bar 6, the high e-0, or the slur in bar 9. The beams should automatically be set a bit looser than they are, to make it all breathe more openly, at least to my taste. Another example of a different kind is the eis-3 in bar 10: the 3 sits awkwardly in the f-space, which makes it hard to read; I feel nudging it elsewhere would make it easier to see (in this case I'd probably pick a half-space up).

I feel there's some overlap in the two answers though (fingerings need to deal with accidentals, for example).
For a) I've reworked your example into what's below:

\version "2.22.1"

\layout {
  \context {
    \Voice
    \override Fingering.X-offset = #0.5
    \override Fingering.parent-alignment-X = #-1
    %%\override Fingering.self-alignment-X = #1
    %%\override Fingering.self-alignment-Y = #-1
    %%\override Fingering.side-axis = #X
    \override Fingering.staff-padding = #'()
    \override Fingering.add-stem-support = ##f
  }
}

% this is what I want to type
target = \relative {
  c'16-3 d-3 e-3 g-3
  << { c,16 e'32 g-4 g16 b } \\ { d,,,8.-0 dis''16-3 } >>
}

% this is something that currently gets some of the way there
reference = \relative {
  16
  << { c,16 e'32 g16 b } \\ { 8. 16 } >>
}

{
  \target
  \set fingeringOrientations = #'(left)
  \override Staff.Fingering.extra-offset = #'(0.125 . 0.5)
  \reference
}

As I said, I'm more than happy to write code; I'm not necessarily looking for a "simple" solution. If the answer is "it's a big change involving steps a to f", I'm happy to have at it, under somebody's guidance. Given that solving this problem is a need of mine, I feel it's completely fine that it ends up being my cost to fix it; all I'm looking for is a few breadcrumbs.

Thanks again,
L

On Sun, Feb 20, 2022 at 9:54 PM Jean Abou Samra wrote:
> Le 20/02/2022 à 21:17, Luca Fascione a écrit :
> > So... would anybody be able to lend a hand here please?
> >
> > Many thanks
> > Luca
>
> It would be helpful if you provided smaller examples.
> I'm not saying this as a reprimand, but as friendly
> advice on how to get people to help you. Personally,
> I had started experimenting with the problem when you
> first asked, but since understanding all the code already
> took me too much of the limited time I can spend for
> answering questions on -user, I stopped at that point.
> Here is what I would give as an example:
>
> \version "2.22.1"
>
> \layout {
>   \context {
>     \Voice
>     \override Fingering.X-offset = #0.5
>     \override Fingering.parent-alignment-X = #-1
>     %%\override Fingering.self-alignment-X = #1
>     %%\override Fingering.self-alignment-Y = #-1
>     %%\override Fingering.side-axis = #X
>     \override Fingering.staff-padding = #'()
>     \override Fingering.add-stem-support = ##f
>   }
> }
>
> music = \relative {
>   16
>   <<
>     { c,16 e'32 g16 b }
>     \\
>     { 8. 16 }
>   >>
> }
>
> {
>   \music
>   \set fingeringOrientations = #'(left)
>   \override Staff.Fingering.extra-offset = #'(0.125 . 0.5)
>   \music
> }
>
> That's longer than most examples posted on this list, but
> much shorter than the original. It probably doesn't make
> any sense musically, but that is not the point. It encompasses
> (I think) all of the problems raised by the original,
> and can be grasped much quicker.
>
> Doing that, enough time has passed that I already need to
> be doing something else, so I will look into solutions
> later :-)
>
> All the best,
> Jean
Re: Setting up classical guitar fingerings
Hi Valentin, thank you, this is super interesting. There's a lot of information in there I want to read more carefully, but for the moment I have one question: when is after-line-breaking invoked? Or actually, a better question: where do I go to discover when (and I guess by what) after-line-breaking is invoked?

Another thing I don't follow at the moment is the 'engraver' variable in the scheme engraver you wrote: where does that come from? (I suspect it's some kind of name available where the engraver is invoked, but again: how would I go about discovering this?)

Many thanks, this is very helpful
Luca

On Sun, Feb 20, 2022 at 11:07 PM Valentin Petzel wrote:
> Hello,
>
> our problem here is that such things as the positioning of beams are not
> known for quite some time. But we could use something like after-line-breaking
> to adjust the results. Somewhat like here.
>
> Valentin
>
> Am Sonntag, 20. Februar 2022, 21:17:31 CET schrieb Luca Fascione:
> > So... would anybody be able to lend a hand here please?
> > [...]
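To Luca's question about when after-line-breaking runs: the ordering can be modeled as a toy two-phase pass (purely illustrative; `toy_layout` and the dict-based "grobs" are made up for this sketch, not LilyPond API). Each grob's before-line-breaking callback fires first, then line breaks are computed from the spacing approximation, then after-line-breaking fires so grobs can react to the final breaks:

```python
def toy_layout(grobs, compute_breaks):
    """Toy model of the callback ordering around line breaking."""
    for g in grobs:
        cb = g.get("before-line-breaking")
        if cb:
            cb(g)
    breaks = compute_breaks(grobs)   # spacing approximation + break choice
    for g in grobs:
        cb = g.get("after-line-breaking")
        if cb:
            cb(g)                    # grobs may now react to the final breaks
    return breaks
```

This is why Valentin's approach (further down the thread) hooks after-line-breaking: only at that point are beaming and stem lengths settled enough to measure against.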
Re: Setting up classical guitar fingerings
Hi Thomas,
thanks for your comment, this helps me refine my understanding of what's going on.

At the same time, while I do see that for other articulations (fermata, appoggiato) this parenting scheme works very well, I remain wondering whether, for the style of layout of the fingering indications that I am after, the appropriate thing to do could be to change the parenting altogether.

If we look at chord for a second, I see the thing as a trick because, to me, even for proper chords the whole FingeringColumn idea is also a weird concept: imagine you're in, say, C major, and you're laying out fingering on the left of a chord like Fm : I'm very unclear whether the most readable solution is to have the fingerings stacked one atop the other in a column (thereby more distant from f and c because of the intervening flat on the aes), or if instead the fingerings on f and c should be set tighter to their corresponding note heads and just the aes fingering be displaced left horizontally, to allow for the flat. I would like to experiment with the various possibilities there, visually. I suppose you could still displace horizontally inside the column, and then push it all inwards closer to the chord, even if the bboxes will overlap a bit... I anticipate issues such as making sure the fingering for c' doesn't interfere with the ascender on the flat glyph, also.

Which brings me to a question: what consequence would it have to replace the X-parent and Y-parent of the fingering with the NoteHead instead? (I guess there will be a need to deal with the accidentals at a minimum.) And also: how would I go about discovering these consequences without using too much of you guys' time?

Thanks again,
Luca

On Mon, Feb 21, 2022 at 1:22 AM Thomas Morley wrote:
> Am So., 20. Feb. 2022 um 22:41 Uhr schrieb Luca Fascione < l.fasci...@gmail.com>:
> > a) I'm looking for a way to get the fingerings where I want them without
> > using one-note-chord tricks
>
> Well, for Fingerings not in chord, like b-1 or -2-1, the X-parent
> is NoteColumn _not_ NoteHead, and the Y-parent is VerticalAxisGroup.
> There is no direct way from NoteHead to Fingering and vice versa.
>
> Thus putting Fingering in-chord is unavoidable, imho, even for single notes.
> It is _not_ a trick, but a requirement.
>
> Furthermore, you say you set music for classical guitar, so chords
> will happen anyway, although not in your example.
> Please note, as soon as more than one in-chord Fingering is present a
> FingeringColumn is created. Which will make things even more complicated.
> See
> https://gitlab.com/lilypond/lilypond/-/issues/6125
> https://gitlab.com/lilypond/lilypond/-/merge_requests/732
>
> Sorry to be of not more help,
> Harm
Re: Setting up classical guitar fingerings
But wouldn't you finger that as ? (Didn't check the numbers, I'm just meaning going infix vs postfix.)

I can see that this idea of mine does have issues for fingering your way around (which seems to me is more of a fingering-atop thing, like you would have in a keyboard score)

L

On Mon, 21 Feb 2022, 12:32 Valentin Petzel, wrote:
> Hello Luca,
>
> changing the X-parent to the NoteHead would mean that we are aligning the
> Fingering horizontally wrt. the NoteHead instead of the whole NoteColumn.
> This would then mean that if, for example, due to some chord some note heads
> are on the other side of the Stem, the alignment of something like -1-2-3
> would change (disregarding that it wouldn't even be clear what note head to use).
>
> Cheers,
> Valentin
>
> Am Montag, 21. Februar 2022, 09:19:30 CET schrieb Luca Fascione:
> > Hi Thomas,
> > thanks for your comment, this helps me refine my understanding of what's
> > going on.
> > [...]
Re: Setting up classical guitar fingerings
This is neat! Thanks Valentin, your explanation is very clear.

Question: I would have thought it should be the fingering mark that pushes the beams away, not the other way around; I'm expecting it's uncool to go rummage in the setup of the beams/stems in before-line-breaking? Or is this how that handshake happens?

Another concept that seems related in this thread is the skyline: am I right that this is the running Y-direction minmax of the NoteHead+accidentals? If I "just" made the skyline include the fingering marks, would this push the beams up? Or am I just way off my rocker?

L

On Mon, 21 Feb 2022, 12:27 Valentin Petzel, wrote:
> Hello Luca,
>
> A scheme engraver follows the concept of a closure, so it is some sort of
> function that returns different values on different arguments. This is
> somewhat the functional approach to OOP. So an engraver can be seen as an
> object that has some methods, which (as some sort of callback) need to be
> passed a reference to the actual engraver. So for example an acknowledger
> is a function that takes as arguments the engraver itself, the acknowledged
> grob and the original engraver producing the grob.
>
> About after-line-breaking: Spacing is a kind of problematic thing. Spacing
> might rely on line breaking, but line breaking might rely on spacing. Thus
> Lilypond first creates some sort of spacing approximation, it then
> calculates the line breaking, and then finalizes the spacing.
>
> Our problem is that something like Beaming, and thus stem length (on which
> we want to depend our spacing), are only really fixed after line breaking.
>
> For such things each grob has two callbacks, before-line-breaking and
> after-line-breaking, that are called on the grob before calculating the
> line breaking and after calculating the line breaking. Using this we can
> tweak the grobs after the line breaking is calculated to do what we want.
>
> In this case I'm using a custom engraver to store Stems and Note Heads
> inside the properties of the Fingering grob (so that we can access them),
> and then in after-line-breaking we take the length of the stem and check
> if there is beaming on the left side; if there is, we get the lowest beam
> position and use it to estimate the height of the Beam (this does still
> get messed up by very slanted Beams; it might be useful to also get a
> reference to the Beam grob to factor in the angle of the Beam). With this
> we can estimate the free space between NoteHead and Beam, and depending
> on this space, shift the Fingering grob.
>
> Cheers,
> Valentin
>
> Am Montag, 21. Februar 2022, 08:58:36 CET schrieb Luca Fascione:
> > Hi Valentin, thank you, this is super interesting. There's a lot of
> > information in there I want to read more carefully,
> > but for the moment I have one question: when is after-line-breaking
> > invoked?
> > [...]
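The free-space estimate Valentin describes (stem length, minus the beam, versus the room the fingering needs) boils down to a small piece of arithmetic. A sketch in staff-space units with made-up names, not the actual LilyPond callback:

```python
def estimate_fingering_shift(stem_length, beam_thickness, finger_height,
                             padding=0.5):
    """Return how far (in staff spaces) a fingering must move to clear a beam.

    Free space under the beam is approximated as the stem length minus the
    beam's own thickness; if the fingering plus some padding does not fit,
    report the deficit, else 0. Slanted beams would need the beam angle too,
    as noted in the thread.
    """
    free = stem_length - beam_thickness
    return max(0.0, finger_height + padding - free)
```

So a 3.5-space stem leaves plenty of room and no shift, while a short 1.5-space stem forces the fingering outwards by the missing half space.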
Re: Comments wanted on code highlighting in PDF output
Looks lovely to me. I notice the inline source is not highlighted; is that on purpose? (Say 2.1.7, page 23.) A lot of other text I've seen seems to use the same highlighting patterns for running code as well as for display boxes of code, and especially given the fonts you picked are so regular in weight, wouldn't it look better?

L

On Mon, Feb 21, 2022 at 3:50 PM Werner LEMBERG wrote:
>
> Folks,
>
> Merge request
> https://gitlab.com/lilypond/lilypond/-/merge_requests/1210
> is now mature enough to produce LilyPond documentation with B/W syntax
> highlighting of LilyPond code in PDF output.[*] You can find the
> Learning Manual as an example at
>
> https://gitlab.com/lilypond/lilypond/uploads/fe6850298173b29d743742f51018235b/learning.pdf
>
> Please comment!
>
> Werner
>
> [*] For release-technical reasons it will take some time until this
> gets added to the git repository, though.
Re: Setting up classical guitar fingerings
I suspect we might be saying the same thing, Valentin?

I was saying infix can be a bit awkward if you want 'pianist' chord fingering (just a stack of numbers above or below), and that your original -1-2-3 reads quite nicely (as in: it's easy to see in your head what you will get in the engraving just by looking at the source). So a keyboard person wouldn't want to use infix, I don't think.

Whereas a guitar person might find it more attractive to use, because it's easier to keep straight in your head what fingers you use on what note that way.

L

On Mon, Feb 21, 2022 at 5:42 PM Valentin Petzel wrote:
> No, not necessarily. If we want all Fingerings on top or below there is
> no real benefit in doing the chord thing. In fact doing that leads to the
> exact same issue of the fingering for d being next to the other ones.
>
> Cheers,
> Valentin
>
> 21.02.2022 12:38:40 Luca Fascione :
>
> But wouldn't you finger that as ? (Didn't check the numbers,
> I'm just meaning going infix vs postfix.)
> [...]
Re: Setting up classical guitar fingerings
Yes exactly, because of how our finger-to-note relation works, the enhancement in readability with the indication right at the head is enormous.

L

On Mon, 21 Feb 2022, 18:16 Valentin Petzel wrote:
> Sure. I suppose for a guitar person having stacked fingerings on top
> would be rather confusing, as there is no monotonic relation between
> finger and pitch. As such I suppose guitar people would want to use
> fingerings with left or right orientations in chords anyway.
>
> Cheers,
> Valentin
>
> Am Montag, 21. Februar 2022, 17:47:58 CET schrieb Luca Fascione:
> > I suspect we might be saying the same thing, Valentin?
> >
> > I was saying infix can be a bit awkward if you want 'pianist' chord
> > fingering (just a stack of numbers above or below), and that your
> > original -1-2-3 reads quite nicely (as in: it's easy to see in your
> > head what you will get in the engraving just by looking at the
> > source). So a keyboard person wouldn't want to use infix, I don't think
> >
> > Whereas a guitar person might find it more attractive to use g-3>
> > because it's easier to keep it straight in your head what fingers you
> > use on what note that way
> >
> > L
> >
> > On Mon, Feb 21, 2022 at 5:42 PM Valentin Petzel wrote:
> > > No, not necessarily. If we want all Fingerings on top or below there
> > > is no real benefit of doing the chord thing. In fact doing that leads
> > > to the exact same issue of the fingering for d being next to the
> > > other ones.
> > >
> > > Cheers,
> > > Valentin
> > >
> > > 21.02.2022 12:38:40 Luca Fascione :
> > >
> > > But wouldn't you finger that as ?
(Didn't check the > number, > > > I'm just meaning going infix vs postfix) > > > > > > I can see that this idea of mine does have issues for fingering your > way > > > around (which seems to me it's more of a fingering atop thing, like you > > > would have in a keyboard score) > > > > > > L > > > > > > On Mon, 21 Feb 2022, 12:32 Valentin Petzel, > wrote: > > >> Hello Luca, > > >> > > >> changing the X-parent to the NoteHead would mean that we are aligning > the > > >> Fingering horizontally wrt. the NoteHead instead of the whole > NoteColumn. > > >> This > > >> would then mean that if for example due to some chord some note heads > are > > >> on > > >> the other side of the Stem the alignment of something like g>-1-2-3 > > >> would > > >> change (disregarding that it wouldn’t even be clear what note head to > > >> use). > > >> > > >> Cheers, > > >> Valentin > > >> > > >> Am Montag, 21. Februar 2022, 09:19:30 CET schrieb Luca Fascione: > > >> > Hi Thomas, > > >> > thanks for your comment, this helps me refine my understanding of > > >> > > >> what's > > >> > > >> > going on. > > >> > > > >> > At the same time, while I do see that for other articulations > (fermata, > > >> > appoggiato) this parenting scheme works very well, > > >> > I remain wondering whether for the style of layout of the fingering > > >> > indications that I am after, the appropriate thing to do could be to > > >> > > >> change > > >> > > >> > the parenting altogether. 
> > >> > > > >> > If we look at chord for a second, I see the thing > as a > > >> > trick because to me even for proper chords the whole FingeringColumn > > >> > > >> idea > > >> > > >> > is also a weird concept: imagine you're in say C major, and you're > > >> > > >> laying > > >> > > >> > out fingering on the left of a chord like Fm : I'm very > > >> > > >> unclear > > >> > > >> > whether the most readable solution is to have the fingerings stacked > > >> > > >> one > > >> > > >> > atop each other in a column (thereby more distant from f and c > because > > >> > > >> of > > >> > > >> > the intervening flat on the aes) or if instead the fingerings on f > and > > >> > > >> c > > >> > > >> > should be set tighter t
Re: Comments wanted on code highlighting in PDF output
I haven't worked with Texinfo markup before; however, it occurs to me that Lisp is regular enough that with some effort one could hope to scrape out a majority of the function definitions, and then use such a database to touch up the help source. Like if you imagine a strategy like this:
- scrape out what you can with a script (aiming to find 90% or so of what's there)
- add a hand-curated exception list (which mops up the rest)
- use this stuff to find and 'parse' the contents of the help so that you can then transform it into something else; this could give you some 90-95% of the source revised
- mop up the result again by hand

If this were a one-off affair, it could be a way to go; it sounds more painful than it often ends up being, the key being to find a good balance between how robust your scrapers are vs how much manual effort it takes to go back and mop things up.

I know the docs for LilyPond are a huge set, and I'm not sure how translations are implemented. I'm not suggesting now is a good time to do this; however, if one were to consider such a thing, this seems like it could be a way to do it, purely because Lisp-y things are easy to parse, which makes them relatively robust for detecting decorations such as @var{}.

I've used pygmentize in other projects and it can look quite beautiful once you get it going. I like how it's able to provide a unified look to a number of different languages, making the final result look consistent while making it clear what language is what. (I've done a fair bit of LaTeX over the years)

Luca

On Mon, Feb 21, 2022 at 6:33 PM Jean Abou Samra wrote:
> Le 21/02/2022 à 17:42, Luca Fascione a écrit :
> > Looks lovely to me.
> >
> > I notice the inline source is not highlighted, is that on purpose?
> > (say 2.1.7, page 23). A lot of other text I've seen seems to use the
> > same highlighting patterns for running code as well as display boxes
> > of code; esp given the fonts you picked are so regular in the weight,
> > wouldn't it look better?
>
> As with the syntax highlighting in HTML output that was already added
> (https://gitlab.com/lilypond/lilypond/-/merge_requests/1019,
> https://lists.gnu.org/archive/html/lilypond-devel/2021-12/msg00107.html
> and other threads), this is not straightforward to achieve. The problem
> is that the Texinfo source uses @code for anything that should appear
> in typewriter font. Not all uses of @code are for LilyPond input.
> 'git grep -o "@code" | wc -l' will give you an idea of the amount of
> effort that would be required to introduce a distinction ...
>
> Also, often we use @var inside @code, resulting in italics, to denote
> variadic parts (e.g.: "The syntax of @code{\relative} is
> @code{\relative @var{pitch} @var{music}}, where @var{pitch} is ...").
> If italics were used for fixed syntactic elements, there would be
> confusion between the two uses.
>
> Best,
> Jean
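To make the scraping idea from the first message concrete, and to illustrate the difficulty Jean describes, here is a minimal sketch of a span classifier. The "starts with a backslash" heuristic is purely an assumption for illustration (the hand-curated exception list would have to mop up everything else), and note that the simple regex deliberately skips spans with nested braces, which is exactly the hard `@var`-inside-`@code` case Jean points out.

```python
import re

# Matches @code{...} spans WITHOUT nested braces; spans containing
# e.g. @var{...} inside @code{...} are not matched and would need
# manual handling, as discussed in the thread.
CODE_RE = re.compile(r"@code\{([^{}]*)\}")

def classify_code_spans(texinfo_text):
    """Return (likely_lilypond, other) lists of @code{...} contents.

    Heuristic (an assumption for this sketch): LilyPond input usually
    starts with a \\command; everything else goes to the exception pile.
    """
    lilypond, other = [], []
    for body in CODE_RE.findall(texinfo_text):
        (lilypond if body.lstrip().startswith("\\") else other).append(body)
    return lilypond, other
```

Running it over a sample sentence like `Use @code{\relative} before @code{c'4}; also @code{lily.scm}.` would put `\relative` in the LilyPond pile and the other two in the exception pile, which shows how much a real exception list would have to carry.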
Re: Comments wanted on code highlighting in PDF output
Absolutely! At the moment I am setting up for a couple of projects: one is a collection of guitar music (hence the other thread; it's not going to be super large, but I want it to look as beautiful as I can). The other is more a scale-out kinda thing (I'm seeing if I can help put together a biggish collection of jazz sheets, like a real book, based on the openbook corpus). I have a template based on some of Abraham Lee's work and I'm working on infrastructure to assemble the collection in a way that is as flexible as possible. Also infrastructure and apparatus to be generated (possibly in TeX through lilypond-book?) for things like author lists, genre lists and such. Lots for me to learn. Atm I'm stuck trying to use \bookpart from inside a Scheme procedure and it's not going too hot, I must say.

Once I'm done with these, if there's still interest, I could see if I can help with this stuff. I like parsey things fwiw. The idea of parsing Lisp was because I was imagining you could scrape the .scm source files to build a database of callables and their signatures, and then use that to guide highlighting examples found in the docs. Wasn't aware of your other script

L

On Mon, 21 Feb 2022, 19:57 Jean Abou Samra wrote:
> Le 21/02/2022 à 19:17, Luca Fascione a écrit :
> > I haven't worked with Texinfo markup before, however it occurs to me
> > that Lisp is regular enough that with some effort one could hope to
> > scrape out a majority of the function definitions and then use such a
> > database to touch up the help source?
>
> Not sure I understand the link with Scheme/Lisp, but if you want such
> an autogenerated database, you can grab this script:
>
> https://github.com/pygments/pygments/blob/master/external/lilypond-builtins-generator.ly
>
> It's the source of the lists of builtins in Pygments. (I hope to
> integrate some form of it in core LilyPond at some point so other tools
> like Frescobaldi could use it as well, but I have been too busy lately).
> > > > > > Like if you imagine a strategy like this: > > - scrape out what you can with a script (targeting to find 90% or so > > of what's there) > > - add an exception list hand-curated (which mops up the rest) > > - use this stuff to find and 'parse' the contents of the help so that > > you can then transform it into something else > > this could give you some 90-95% of the source revised. > > - mop up again the result by hand > > > > If this were a one-off affair, it could be a way to go, > > it sounds more painful that it often ends up being, the key being to > > find a good balance > > between how robust your scrapers are wrt how much manual effort is to > > go back and mop things up. > > > > I know the docs for lilypond are a huge set, and I'm not sure how > > translations are implemented. > > I'm not suggesting now it's a good time to do this, however if one > > were to consider such a thing, this seems like it could be a way to do > it, > > purely because Lisp-y things are easy to parse, which makes them > > relatively robust to detecting decorations such as @var{} > > > > Unlike its fellow extension language, LilyPond is not > easy to parse *at all* (just glance at lily/parser.yy), > but it is true that Texinfo is easy to parse. I'm not sure > how robust such a script could be, only experience can tell. > > Well, and we're all volunteers here. Feel free to work on > it :-) (Especially since it's a task that can be done with > little prior knowledge of the code base). > > Cheers, > Jean > > > > > I've used pygmentize in other projects and it can look quite > > beautiful, once you get it going. > > I like how it's able to provide a unified look to a number of > > different languages, making the final result > > look consistent while making it clear what language is what. > > (I've done a fair bit of LaTeX over the years) > > > > Luca > > > >
Re: Comments wanted on code highlighting in PDF output
On Mon, Feb 21, 2022 at 9:01 PM Jean Abou Samra wrote: > Are you aware of > > https://myrealbook.vintherine.org/ > > ? > I was not, the material I was working from was the openbook project, by Mark Veltzer. He's done all the heavy work, I'm just working on how to build his stuff and make it beautiful. (and using that work as an opportunity to learn stuff). > Once I'm done with these, if there's still interest, I could see if I > > can help with this stuff. I like parsey things fwiw. > > > > The idea of parsing lisp was because I was imagining you could scrape > > the .scm source files to build a database of callables and their > > signatures and then use that to guide highlighting examples found in > > the docs. Wasn't aware of your other script > > > Yeah, Scheme (at least its Guile incarnation) has enough > reflective power that parsing it by hand is not necessary. > Yes, I've not done that much Lisp, but I have done a lot of TCL when I was younger. TCL is (very approximately) a lisp implementation with a fair few liberties, so aside from parens, ticks, commas and let, I feel relatively at home. Like it would be in lisp, parsing TCL with TCL is kinda the whole point of the language, almost. I must say I miss the $ for dereferencing variables, the way Scheme has it seems more confusing to me. (and the parentheses... take some getting used to, esp with strange indentation patterns) > We actually do some parsing (scripts/build/lilypond-words.py), > and that is what I hope to replace. > ... with? Guile directly? I like messing with language-y parse-y things a lot, if I can help with anything, happy to L
Re: Comments wanted on code highlighting in PDF output
On Mon, Feb 21, 2022 at 9:58 PM Jean Abou Samra wrote: > Not sure what confuses you? In TCL I got used to bare strings being values, not varnames, so I'm learning stuff again. It's just different, but in languages that in many other things are very similar. Of course I don't find it confusing in C or python... But I was just reading the page you suggested: as a beginner I read this example (define capital-cities '((sweden . stockholm) (usa . washington) (germany . berlin) )) As an associative array, where the values are dereferenced variables, but I don't know about the keys, whether they're strings, symbols or also variables. Say the first line, if it was python, would it be capitalcities["sweden"] = stockholm or capitalcities[sweden] = stockholm or capitalcities['sweden'] = stockholm # fake out a symbol with ticks, just for argument's sake The TCL form uses parens for array keys, the setter would be set capital-cities(sweden) $stockholm Note this is exactly equivalent to these two: set capital-cities("sweden") $stockholm set capital-cities({sweden}) $stockholm dblquotes interpolate (like in shell/perl) and curlies don't (like ticks in shell/perl). It's easier to keep one's head straight given there's so little extra typing to guide you while one is still learning. I'll learn, I'm just not there yet. It's the same as in most > other languages (such as C++): a bare name dereferences > a variable. The exception to this is within quotes, > which prevent evaluation of symbols, returning them naked. > That too, interpolating into strings, like you have in TCL/perl/shell ("Hello $username" kinda stuff) is handy :-) I also need to learn comma operator and backtick. > > (and the parentheses... take some getting used to, esp with strange > > indentation patterns) > > In case you want to understand them better: > http://community.schemewiki.org/?scheme-style I do, thanks much, I'll keep that around. Looks handy. > Yes, that is the approach taken in the script I linked. 
> The .ly file is made entirely of embedded Scheme code. > Neat, I'll read up on that > Before spending significant time on it, though, be sure > to do a dedicated request for comments on the mailing list, > giving concrete examples of how it looks like in the > documentation -- not all ideas meet consensus (syntax > highlighting is a good example of a largely subjective > matter). > Indeed. Thanks Jean L
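Since this exchange keeps mapping Scheme's quote semantics onto Python and TCL, a rough Python rendering of the capital-cities example discussed above may help readers following along. Under the quote, the keys and values are both symbols, not dereferenced variables; plain strings are the closest everyday Python analogue (an approximation, since Python has no native symbol type and a dict loses the alist's pair structure).

```python
# Rough Python analogue of the quoted Scheme alist:
#   (define capital-cities '((sweden . stockholm)
#                            (usa . washington)
#                            (germany . berlin)))
# Because of the quote, keys AND values stay symbols; strings stand in
# for symbols here.
capital_cities = {
    "sweden": "stockholm",
    "usa": "washington",
    "germany": "berlin",
}

# Of the three candidate translations in the message above, the closest
# is therefore: capitalcities['sweden'] = 'stockholm'
assert capital_cities["sweden"] == "stockholm"
```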
Re: Comments wanted on code highlighting in PDF output
Cool, as I was saying, once I'm out of the swamp I'm in with these two things I'm trying to get done, I'll see if I can help you L On Tue, Feb 22, 2022 at 9:07 PM Werner LEMBERG wrote: > > > I haven't worked wirh TexInfo markup before, however it occurs to me > > that lisp is regular enough that with some effort one could hope to > > scrape out a majority of the function definitions and then use such > > a database to touch up the help source? > > Having a script that converts `@code` to, say, `@lilycode` (which > would be defined as an alias to `@code` if highlighting is not used) > for both LilyPond and Scheme inline fragments in the documentation > would be very welcomed. > > > - scrape out what you can with a script (targeting to find 90% or so of > >what's there) > > Yes. > > > - add an exception list hand-curated (which mops up the rest) > > Maybe. > > > - use this stuff to find and 'parse' the contents of the help so > >that you can then transform it into something else this could > >give you some 90-95% of the source revised. > > - mop up again the result by hand > > I think these two steps are not necessary. > > > If this were a one-off affair, it could be a way to go, > > It certainly would be a once-only action. > > > I know the docs for lilypond are a huge set, and I'm not sure how > > translations are implemented. > > The translators could use this script, too. > > > I'm not suggesting now it's a good time to do this, however if one > > were to consider such a thing, this seems like it could be a way to > > do it, purely because Lisp-y things are easy to parse, which makes > > them relatively robust to detecting decorations such as @var{} > > Ah, there is probably a misunderstanding. We don't use `@var` within > `@code` to mark syntax but meta-ness, for example > > ``` > @code{foo-@var{XXX}} > ``` > > where @var{XXX} could be a three-digit number. Such situations can > only be handled manually. > > > Werner >
Re: Blockers for Guile 2.2
I expect this has been considered before, but what is it that makes it unpalatable to have a step like TeX's initex to build the .go files upon installation? Wouldn't it solve the issue at hand?

(Portability would be addressed by the fact that the build happens on the target platform itself, and with a process like that you'd get a certain tolerance for variability at the destination site that would be difficult to handle a priori, I guess, short of producing several built packages.)

L

On Tue, Feb 22, 2022 at 6:29 PM Karlin High wrote:
> On 2/22/2022 10:55 AM, Werner LEMBERG wrote:
> > In particular, we can't tell non-developers "Please use the current
> > development version, which works very reliably" and introduce a
> > severe slowness at the same time.
>
> Perhaps that advice could be suspended for one series of development
> versions? Doing one last Guile 1.8 stable release before the Guile 2.2
> transition, then advising to stick with that if interim slowness in
> development versions is unacceptable.
> --
> Karlin High
> Missouri, USA
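To make the initex-style suggestion concrete, here is a sketch (in Python, with hypothetical paths) of what such an install-time byte-compilation pass could look like. `guild compile` is Guile's command-line byte-compiler; whether a plain invocation like this is sufficient for LilyPond's module layout is an open question, so treat the whole thing as an assumption-laden illustration rather than a proposal-ready script.

```python
import pathlib
import subprocess

def build_go_files(src_root, out_root, run=False):
    """Return `guild compile` commands for every .scm under src_root,
    mirroring the tree into out_root as .go files; execute them only if
    run=True. Paths and tree layout are assumptions for this sketch."""
    src_root = pathlib.Path(src_root)
    out_root = pathlib.Path(out_root)
    commands = []
    for scm in sorted(src_root.rglob("*.scm")):
        go = out_root / scm.relative_to(src_root).with_suffix(".go")
        cmd = ["guild", "compile", "-o", str(go), str(scm)]
        commands.append(cmd)
        if run:
            go.parent.mkdir(parents=True, exist_ok=True)
            subprocess.run(cmd, check=True)
    return commands
```

The appeal of doing this at install time, as the message argues, is that the byte-code is built by the exact Guile present on the destination machine, sidestepping the cross-platform .go distribution problem.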
Re: "Structure and interpretation" of Scheme (was: Comments wanted on code highlighting in PDF output)
Thanks Jean, this is very useful and informative. I'll go read SICP (again; last time was several years ago) and meanwhile experiment with the code you sent me.

As to format, it's not that I don't get it, it's just that straight-up interpolation, being more compact, is better for simple things. In Scheme you do sprintf, that's ok. It's very cool for the bigger things and I certainly prefer it to C++ streams (in C++ I mean). But there too, for complex types being able to overload << is real handy, and if we only had printf() it'd be less fun. I'll go read, might come back with questions, very very grateful in the meantime.

One question back to lilypond: as I was saying, I theorize it ought to be the fingering that "pushes up" (in \stemsUp mode) the beaming; can I fiddle around and mess with the stem lengths in before-line-break (or after-line-break)?

Thanks again
Luca

On Mon, Feb 21, 2022 at 10:34 PM Jean Abou Samra wrote:
> Le 21/02/2022 à 22:19, Luca Fascione a écrit :
> > On Mon, Feb 21, 2022 at 9:58 PM Jean Abou Samra wrote:
> > > Not sure what confuses you?
> >
> > In TCL I got used to bare strings being values, not varnames, so I'm
> > learning stuff again. It's just different, but in languages that in
> > many other things are very similar. Of course I don't find it
> > confusing in C or python... But I was just reading the page you
> > suggested: as a beginner I read this example
> >
> > (define capital-cities '((sweden . stockholm)
> >                          (usa . washington)
> >                          (germany . berlin)))
> >
> > As an associative array, where the values are dereferenced variables,
> > but I don't know about the keys, whether they're strings, symbols or
> > also variables.
>
> No, the values are also symbols, not dereferenced variables.
> > $ guile > GNU Guile 3.0.5.130-5a1e7 > Copyright (C) 1995-2021 Free Software Foundation, Inc. > > Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'. > This program is free software, and you are welcome to redistribute it > under certain conditions; type `,show c' for details. > > Enter `,help' for help. > scheme@(guile-user)> (define capital-cities > '((sweden . stockholm) > (usa . washington) > (germany . berlin) > )) > scheme@(guile-user)> capital-cities > $1 = ((sweden . stockholm) (usa . washington) (germany . berlin)) > > > 'blabla is equivalent to (quote blabla) which prevents > evaluation inside blabla. When a symbol appears in code, > it dereferences a variable, unless it's inside a quote > in which case you get it as a symbol. > > > > Say the first line, if it was python, would it be > > capitalcities["sweden"] = stockholm > > or > > capitalcities[sweden] = stockholm > > or > > capitalcities['sweden'] = stockholm # fake out a symbol with ticks, > > just for argument's sake > > capitalcities['sweden'] = 'stockholm' > > > > The TCL form uses parens for array keys, the setter would be > > > > set capital-cities(sweden) $stockholm > > > > Note this is exactly equivalent to these two: > > > > set capital-cities("sweden") $stockholm > > set capital-cities({sweden}) $stockholm > > > > dblquotes interpolate (like in shell/perl) and curlies don't (like > > ticks in shell/perl). > > > > It's easier to keep one's head straight given there's so little extra > > typing to guide you while one is still learning. > > > I know nothing about Tcl, but in Scheme there is no > difference between dereferencing a variable and dereferencing > a "command". An unquoted symbol that happens to be evaluated > deferences a variable, that's all. In this sense, I guess > you could think of it as if your Tcl code read > > $set capital-cities("sweden") $stockholm > > with $ in front of set, because set! 
is really a variable > that has a value in the same namespace as any other variable > (OK, it won't work if you write it as-is in Guile >= 2 due > to the way that has evolved, but set! still has a value > you can retrieve with (module-ref (current-module) 'set!) .) > > Yes, I know, (set! x ...) does not
Re: Blockers for Guile 2.2
In case it's useful, I'll share my impressions as a recent addition to this group. I have some experience with rolling out software, gathered in a different field. Where I come from we release often (I think we've averaged 30+ cuts per year, roughly 2 every 3 weeks), our users have tolerance for the occasional rollback, and they roll forward relatively happily. One very important difference is that in that environment the infrastructure has a solid built-in mechanism for picking the software version per project, so that you could be working on several different versions of the software in the same day, each transparently tracked so you stay on its corresponding working files.

It seems to me the people that I hear speaking (maybe with the exception of HanWen) have a low level of confidence in the ability of the regression/test suite to provide adequate coverage of the scenarios needed by your current users. (Otherwise I don't even know why this is a discussion in the first place.) I would consider this your highest priority: trusting your tests is absolutely crucial. A release going out that passes tests and fails in the field should be driven as close as possible to a "0 probability" event. This won't ever be the case for real, hence the occasional rollback: when user X rolls back, a test is made and added to the suite so this won't happen again (preaching to the choir, I get it). I wouldn't give a second of thought to folks rolling back a binary as long as a) you're proactive in saying clearly that there's been a problem and here's how you fix it (roll back, and roll forward once you get the next binary, say); b) this happens rarely, obviously.

The other aspect that I observe reading this exchange is that nobody is talking about Scheme source in client code. I see you discussing how to pick up the differences between Guile 1.8 and 2.2 in the shipped source, but I don't recall seeing a discussion focusing on user-side changes.
And in fact, this is a similar point to the previous one: either these differences are there and are material, or they are not. In the second case, you should go ahead and move to 2.2. In fact, I'd think you ought to ship current-ish (read: non-2.2-specific) Scheme source (with bugfixes) with the new runtime, and let that soak in userland for a bit (say 2.24 is that, and then 2.26 contains 2.2-specific source). As I read some people saying certain distributions are already shipping binaries based on Guile 2.2, I suspect maybe 2.22 (on those distributions) is a passable proxy for this stage.

In the first case (Guile 1.8 vs 2.2 actually makes a real difference in source semantics and correctness(*), forget about the speed), your first concern must be what to do about users migrating their source and what that will cost, in terms of time investment, instabilities or defects introduced and all that; this will dominate your ability to reduce your maintenance exposure by making sure the majority of your user base will in fact pick up the new release line.

(*) not that this seems to me a very realistic thing for the Guile folks to do intentionally

Last thought: as I am currently learning Scheme and Guile, I noticed 3.0.x has been out for a couple of years now and seems to be benchmarking with speeds comparable to the 1.8.x line (according to their release notes). Given the switch to 2.2 hasn't happened yet, and, as I am reading through these emails, it has been a long process, wouldn't moving to 3.0 instead be a better way to capitalize on the effort and push out the next round of this level of pain to a later date?

(Again, I am expecting this has been discussed before, but still occasionally it's good to re-evaluate decisions: circumstances change, new evidence comes up, time goes by)

I do realize this will stir many folks, and that this subject is understandably very sensitive, but it's possible that a pause and refocusing could be of use.
I feel a lot of drive and energy and dedication to the project from all of you, and it seems very clear to me that the folks who are writing maintain very high ethical standards and truly want the best for the project. This is the reason why I joined lilypond-devel btw, the people are awesome. (and I'll admit I also think the program is pretty cool ;) )

However, I also think I observe you all having a difficult time trusting that things will go well; there seems to be a high level of fear that "something bad" will happen. If I may propose a perspective: don't be afraid of that, because something will break for sure. Nobody can avoid that, but what you can do, and what will make a positive difference to your users, is help them through the stumble, support them in their difficulties, and keep moving ahead together.

Thanks for your time, hopefully some of these thoughts can be of use

L
Re: Blockers for Guile 2.2
On Thu, Feb 24, 2022 at 5:44 PM Jonas Hahnfeld wrote: > I will not reply to most of your message; I suspect that your > experience comes from a corporate environment where people are paid > full time to work on software. It does, yes. > In my opinion, many of the points are > simply not relevant in a relatively small community of volunteers, for > example the release frequency (which is already quite high for an open > source project I would say). Same for testing, in particular all > possible combinations of dependencies. > Forgive me Jonas, I don't follow. It seems to me any means to improve the quality of the software development process would be fair game, no? A better process means all the folks involved maximize the ratio of effort into the project to results coming out into the hands of users, I don't understand why this would be undesirable to any community of engineers no matter whether they write this code because it's their job or their passion after hours. If anything, I'd think the folks that do it after hours would be the most interested in focusing on the fun side and not on the "fix the boring bugs and regressions" side, no? > The one thing I want to get straight are the comments about Guile 3.0 > because that claim keeps coming up: > > Am Donnerstag, dem 24.02.2022 um 09:13 +0100 schrieb Luca Fascione: > > [...] 3.0.x [...] seems to be benchmarking with speeds comparable to > the 1.8.x line > > It's important to differentiate between their benchmarks and the real- > world impact on a complex project like LilyPond. There have been > preliminary tests and they indicate that it's still a lot slower than > Guile 1.8 without bytecode. Whether it will be faster than Guile 2.2 > for our use-cases afterwards, I don't know. > I see. Cool, I wasn't aware. Great to hear it's already been tested, I just said because nobody was talking about it, and HanWen asked folks about the perspective of the Guile engineers for upcoming evolutions. 
Seems pretty clear from a casual browse of their page that they don't intend to work on 2.2 any longer, and that their efforts are on 3.x now. > > Given the switch to 2.2 hasn't happened yet, and as I am reading > > through these emails, it has been a long process, wouldn't moving to > > 3.0 instead be a better way to capitalize on the effort and push out > > the next round of this level of pain to a later date? > > The question is, would this make things better? Jumping across even > more versions certainly doesn't promise to be an easier transition. > I guess I was looking at it from the opposite end: imagine it makes _this_ transition a little harder, but then postpones the next one (which will inevitably come) by several years. Wouldn't this be an interesting deal for your users? (and engineers, of course) I guess I'm saying: say that in 2025 you'll have to go to Guile 3 anyways (maybe because 2.2.x goes unsupported or whatever reason). What tells us that change will be easier than this? On the other hand, if you were to adopt 3 today, you can gradually upgrade it as you go and try and stay on the 3.x train (maybe you do a tick/tock thing where you pull guile up every other version bump, as a random example). Or anyways be on a supported runtime for a few years longer. Now of course, if instead of this being a "little" harder, it is in fact a "lot" harder, none of what I'm saying makes any sense. I don't know the specifics, but I know the pattern can reduce effort and improve the "fun" to "drudge" ratios, when the conditions are right. Cheers, L >
Re: Blockers for Guile 2.2
Jean, how many times did you run these tests? Eyeballing your numbers it seems there's effectively no difference in execution time opt/no-opt and 2.2/3.0. Is the 5% a stable figure, or is it just a one-sample thing? Would it be a passable inference that the reason the optimizer has effectively no measurable impact in either runtime is that the scheme source code runs are comparatively short and contain large amounts of code the optimizer can't change? I'm imagining that if your source is largely an alternation of scheme language built-ins (or scheme library code) interspersed with fairly frequent calls into lilypond's brain, the optimizer can't do much about the latter. At the same time, you might be sitting in front of gains coming from making these API calls more efficient, which could be interesting (albeit largely orthogonal to the discussion at hand). I'm not sure how division of cost works out, if there is overhead in preparing for invoking the callbacks vs executing them, for example. I guess insight in that could help focus effort, if any was warranted. I thought the -O0 compilation time in Guile 3.0 was _really_ cool, I guess it indicates the front-end of the 3.x compiler is vastly more efficient? Seems like it could be an interesting way forward for the dev group to run 3.x with -O0 for iteration cycles, and then do what David is saying to ship the scm file with optimizations on and have the in-score Scheme be just built -O0. Reading briefly the message you posted, it seems -O1 might be a better way to go still, which might regain a teeny tiny bit of speed at a potentially very modest cost (say even if your 3.5s become 4 or 5, you still come out with a net win compared to 2.2's 20s). I agree these results are a very cool finding. L On Sat, Feb 26, 2022 at 2:47 PM Jean Abou Samra wrote: > Le 26/02/2022 à 13:51, Han-Wen Nienhuys a écrit : > > The Scheme compilation felt much slower, and for C++ ccache takes away > > a lot of the pain of recompiles. 
It also appears to be > > single-threaded? I admit not having timed it in detail. > > OK, I have very good news regarding compilation speed. > Tests are done with > > rm -rf out/share/lilypond/current/guile/ && time out/bin/lilypond > <...snip...> > For one thing, Guile's optimization make about zero difference for the > speed of the resulting LilyPond executable. For another, disabling > optimizations in Guile 2 already results in a good speedup (1min > to 20s), and while Guile 3 is even slower than Guile 2 at the default > optimization level (1m30), with optimizations disabled it becomes > near instant (3.5s). > > Guile 3 being far better at compilation speed with zero optimizations > apparently comes from what is described in > > http://wingolog.org/archives/2020/06/03/a-baseline-compiler-for-guile > > On the other hand, it does look like Guile 3 is a little slower > than Guile 2 on execution time (5%). > > What do you think? > > Jean >
Re: Blockers for Guile 2.2
On Sat, Feb 26, 2022 at 10:48 PM Jean Abou Samra wrote: > > [Jonas] > > He, I always thought auto-compilation didn't optimize! 😕 now don't > > tell me Guile also applies optimizations while just reading and > > supposedly interpreting code... > > I don't think it does. At least, you don't usually call eval or > primitive-eval on code to run it repeatedly, just for one-shot > code, so I don't see the sense it would make. Also, it seems > to me that the Guile developers are much more focused on compilation. > Randomly, it seems to me my own problem with fingering might possibly bring up a counter-example pattern (for eval vs. compile): when you attach a callback to something that is evaluated often (in the case of fingering there is an after-line-breaking callback, which I am guessing runs O(count(Fingering)) ≈ O(count(NoteHead)) times). Isn't this a case of Scheme code coming in from the .ly file (likely an included .ly file) and being run O(N) times? I can't say I understand lilypond super well, but it doesn't feel like it would often employ algorithms that are superlinear in the NoteHead count (it seems most activities would be organized as linear scans), so I'm guessing O(N) is as bad as it gets, for a cursory evaluation? In other words, is it approximately true that "for (almost) any real-life score the total compilation time is proportional to the number of NoteHeads, past a certain size"? I'm guessing you need a few pages' worth of material to kill off constant overheads, but beyond that, is it true that if you double the source size you double the compilation time? The background for this was what David was saying, about whether code in .ly files would be optimized or not. At a guess, I'd think stuff that is big and runs O(n) times might make some sense to look at for optimization. I am sensitive to your other point about the nightmare it will be to keep this stuff around and properly invalidate it upon changes.
> [ skipping over the part regarding Guile 3, since I don't think it's > relevant here ] > > Perhaps I should have changed the title, but I do think it > is relevant -- it gives hope for a state where development > cycles will be easier. When we want Guile 3 is another question. > I'm in favor of not making the move to Guile 2 wait more > than it already has, but I wonder if at some point in this release > cycle (by which I mean before the next stable release) we will want > to switch to Guile 3. > fwiw, this sounds super reasonable to me. I was just trying to get orders > of magnitude (for Guile 3 it's 1m30 vs. 4s, no need for > precise benchmarks to see which is faster :-). > Indeed. I was more thinking that not only are the opt/no-opt numbers close, but the 18-ish and 19-ish second numbers are close too; it is possible that difference is also spurious for some reason (I guess I'm saying: there's a possibility you have been a little lucky or a little unlucky, and your actual runtime difference is closer to 2% or maybe 15%, but you happened to take samples at "weird" times). I might have some time next week (after 7 March) to run these tests several times; it depends on some stuff I have going on for next weekend. I'll contact you oob if I do, for some quick guidance exchange. I am no performance expert, but LilyPond's performance-critical parts > are written in C++, so in general I would think the cost of Scheme code > is spread a bit all over the code base (unlike C++ where you can find > some rather critical parts taking lots of time, like page breaking, > or skyline algorithms). Well, but this would be a single call from Scheme into C++ that takes "a long time", do I read you right? Instead I was thinking more of the "death by a thousand cuts" kind of scenario. [continue below] > I am not sure what you mean by effort spent > in "preparing" callbacks, could you elaborate? > Imagine the grob is a C++ object that remains opaque to Scheme.
So it's a thing for which in Scheme you move around a blind pointer, but to read property Y-offset you'd call (grob-get 'Y-offset) and grob-get is C++. And then in C++ you'd just have a one-liner accessor that is conceptually

    float scheme_grob_get (Grob *grob, sch_value *propname)
    {
      // typecheck propname to be a symbol
      // find out which accessor you want to delegate to
      return grob->get_y_offset (); // actual delegation
    }

and in turn the getter is something like

    inline float get_y_offset () const
    {
      return y_offset; /* this is a float member, say */
    }

Code following a simple pattern like this, once compiled, will largely be dominated by the scripting-language runtime overhead: traversing the callback table, finding the function pointer for the dispatch, marshalling the values from the Scheme representation to something you can work with in C++, and the way back upon return. I've read enough of your C++ to see that a lot of this happens in client code, but either way it's all code whose cost dominates the execution of the accessor itself
Re: Blockers for Guile 2.2
On Sun, Feb 27, 2022 at 12:13 PM Han-Wen Nienhuys wrote: > On Sun, Feb 27, 2022 at 10:39 AM Luca Fascione > wrote: > > is it true that if you double the source size you double the compilation > time? > > it should be, but we have rather complicated page breaking code that > is so hairy that nobody understands it fully. I'm not sure there is > NP-complete snake in the proverbial grass there. > Understood. As a user of both lilypond and LaTeX, I have been idly wondering for years whether modern computers can afford to use global line/page breaking algorithms that would avoid some of the shortcomings of TeX's approach. A discussion for a different thread, of course. accessing data (eg. offsets): > * on first access, the callback gets executed. This is just evaluating > ( ). > * on second access, the value is cached in an alist, and looking it up > is extremely cheap. > This is cool. :-) I don't know enough about this program to even begin to have a gut feeling, however I guess it seems there would be tons of these reads, and I'm hearing you say that in an eventual sense, all data access is an alist access. I don't know how alists are actually implemented under the hood, but on the surface they feel like they would be a linear scan with symbol-typed keys. So to pull out one float you're doing what, 5-10 64-bit compares (checking the keys) and just as many pointer jumps, right? (I'm thinking the alist is a list of symbol/value pairs in the implementation, too.) This cost strongly dominates the float dereference itself, and there is the question of how much extra stuff happens around it (in my mind I'm comparing it to member access in C++, which is one pointer (this), one offset for the member (compiled into an immediate), and one load of this+offset (which the hardware helps you with)). I feel for the moment I can't provide any concrete insight into any of this, because I don't know the specifics enough.
> Code following a simple pattern like this, once compiled, will largely be > dominated by the > > scripting language runtime overhead > > From the outside this may seem plausible, but I doubt your intuition here: > > * the callback tables are also alists. They are cheap (they aren't > sped up if you swap them for hash tables) > Not presuming to know your program better than you, but I'd just bring up that this is saying that your lists are short (likely length 20-ish on average): the observation you report is that hashing so you can do a direct access into an array is not faster than several pointer-pointer comparisons and pointer chases. The hash you'd use here would be something like FNV or so; I'd expect it to break even somewhere in the 10-20 comparisons. > * Scheme has no marshaling: objects are either direct (scheme -> C++ > is bit twiddling), or they are indirect (a tagged pair with a C++ > pointer in the cdr) > Half of that I expected (more specifically, for various reasons, a number of which were not accurate, I expected the Scheme APIs to be similar to the TCL APIs, and there as well you just get handed straight what TCL has in hand, no marshalling involved). One thing I didn't know is that the client calls to extract the machine representation of the value would be super cheap. But still, if the Guile compiler translates Scheme values into native ones and is able to leave them there for "long" stretches of code in some cases, and our use case instead prevents that, it seems it could eventually add up. Again, I do need to learn the source better before you give these thoughts any real weight. > IMO The real problem is that we don't have good tooling to see what is > going on in the Scheme side of things: C++ has decent perf analysis, > but the Guile side of things just looks like a lot of time spent in > scm_eval(). Some of it is overhead, some of it might be a poorly > implemented Scheme function (which is often easier to fix than > reducing overhead.)
> Very agreed that poorly conceived code is the first thing to address, no doubt. I'd think that the way to gain insight as to what's going on is to inspect the bytecode actually, and gain familiarity with the APIs that execute it. Is it that the bytecode is then translated to executable, or is it running on a VM? I would assume they don't provide a decompiler of any sort, do they? Thanks for a most interesting discussion L
Re: Setting up classical guitar fingerings
I took a "brief" detour where I went and learned a bit about Scheme. Interlude: FWIW, I don't recall seeing this reference in your resources about learning Scheme, so I'll leave a comment here: Paul Wilson (professor at UTexas, Austin) wrote some notes on Scheme approx. in 1996, which I thought were extremely well conceived and clear. Certainly more to the point than SICP (which is a very interesting approach if you're learning Lisp and sw engineering at the same time, but I found it did not make good use of my pre-acquired understanding of the field, and just slowed me down). I found the book very easy to follow, and I loved that he teaches Scheme by showing a series of gradually more accurate implementations of Scheme, partially in Scheme. Also, it teaches Scheme in a way that is very compatible with other programming languages, which makes it really easy to keep straight in one's head what is new or different information vs. what is just the same. Unfortunately, the several available copies online were never completed, so there are several little bugs, missing cross-references, and other polish-grade gaps. But I thought the juice of the book is certainly excellent. It seems Prof. Wilson has since retired, and various threads on the internet indicate folks are unable to reach him. He seems to have done work in the '90s on garbage collection in languages. Now that I can follow Scheme source with a certain level of confidence, I went back to work on the fingering task, and re-read Valentin's code. And thanks again, kind sir. I was thinking again about the idea that instead of pushing the fingering "down" (say we're doing \stemsUp) because the beams are too low, we push the beams "up" to leave space for the fingering. And I played for a second with Stem.details.beamed-lengths (using an override in the .ly file, in between the notes), which seems to achieve what we need.
So in order to automate that, I'd like to understand better the execution of the before-line-breaking and after-line-breaking callbacks. I'm thinking that I could set up the fingering position in Fingering.before-line-breaking and alter the stem length (details.beamed-lengths being my current thinking) in Stem.before-line-breaking. Really the question is a meta-question: I can certainly hack at this and see what happens, but also, how do I help myself learn this better: where is the code that handles this stuff, and how do I trace the sequence of events around it? I'm looking for a couple of one-liner breadcrumbs such as "this section of the docs" (which I tried to read without much success), "grep and in the source", or "it's all in scm/". Vague pointers like that are hopefully all I'll need. Many thanks, Luca On Mon, Feb 21, 2022 at 6:49 PM Luca Fascione wrote: > Yes exactly, because of how our finger to note relation works, the > enhancement in readability with the indication right at the head is > enormous. > > L > > On Mon, 21 Feb 2022, 18:16 Valentin Petzel, wrote: >> Sure. I suppose for a guitar person having stacked fingerings on top >> would be >> rather confusing, as there is no monotonic relating between finger and >> pitch. >> As such I suppose guitar people would want to use fingerings with left or >> right >> orientations in chords anyway. >> >> Cheers, >> Valentin >> >> Am Montag, 21. Februar 2022, 17:47:58 CET schrieb Luca Fascione: >> > I suspect we might be saying the same thing, Valentin? >> > >> > I was saying infix can be a bit awkward if you want 'pianist' chord >> > fingering (just a stack of numbers above or below), and that your >> original >> > -1-2-3 reads quite nicely (as in: it's easy to see in your head >> what >> > you will get in the engraving just by looking at the source).
So a >> keyboard >> > person wouldn't want to use infix, I don't think >> > >> > Whereas a guitar person might find it more attractive to use > g-3> >> > because it's easier to keep it straight in your head what fingers you >> use >> > on what note that way >> > >> > L >> > >> > On Mon, Feb 21, 2022 at 5:42 PM Valentin Petzel >> wrote: >> > > No, not nescessarily. If we want all Fingerings on top or below there >> is >> > > no real benefit of doing the chord thing. In fact doing that leads to >> the >> > > exact same issue of the fingering for d being next to the other ones. >> > > >> > > Cheers, >> > > Valentin >> > > >> > > 21.02.2022 12:38:40 Luca Fascione : >> > > >> > > But wouldn
Re: Setting up classical guitar fingerings
Thanks Valentin, this is useful. Sounds like I'll be back with questions :-) L On Sat, Mar 5, 2022 at 5:46 PM Valentin Petzel wrote: > Hello Luca, > > the design of Lilypond inherently implies that there is no clear border > between users and developers. Lilypond has an user interface, which is > covered > more or less in the docs, an extended interface in scheme, which is not > documented that extensively, and the C++ code that works behind this, > which > barely documented. This means that in the Lilypond-verse it is useful to > speak > Lilypond, Scheme, C++ (and maybe Python), with basic users sitting at the > Lilypond end and developers sitting at the C++ side, with the scheme stuff > hanging in between. > > This means that you as a user can turn into a developer of some sort and > write > code into you Lilypond score that drastically changes how Lilypond does > things > (which is kind of cool). But if you are trying to do this you will often > find > yourself in the situation that you do not know how certain things are > handled > and the docs offer very little support. > > Thus I find it easier to look directly into the code. Lilypond has three > directories of relevant code in it’s source directory: > > → Ly: Which contains code in Lilypond-Language, mostly setting up the > defaults > and music functions and such > → scm: Which contains code in Scheme and contains a decent bit of > Lilypond’s > functionality, at least that part which does not matter that much > performance- > wise > → lily: Which contains the c++-Code, which is the major part of the core > functionality. > > If I’m looking for things I usually grep for these things in these > directories. Eventually you’ll have a good idea where what is sitting. For > example a scheme function like ly:somethingA::somethingB is usually a > callback to the c++ class somethingA, method somethingB. 
> > In scm you also have the definitions of the default grobs, so if you want > to > know what exact properties a grob has, you can look in define-grobs.scm. > And > similar stuff. > > And if you encounter something you really do not understand, ask the list. > We’ve got some really marvellous people here who appear to know about > everything you might want to know about Lilypond. > > Cheers, > Valentin > > Am Samstag, 5. März 2022, 17:05:22 CET schrieb Luca Fascione: > > I took a "brief" detour where I went and learned a bit about scheme > > Interlude > > FWIW, I don't recall seeing this reference in your resources about > learning > > Scheme, so I'll leave a comment here: > > > > Paul Wilson (professor at UTexas, Austin) wrote some notes on Scheme > approx > > in 1996, > > which I thought were extremely well conceived and clear. Certainly more > to > > the point that sicp > > (which is a very interesting approach if you're learning Lisp and sw > > engineering at the same time, > > but I found did not make good use of my pre-acquired understanding of the > > field, and just slowed me down). > > I found the book very easy to follow and I loved that he teaches Scheme > by > > showing a series of gradually > > more accurate implementations of scheme, partially in scheme. Also it > > teaches Scheme in a way that is > > very compatible with other programming languages, that makes it really > easy > > to keep it straight in one's head > > what is new information or different vs what is just the same. > > Unfortunately the several available copies online were never completed, > so > > there are several little bugs, > > missing cross-references and other polish-grade features missing. But I > > thought the juice of the book is certainly excellent. > > > > It seems Prof Wilson has since retired, and various threads on the > internet > > indicate folks are unable to reach him. > > He seems to have done work in the 90's about garbage collection in > > languages. 
> > > > > > Now that I can follow Scheme source with a certain level of confidence, I > > went back to work on the fingering task, > > and re-read Valentin's code. And thanks again kind sir. > > > > I was thinking again about the idea of instead of pushing the fingering > > "down" (say we're doing \stemsUp) because the beams are too low, > > to push the beams "up" to leave space for the fingering. And I played > for a > > second with Stem.details.beamed-lengths > > (using an override in the .ly file, inbetween the notes) which seems to > > achieve what we need. > > So in order to auto
Re: "Structure and interpretation" of Scheme (was: Comments wanted on code highlighting in PDF output)
Doubling up part of a different reply, in case somebody might find this useful at some point in the future: I went and learned about Scheme. Obv the classic reference would be SICP (Structure and Interpretation of Computer Programs). But as I was reading stuff I stumbled into this pdf by Prof. Wilson of UTexas/Austin, which was a resource far more suitable for my needs (knows how to program in several languages, just needs to learn this language, and be precise about the semantics). So I wrote a few lines of comment on the book; here they are. If they should go somewhere specific, let me know if I can help with that. FWIW, I don't recall seeing this reference in your resources about learning Scheme, so I'll leave a comment here: Paul Wilson (professor at UTexas, Austin) wrote some notes on Scheme approx. in 1996, which I thought were extremely well conceived and clear. Certainly more to the point than SICP (which is a very interesting approach if you're learning Lisp and sw engineering at the same time, but I found it did not make good use of my pre-acquired understanding of the field, and just slowed me down). I found the book very easy to follow, and I loved that he teaches Scheme by showing a series of gradually more accurate implementations of Scheme, partially in Scheme. Also, it teaches Scheme in a way that is very compatible with other programming languages, which makes it really easy to keep straight in one's head what is new or different information vs. what is just the same. Unfortunately, the several available copies online were never completed, so there are several little bugs, missing cross-references, and other polish-grade gaps. But I thought the juice of the book is certainly excellent. It seems Prof. Wilson has since retired, and various threads on the internet indicate folks are unable to reach him. He seems to have done work in the '90s on garbage collection in languages.
HTH Luca On Tue, Feb 22, 2022 at 10:08 PM Jean Abou Samra wrote: > Le 22/02/2022 à 21:46, Luca Fascione a écrit : > > Thanks Jean, this is very useful and informative. > > I'll go read sicp (again, last time was several years ago) and > > meanwhile experiment with the code you sent me. > > > > As to format, it's not that I don't get it, it's just that straight up > > interpolation, being more compact, is better for simple things. > > In Scheme you do sprintf, that's ok. It's very cool for the bigger > > things and I certainly prefer it to C++ streams (in C++ I mean). > > But there too, for complex types being able to overload << is real > > handy, and if we only had printf() it'd be less fun. > > > > The disadvantage of Scheme is that this is not predefined. The advantage > is that leveraging the power of hygienic macros you can write it yourself. >
> \version "2.22.1"
>
> #(use-modules (ice-9 match)
>               (ice-9 receive))
>
> #(define-macro (interpolate str)
>    (let loop ((parts (string->list str))
>               (acc '())
>               (vals '()))
>      (match parts
>        (()
>         `(format #f
>                  ,(apply string (reverse! acc))
>                  . ,(reverse! vals)))
>        ((#\{ . rest)
>         (receive (part remaining)
>             (break! (lambda (c) (eqv? c #\})) rest)
>           (let ((sexpr (call-with-input-string
>                          (apply string part)
>                          read)))
>             (loop (cdr remaining)
>                   (cons #\a (cons #\~ acc))
>                   (cons sexpr vals)))))
>        ((char . rest)
>         (loop rest
>               (cons char acc)
>               vals)))))
>
> #(define person "Luca Fascione")
> #(define email "l.fasci...@gmail.com")
>
> #(display (interpolate "{person} <{email}>"))
>
> > I'll go read, might come back with questions, > > very very grateful in the meantime. > > > > One question back to lilypond: > > as I was saying I theorize it ought to be the fingering to "push up" > > (in \stemsUp mode) the beaming, > > can I fiddle around and mess with the stem lengths in > > before-line-break (or after-line-break)?
> > > > The length of a beamed stem is determined by the beam (see > the comment above quantized-positions in define-grobs.scm). > In before-line-breaking, there isn't much you can usefully > do since horizontal spacing is not yet known, so the beam > doesn't have a lot information to work with (it does try to > make so-called "pure" estimates but trust me, you don't want > to worry about that). In after-line
Re: Setting up classical guitar fingerings
So, I feel like I'm making progress here. However I am now at a different stumble: I feel I'm misunderstanding the referencing patterns of the grob properties somewhere. You'll see in the attached pdf that all the stems are very long. Here's what's going on: I'm experimenting with pushing the beaming out a little when there is fingering on the note head. However, my tweak to the stem lengths is applied to all stems in the piece, _each time_, instead of once per note. In more detail: riffing on Valentin's idea, I reversed the behaviour of the guitar fingering engraver, and now when there is a fingering, I attach a reference to it to the stem's 'details alist. This is then picked up in a new Stem.before-line-breaking callback. My idea was: if there is a fingering instruction attached to the grob (being a stem), simply do a += 1 on the values of the beamed-lengths list. (Obviously this "+1" will need to take into account the positioning and size of the fingering notation, in real life, but for a first test I thought it'd be a start.) However, I'm running into a problem where I seem to be modifying the setup for the 'details of _all_ the Stem grobs in the sheet, which means that my intention of adding 0.25 staff spaces (for the sake of the example) to the one stem at hand in the callback is turning into this monster where _all_ stems get 0.25 × (fingering count in the whole piece) added. And that's a bit much :-) I've tried various approaches to copying the dtls variable with (list-copy dtls) and a couple things like that, but I wasn't able to affect the final result at all. Could anybody help me understand what's going on please? Many thanks Luca On Sat, Mar 5, 2022 at 6:27 PM Luca Fascione wrote: > Thanks Valentin, this is useful. > > Sounds like I'll be back with questions :-) > > L > > On Sat, Mar 5, 2022 at 5:46 PM Valentin Petzel wrote: >> Hello Luca, >> >> the design of Lilypond inherently implies that there is no clear border >> between users and developers.
Lilypond has an user interface, which is >> covered >> more or less in the docs, an extended interface in scheme, which is not >> documented that extensively, and the C++ code that works behind this, >> which >> barely documented. This means that in the Lilypond-verse it is useful to >> speak >> Lilypond, Scheme, C++ (and maybe Python), with basic users sitting at the >> Lilypond end and developers sitting at the C++ side, with the scheme >> stuff >> hanging in between. >> >> This means that you as a user can turn into a developer of some sort and >> write >> code into you Lilypond score that drastically changes how Lilypond does >> things >> (which is kind of cool). But if you are trying to do this you will often >> find >> yourself in the situation that you do not know how certain things are >> handled >> and the docs offer very little support. >> >> Thus I find it easier to look directly into the code. Lilypond has three >> directories of relevant code in it’s source directory: >> >> → Ly: Which contains code in Lilypond-Language, mostly setting up the >> defaults >> and music functions and such >> → scm: Which contains code in Scheme and contains a decent bit of >> Lilypond’s >> functionality, at least that part which does not matter that much >> performance- >> wise >> → lily: Which contains the c++-Code, which is the major part of the core >> functionality. >> >> If I’m looking for things I usually grep for these things in these >> directories. Eventually you’ll have a good idea where what is sitting. >> For >> example a scheme function like ly:somethingA::somethingB is usually a >> callback to the c++ class somethingA, method somethingB. >> >> In scm you also have the definitions of the default grobs, so if you want >> to >> know what exact properties a grob has, you can look in define-grobs.scm. >> And >> similar stuff. >> >> And if you encounter something you really do not understand, ask the >> list. 
>> We’ve got some really marvellous people here who appear to know about >> everything you might want to know about Lilypond. >> >> Cheers, >> Valentin >> >> Am Samstag, 5. März 2022, 17:05:22 CET schrieb Luca Fascione: >> > I took a "brief" detour where I went and learned a bit about scheme >> > Interlude >> > FWIW, I don't recall seeing this reference in your resources about >> learning >> > Scheme, so I'll leave a comment here: >> > >> > Paul Wilson (professor at UTexas,
Re: Setting up classical guitar fingerings
Hi Valentin, thanks for the super prompt reply! On Sun, Mar 6, 2022 at 5:34 PM Valentin Petzel wrote: > So instead of doing the assoc-set! you might want to do something like > > (ly:grob-set-property! grob 'details `((beamed-lengths . ,stem-bmlgths) > . ,detls)) > For my edification, I'll talk for a moment about the differences between your code and this other attempt (which also seems to work; I had gotten myself mixed up with what list-copy does, and I had list-copy here, which isn't deep enough): (ly:grob-set-property! grob 'details (assoc-set! (copy-tree detls) 'beamed-lengths stem-bmlgths)) Could you set me straight, if needed?
- Your code "just prepends" a new cell with the key we're interested in right in front of the old content (with which it shares structure)
- It relies on lists being ordered, and on all access to alists being a straight linear scan front-to-back (point being: 'beamed-lengths appears twice and we use "first one wins" to deal with the repeated key)
- Your way is probably more economical than mine, in that mine copies all entries in the list, while yours just prepends one onto a list that is otherwise shared
- Your quasiquote segment produces exactly the same output as (acons 'beamed-lengths stem-bmlgths detls)
This solution of yours does not do any collision checking, which will make > all > Stems longer as long as there is a Fingering on the Stem. You might want > to > try something similar to what I did before to get some sort of collision > checking in. > Yes definitely, I was just illustrating the problem. I'll definitely need much more careful code than this in the final solution. However, just so I don't overlook some aspect, could you give me a sense of what collisions worry you? One of the things I had found attractive in messing with beamed-lengths is that the rest of the layout engine stays in place and operative, so that line-to-line and beams-to-spanner things are handled by the rest of lilypond as it stands today.
(This was my reasoning also in trying to avoid the xxx-offset stuff, because if I understand right it happens post conflict-resolution, which seems not what I need.) Am I not thinking this right? Many thanks Luca
Re: "Structure and interpretation" of Scheme
Yip! https://www.cs.utexas.edu/ftp/garbage/submit/notready/schintro.ps and ftp://ftp.cs.utexas.edu/pub/garbage/cs345/schintro-v14/schintro_toc.html But without FTP support in the browser the latter is annoying to read. Neither link feels like it would be around long term, and the rendering is not great. But as I said, I feel it's great quality content, if you look past the surface. L On Sun, Mar 6, 2022 at 8:28 PM Werner LEMBERG wrote: > > > I went and learned about Scheme. Obv the classic reference would be > > SICP (Structure and Intepretation of Computer programs) But as I was > > reading stuff I stumbled into this pdf by Prof Wilson of > > UTexas/Austin [...] > > Link, please. > > > Werner >
Re: "Structure and interpretation" of Scheme
I was wondering how to do exactly this actually :-) Thanks Werner! L On Mon, 7 Mar 2022, 08:06 Werner LEMBERG, wrote: > > > https://www.cs.utexas.edu/ftp/garbage/submit/notready/schintro.ps > > > > [...] and the rendering is not great. > > Attached you can find a PDF version of `schintro.ps` that replaces all > bitmap fonts in the above documents with real outline fonts. The > corresponding script to do the conversion was > > ``` > pkfix-helper \ > --verbose \ > --verbose \ > --ps=schintro-bitmaps-before.ps \ > --tex=schintro-bitmaps-after.tex \ > --force Fh="cmbx12 @ 1.095X" \ > --force Fj="cmsl10 @ 1.095X" \ > --force Ff="cmtt12 @ 1.095X" \ > --force Fg="cmr7" \ > --force Fa="cmmi12 @ 1.2X" \ > --force Fb="cmmi9" \ > --force Fi="cmmi10 @ 1.095X" \ > schintro.ps \ > fix1.ps \ > &> schintro-bitmaps.pkfix-helper.log \ > && pkfix fix1.ps fix2.ps \ > && ps2pdf fix2.ps schintro.pdf > > rm -f fix1.ps fix2.ps > ``` > > (The used programs are part of TeXLive.) > > > Werner >
Re: "Structure and interpretation" of Scheme
(and it looks _a lot_ better now) L On Mon, Mar 7, 2022 at 8:08 AM Luca Fascione wrote: > I was wondering how to do exactly this actually :-) > Thanks Werner! > > L > > On Mon, 7 Mar 2022, 08:06 Werner LEMBERG, wrote: > >> >> > https://www.cs.utexas.edu/ftp/garbage/submit/notready/schintro.ps >> > >> > [...] and the rendering is not great. >> >> Attached you can find a PDF version of `schintro.ps` that replaces all >> bitmap fonts in the above documents with real outline fonts. The >> corresponding script to do the conversion was >> >> ``` >> pkfix-helper \ >> --verbose \ >> --verbose \ >> --ps=schintro-bitmaps-before.ps \ >> --tex=schintro-bitmaps-after.tex \ >> --force Fh="cmbx12 @ 1.095X" \ >> --force Fj="cmsl10 @ 1.095X" \ >> --force Ff="cmtt12 @ 1.095X" \ >> --force Fg="cmr7" \ >> --force Fa="cmmi12 @ 1.2X" \ >> --force Fb="cmmi9" \ >> --force Fi="cmmi10 @ 1.095X" \ >> schintro.ps \ >> fix1.ps \ >> &> schintro-bitmaps.pkfix-helper.log \ >> && pkfix fix1.ps fix2.ps \ >> && ps2pdf fix2.ps schintro.pdf >> >> rm -f fix1.ps fix2.ps >> ``` >> >> (The used programs are part of TeXLive.) >> >> >> Werner >> >
Re: How to use LaTeX code from manual to include LilyPond-generated TOC?
I've been asking myself questions about how to do this for a while... It seems
to me most natural that TeX would have the last word (if you'll pardon the
lame pun there), and that lilypond should therefore indirect its internal
sense of page numbering somewhat, so that some negotiation can happen wrt
where things end up...

It seems to me this discussion is relevant to what I'm trying to say:
https://tex.stackexchange.com/questions/15989/toc-entries-and-labels-for-included-pdf-pages

I can see the posted code tries to do something like this, but it seems
there's more to it than that, per Werner's point, with which I agree. I do
wonder, though, why lilypond doesn't emit a more direct piece of LaTeX itself:
I'm imagining a file usable with \include (or so) that would contain the
appropriate \includepdf command lines, generated by lilypond. I don't
understand why we're generating them in TeX; it seems like an unhelpful place
to do this...

Happy to help if you think there would be use.

Cheers,
Luca

On Fri, Mar 11, 2022 at 7:21 AM Werner LEMBERG wrote:
> > > %%
> > % \includescore{PossibleExtension}
> > > %%
> >
> > [...]
>
> Ouch, this is ugly LaTeX code. Besides the formatting, it will
> exhaust TeX's macro stack, AFAICS, since macro `\readfile@line` is
> calling itself recursively, not using the TeX's `\loop` macro (or an
> equivalent to it) to ensure proper tail recursion. In other words,
> very large TOCs would fail.
>
> I'll try to cook something up that works decently.
>
>
> Werner
Re: Blockers for Guile 2.2
Just wanted to say this is great L
Re: Should \partial accept music instead of duration?
What if you rotate them instead? Rename the current \partial to
\partialDuration, so that convert-ly just needs s/\partial/\partialDuration/,
and \partial always takes music from now on.

It's the same as Werner said, but keeps the good name.

L

On Sun, 20 Mar 2022, 08:24 Werner LEMBERG, wrote:
> > A convert-ly rule would probably not be possible given the
> > limited power of regular expressions. As such, \partial might
> > need to support both duration and music arguments. Initially I
> > thought this might not be possible, given that a naked duration
> > can be treated as music; but the following does seem to work:
> >>
> >> ...
> >> I wouldn't want to have to explain to users why these turn out
> >> different.
> >> \score {
> >>   \fixed c' {
> >>     \partial 4. 4.
> >>   }
> >> }
> >> \score {
> >>   \fixed c' {
> >>     \partial c4. c4.
> >>   }
> >> }
> >>
> >
> > Fair point, though the intention here would be that backwards
> > compatibility would only need to exist for a time. A warning could
> > be issued whenever a user applies the older syntax; this would
> > inform the user of the impending breaking change while still
> > allowing existing code to compile. When it is convenient, a future
> > release would only support music as the argument.
>
> What about providing a new command `\upbeat` and moving `\partial`
> into oblivion? Compare this to `\tuplet` vs. `\times`.
>
>
> Werner
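For what it's worth, the textual rename itself really is a one-line substitution. Here is a hypothetical sketch in Python of what such a rule could look like (the function name and the rule's scope are my assumption; this is not actual convert-ly code):

```python
import re

# Hypothetical sketch of the proposed convert-ly rule: rename every
# \partial to \partialDuration.  The \b word boundary keeps commands
# like \partialDuration (already renamed) untouched.
def rename_partial(src):
    return re.sub(r"\\partial\b", r"\\partialDuration", src)

assert rename_partial(r"\partial 4. c4.") == r"\partialDuration 4. c4."
# An already-renamed command is left alone:
assert rename_partial(r"\partialDuration 4.") == r"\partialDuration 4."
```

The hard part Werner alludes to is the other direction (deciding whether an existing `\partial` argument is a duration or music), which a regex cannot do; the rename sidesteps that entirely.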
Re: Should \partial accept music instead of duration?
What if instead of `\upbeat` (which is weirdly named when used in the
end-of-music/phrase/hymn/passage scenario) this new thing is just called
`\partialMusic`? It's backward compatible, does something easy to use in the
simple scenarios, leaves everything else in place for more refined use cases,
and it's not weird at either end of the music.

L
Re: Slanted Beams thickness
This video shows Hans Kuehner at work

https://www.youtube.com/watch?v=BvyoKdW-Big

at 4m36 it shows beams being engraved; he appears to keep the instrument
orthogonal to the line direction, which makes Valentin's formula appropriate
to capture this process.

(I love it when it goes "What happens when you make mistakes?" -> "I _don't_
make mistakes!" (7m59 or so))

As Werner said, I'd have expected something more of a halfway house, because
in my mind I was expecting more of a nib pen feel to this, but even looking
at photography-based processes there seems to be no evidence that any of that
technique would influence this.

I feel that for more organic looking fonts (such as lilyjazz) this might want
to change, but I guess that's a somewhat different topic.

L

On Fri, Mar 25, 2022 at 8:10 AM Jean Abou Samra wrote:
> Le 25/03/2022 à 01:44, Valentin Petzel a écrit :
> > Hello,
> >
> > Lilypond handles slanted Beams in a geometrically weird way, that is, the
> > thickness is not measured as the shortest distance between the opposing
> > sides of the boundary, but as vertical distance. This results in Beams
> > getting optically thinner and closer the higher the slope is. But we can
> > very easily factor this out by adjusting the thickness to the slope. In
> > fact if we want to achieve a real thickness theta the adjusted thickness
> > would need to be theta·sqrt(1 + slope²). See attached an experimental
> > example.
>
> Did you look into engraving literature to back this up?
> Given the amount of effort put by Han-Wen & Jan in beam
> formatting, I have trouble imagining this being just
> an oversight.
>
> Jean
Re: Slanted Beams thickness
Sorry, forgot to say: instead of correcting with 1/cos(\theta) I wonder if correcting with 1/cos(\theta/2) would be an idea? sl2 = sl / (1+sqrt(1+sl*sl)) // tan(\theta/2) th *= sqrt(1+sl2*sl2) HTH L On Fri, Mar 25, 2022 at 9:35 AM Luca Fascione wrote: > This video shows Hans Kuehner at work > > https://www.youtube.com/watch?v=BvyoKdW-Big > > at 4m36 shows beams being engraved, he appears to keep the instrument > orthogonal to the line direction, > which makes Valentin's formula appropriate to capture this process. > > (I love it when it goes "What happens when you make mistakes?" -> "I > _don't_ make mistakes!" (7m59 or so) ) > > As Werner said, I'd have expected something more of a halfwayhouse, > because in my mind I was expecting more or a nib > pen feel to this, but even looking at photography based processes > there seems to be no evidence that any of that technique > would influence this. > > I feel that for more organic looking fonts (such as lilyjazz) this might > want to change, but I guess that's a somewhat different > topic. > > L > > On Fri, Mar 25, 2022 at 8:10 AM Jean Abou Samra > wrote: > >> Le 25/03/2022 à 01:44, Valentin Petzel a écrit : >> > Hello, >> > >> > Lilypond handles slanted Beams in a geometrically weird way, that is, >> the >> > thickness is not measured as the shortest distance between the opposing >> sides >> > of the boundary, but as vertical distance. This results in Beams getting >> > optically thinner and closer the higher the slope is. But we can very >> easily >> > factor this out by adjusting the thickness to the slope. In fact if we >> want to >> > achieve a real thickness theta the adjusted thickness would need to be >> > theta·sqrt(1 + slope²). See attached an experimental example. >> >> >> >> Did you look into engraving literature to back this up? >> Given the amount of effort put by Han-Wen & Jan in beam >> formatting, I have trouble imagining this being just >> an oversight. >> >> Jean >> >> >>
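A quick numeric check (Python, just to sanity-test the algebra above): with sl = tan(theta), the proposed sl2 = sl / (1 + sqrt(1 + sl*sl)) is indeed tan(theta/2), so the milder correction factor sqrt(1 + sl2*sl2) equals 1/cos(theta/2):

```python
import math

# Verify the half-angle identity behind the suggested milder correction:
#   sl2 = sl / (1 + sqrt(1 + sl^2))  ==  tan(theta/2)   where sl = tan(theta)
#   sqrt(1 + sl2^2)                  ==  1 / cos(theta/2)
for slope in [0.1, 0.25, 0.5, 1.0, 2.0]:
    theta = math.atan(slope)
    sl2 = slope / (1.0 + math.sqrt(1.0 + slope * slope))
    assert abs(sl2 - math.tan(theta / 2.0)) < 1e-12
    assert abs(math.sqrt(1.0 + sl2 * sl2) - 1.0 / math.cos(theta / 2.0)) < 1e-12
```

So the proposal amounts to thickening by the secant of half the beam angle instead of the full angle, i.e. splitting the difference between "no correction" and "full optical correction".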
Re: Slanted Beams thickness
Yes, but look at the tool and how it's held in the hand: you won't ever get a
clean line from it holding it slanted to the direction of motion; that thing
is meant to be pushed straight ahead...

On Fri, 25 Mar 2022, 13:19 Dan Eble, wrote:
> On Mar 25, 2022, at 04:35, Luca Fascione wrote:
> >
> > This video shows Hans Kuehner at work
> >
> > https://www.youtube.com/watch?v=BvyoKdW-Big
> >
> > at 4m36 shows beams being engraved, he appears to keep the instrument
> > orthogonal to the line direction
>
> It's fascinating, but those beams are nearly horizontal and Valentin's
> concern is about steep beams.
> —
> Dan
Re: Slanted Beams thickness
... which is what Valentin also just said. Sorry Valentin for the double up! L On Fri, 25 Mar 2022, 13:43 Luca Fascione, wrote: > Yes but look at the took and how it's held in the hand: you won't ever get > a clean line from it holding is slanted to the direction of motion, that > thing is meant to be pushed straight ahead... > > > > On Fri, 25 Mar 2022, 13:19 Dan Eble, wrote: > >> On Mar 25, 2022, at 04:35, Luca Fascione wrote: >> > >> > This video shows Hans Kuehner at work >> > >> > https://www.youtube.com/watch?v=BvyoKdW-Big >> > >> > at 4m36 shows beams being engraved, he appears to keep the instrument >> > orthogonal to the line direction, >> >> It's fascinating, but those beams are nearly horizontal and Valentin's >> concern is about steep beams. >> — >> Dan >> >>
Re: Slanted Beams thickness
Carl, if you look at the video I posted, could you explain how you see that
instrument being used other than along its tooling direction? (Like,
"diagonally" wrt the cutting edge at the tip.) It seemed to me it would be
very hard to get a straight line doing so...

L

On Fri, 25 Mar 2022, 13:52 Carl Sorensen, wrote:
> On Thu, Mar 24, 2022 at 6:46 PM Valentin Petzel wrote:
>
> > Hello,
> >
> > Lilypond handles slanted Beams in a geometrically weird way, that is, the
> > thickness is not measured as the shortest distance between the opposing
> > sides of the boundary, but as vertical distance. This results in Beams
> > getting optically thinner and closer the higher the slope is. But we can
> > very easily factor this out by adjusting the thickness to the slope. In
> > fact if we want to achieve a real thickness theta the adjusted thickness
> > would need to be theta·sqrt(1 + slope²). See attached an experimental
> > example.
>
> I think LilyPond handles beams not in a geometrically weird way, but in a
> geometrically correct way.
>
> If I understand correctly, I think that the slanted beams are defined not
> by the perpendicular thickness, but by the vertical "thickness", and that
> this is intentional.
>
> When the end of a beam sits on a staff, it should take up a fixed
> percentage of the staff space, which we call the beam thickness. In
> actuality, it is not the perpendicular thickness of the beam (the dimension
> perpendicular to the beam center line) but the vertical thickness (the
> dimension perpendicular to the staff lines). Of course, this does lead to a
> reduced perpendicular thickness, which might be considered the optical
> thickness.
>
> This models hand engraving, where chisels of a fixed width were used, and
> the chisels were always held with the ends perpendicular to the staff
> lines, so that the ends of the beams were vertical.
>
> If we want to have a setting to change that, I'm fine.
> But I don't think we should change the default, without strong evidence
> from good hand-engraved scores that this is the proper way to do it.
>
> The same is true of beam spacing. Beam spacing needs to match the vertical
> staff spacing, not the perpendicular spacing. Lilypond uses beam quanting
> to make sure that the beams interact properly with the staff lines.
>
> I note that Dorico offers "optical beaming" for slanted beams, but can't
> find any discussion of it.
>
> Thanks,
>
> Carl
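Carl's description can be checked numerically: if thickness is measured vertically, the perpendicular ("optical") thickness is the vertical one times cos(theta), and Valentin's sqrt(1 + slope²) factor exactly undoes that shrinkage. A small Python sketch (the 0.48 starting value is just an illustrative vertical thickness, my assumption, not quoted from the thread):

```python
import math

# Vertical vs perpendicular ("optical") beam thickness.  Measuring
# vertically, the perpendicular thickness shrinks by cos(theta) as the
# slope grows; multiplying the vertical thickness by
# sqrt(1 + slope^2) = 1/cos(theta) restores a constant optical thickness.
def perpendicular(vertical_thickness, slope):
    return vertical_thickness * math.cos(math.atan(slope))

target = 0.48  # illustrative vertical thickness, in staff spaces
for slope in [0.0, 0.2, 0.5, 1.0]:
    corrected = target * math.sqrt(1.0 + slope * slope)
    assert abs(perpendicular(corrected, slope) - target) < 1e-12

# Without correction the optical thickness decays by cos(theta),
# i.e. roughly 11% at slope 0.5:
assert abs(perpendicular(target, 0.5) / target - math.cos(math.atan(0.5))) < 1e-12
```

Which of the two thicknesses should be held constant is exactly the aesthetic question being debated; the math only says you cannot have both at once.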
Re: Scheme pattern for retrieving data in objects
I can see this happening. I thought a bit about it, and it seems to me there
are several factors potentially at play here:

- One is how much alignment there is between the API presented to (or rather,
*perceived* by) the user vs their mental model of what is going on in the
program: the more distant these two are, the harder a time the user will
have, and whatever can be done to reduce this gap will feel good to them.
Besides, I'd imagine for most people the library shipped with lilypond is at
least one step removed from their problem space and vocabulary, and the
intervening adaptation layer is likely to feel like a burden. I guess I'm
saying that lilypond is a bit like plain TeX, and there are many folks who
find that a layer like LaTeX is better aligned with their mental models and
how they prefer to reason about their content.

- I'm unclear how technical or interested in coding the lilypond userbase is:
it seems to me that the folks who come to lilypond because the sheets look
great are not necessarily inclined to learn how to program to achieve their
goals; they're after the goal, and the software engineering aspects of this
provide no joy to them. Again, you see something similar in TeX/LaTeX users
as well.

- I myself still find scheme weird: this seems to be largely because it's
just different from everything else I'm familiar with, both from a "look"
perspective, but much more importantly from a semantics perspective. I am
aware that as a descendant of Lisp it predates a lot of the common languages
of today, and in a number of ways it's rather more elegant and better
conceived as a language; I am just stating that it's *different*. I happen to
be a very driven person, so I'll muscle through that to get to the sheets and
sheet-making process I want, but I don't know how many other people have a
taste for this or find it enjoyable; I'd imagine most folks just find it
annoying instead.
And especially for folks that do this outside of their work time, being
impeded in one's goals for the sake of an advantage one may not find material
is unlikely to feel like a good state of things. Point being: scheme is very
different, and occasionally very subtly so, from languages like python, ruby,
javascript and all the everyday, garden variety stuff folks are likely to see
at work; I don't know how many people will find this a positive trait, vs how
many would find it an undesirable state of things.

I totally see what Han-Wen is saying, and I agree that a lateral-shift kinda
thing is not going to be that useful in the long term. However, I wrote this
to express why I don't think all these changes necessarily belong in that
category; I see clear cases where plenty of value is provided to users
instead.

Luca

On Sat, Apr 2, 2022 at 2:50 PM Werner LEMBERG wrote:
>
> >> Over the years, I've become extremely wary of syntactic sugar: it
> >> adds an extra barrier to usage/development because everyone not
> >> only has to learn Scheme, they also have to learn the (lilypond
> >> specific) idioms involved.
> >
> > I'm curious you say that, since my experience is precisely the
> > opposite: I've had far better results "selling" Lilypond to people
> > using syntactic sugar than basically anything else I can
> > identify. The people I've "converted" all want to be able to type
> > things like
> >
> >    \reverseMusic \foo
> >
> > rather than learning how to write the equivalent function in
> > Scheme. In other words, syntactic sugar keeps them from learning
> > Scheme as opposed to having to learn it.
> >
> > Am I missing something? Is my experience unique?
>
> No, your experience is not unique. I think that developers and normal
> users (but probably not Scheme wizards) have rather diametral views on
> this topic.
>
>
> Werner

-- 
Luca Fascione
Distinguished Engineer - Ray Tracing - NVIDIA
Re: C++ question on wrapper API for setting Guile fluids
On Thu, Apr 21, 2022 at 8:12 AM Jean Abou Samra wrote:
> Le 21/04/2022 à 04:57, Dan Eble a écrit :
> > {
> >   // dwc constructor calls scm_dynwind_begin ()
> >   Dynwind_context dwc;
> >   scm_dynwind_fluid (fluid1, value1);
> >   scm_dynwind_fluid (fluid2, value2);
> >   . . .
> >   // dwc destructor calls scm_dynwind_end ()
> > }
>
> Why not. There is likely just one caller that will need to introduce
> several fluids at once, but it is probably clearer this way.

I'd think you can up this by one, and get a cleaner looking piece of code, if
you implement scm_dynwind_fluid() as a forwarded method on your context:

{
  Dynwind_context dwc; // overloaded so you can call dwc(SCM_F_DYNWIND_REWINDABLE);
  dwc.fluid (fluid1, value1);
  dwc.fluid (fluid2, value2);
  dwc.unwind_handler (handler, data, flags); // overloaded for the SCM vs void* cases
  dwc.rewind_handler (handler, data, flags); // overloaded for the SCM vs void* cases;
    // maybe it ought to check if the constructor was DYNWIND_REWINDABLE and complain
    // (only in debug builds?) if things are not set up right
  dwc.free (something);
}

However, in all this, I must say I don't understand this passage in the
manual:

  The context is ended either implicitly when a non-local exit happens, or
  explicitly with scm_dynwind_end. You must make sure that a dynwind context
  is indeed ended properly. If you fail to call scm_dynwind_end for each
  scm_dynwind_begin, the behavior is undefined.

It seems to me the first sentence and the rest slightly contradict each
other: as I hear it, the first sentence says "either the C side calls
scm_dynwind_end(), OR a non-local exit happens", whereas the rest seems to be
saying "the C side _shall_ call scm_dynwind_end". This bothers me, because in
the second case our C++ is nice and lovely, but in the first meaning the
destructor of dwc has to somehow figure out whether a non-local exit has
happened, and avoid calling scm_dynwind_end().
And I don't think the scm library can cope on its own, because these things
look to me like they would nest, so making scm_dynwind_end() idempotent
without some sort of explicit marker on the scope seems... hard.

So yes, I'd think RAII is the idiomatic way to go, and I would add the
wrappers because they make the pattern cleaner, but do figure out what's up
with this last question first, because it could bring it all crumbling down.

HTH,
Luca
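For what it's worth, the guarantee the RAII wrapper is after (the "end" action running on both normal fall-through and non-local exits) is the same one Python context managers give; this analogy sketch (Python, not Guile's actual C API; the class name merely mirrors the discussion) shows the shape of it:

```python
# Analogy only: the RAII question above is whether the "end" of a wind
# context runs on a non-local exit.  A Python context manager exhibits the
# behaviour the C++ wrapper would like: __exit__ fires both on normal
# fall-through and when an exception unwinds the block.

class DynwindContext:
    def __init__(self):
        self.ended = False

    def __enter__(self):       # plays the role of scm_dynwind_begin ()
        return self

    def __exit__(self, *exc):  # plays the role of scm_dynwind_end ()
        self.ended = True
        return False           # do not swallow the exception

ctx = DynwindContext()
try:
    with ctx:
        raise RuntimeError("non-local exit")
except RuntimeError:
    pass
assert ctx.ended  # the context was ended despite the unwinding
```

The open question in the email still stands, though: in C++ the destructor always runs on stack unwinding, so the worry is precisely whether calling scm_dynwind_end() a second time (after Guile already ended the context on its own non-local exit) is safe.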
Re: C++ question on wrapper API for setting Guile fluids
I wonder if you can 'chain' them:

Dynwind_context dwc2(dwc);

(You can at a minimum 'pause' dwc so as to emit a runtime message that the
"wrong" thing is happening, but I guess you could hand yourself to your
parent and do more sophisticated shenanigans too... I must say I loathe this
stateful stuff though.)

L

On Thu, Apr 21, 2022 at 12:23 PM Dan Eble wrote:
> On Apr 21, 2022, at 02:55, Luca Fascione wrote:
> >
> > I'd think you can up this by one, and get a cleaner looking piece of code
> > if you implement scm_dynwind_fluid() as a forwarded method on your
> > context:
> ...
> > dwc.fluid (fluid2, value2);
>
> Here's something that does bother me. That reads as if dwc holds state
> affecting the outcome, which is untrue.
>
> {
>   Dynwind_context dwc;
>   // . . .
>
>   {
>     Dynwind_context dwc2;
>     // . . .
>     dwc.free (p); // not what it seems
>     // . . .
>   }
> }
>
> The scm_ functions operate implicitly on the current context. Dressing
> them differently would be confusing.
> —
> Dan

-- 
Luca Fascione
Distinguished Engineer - Ray Tracing - NVIDIA
Re: C++ question on wrapper API for setting Guile fluids
On Thu, Apr 21, 2022 at 11:46 PM Jean Abou Samra wrote:
> Well, the C++ and Scheme interfaces can feel different and idiomatic
> in their respective languages as long as they share the same
> underlying implementation.

I think this is a super important goal. In fact, I'd upgrade 'can' to
'should' :-)

> In the same vein, although Guile has scm_dynamic_wind (), the
> recommended C interface is scm_dynwind_begin () / scm_dynwind_end ()
> because that is more scalable in C code.

Yes, and that supports your idea of using RAII to bookend it correctly in the
face of exceptions and such things. Which is very idiomatic in modern C++ and
feels quite natural to me.

Dan had an objection about this case:

{
  Dynwind_context dwc;
  // . . .
  {
    Dynwind_context dwc2;
    // . . .
    dwc.free (p); // not what it seems
    // . . .
  }
}

where the problem is that dwc.free(p) is actually effectively acting as if it
were dwc2.free(p), because the API doesn't pass around the context like the
C++ wrappers appear to do; rather it statefully "just goes for it". This is a
design decision of Guile, obviously. However, it seems to me this has
possibly uncommon semantics even if it were implemented on the
scheme-language side of things, doesn't it? I guess I'm asking whether people
would want/need to do this on purpose, and why.

It seems to me that to achieve this behaviour one would have to capture the
dwc context and then invoke dwc.free() on that context from deeper inside its
own stack, and I'm not clear what this means for the frames intervening
between the calling frame and the callee frame, the former being a current
parent of the latter. Maybe it's ok; I normally think of continuations as
things that deal with a stack that has been unwound, not deeper parts of a
stack that is still being executed.
Whereas in a way this is what {} scoping does, it's weird that there could be
intervening function calls and this would still be allowed, because this
second scenario is closer to implementing TCL's upvar/uplevel mechanism,
which I've learned Schemers really don't like. For these reasons I suspect
that Dan's example can easily happen by mistake, but wouldn't be something
that folks have a legitimate, every-day use for. Or is it?

Anyways, I was reading the source of Guile from the repo, and I found that
dynwind_begin() does this:

void
scm_dynwind_begin (scm_t_dynwind_flags flags)
{
  scm_thread *thread = SCM_I_CURRENT_THREAD;
  scm_dynstack_push_frame (&thread->dynstack, flags);
}

So I'm thinking that (maybe in some appropriate debug mode) if
Dynwind_context were to capture a copy of &thread->dynstack, it could then
check whether the stack has moved or not, and error out in some useful way if
the (dyn)stack pointer has changed. Given C++ semantics, I suspect it can
only change by being deeper, but one way or the other, this could work.

Mind you, as much as the proposed RAII idea is kinda slightly dangerous, I
don't think we should protect too much against folks doing irresponsible
things with the stack; as long as the documentation is clear about how the
wrapper works, this should be fine.

On the other hand, if this concept of messing with dwc from a stack frame
deeper inside is important, dynstack.h has all the controls for capturing and
standing up local stacks, so one could instead do a full capture in the
constructor (say calling scm_dynstack_capture_all, or something like that) to
allow for this behaviour (and then the forwarding methods would adjust the
dynstack to always be pointed as expected). It seems a bit overkill to me,
because I'm not used to the semantics implied by doing this, but all the same
it appears this might work.
-> footnote: these arguably all appear to be internal API entrypoints, I'm not sure what this means in terms of it being a polite thing or not to call them from lilypond's codebase. Earlier in the thread Jean said: > Making a method of the context is an attractive proposal, as > it prevents from forgetting to introduce a context. > > Actually, the specific use case is much narrower than this particular > set of methods: I don't need C++ wrappers for all dynwind-related Guile > APIs, I just have one need for fluids. The example in the first email was > a straw man for the sake of explanation. I think the first observation is very good: making it easiest to spot wrong code without paying large amounts of attention has been a very good strategy for me in the past. And for the same reason, assuming the scm_dynwind_XXX set of calls is small, I would either wrap all of them (because they're all small and easy) or very clearly document the wrapper class as "if you need this method, add it here and this is how": you definitely don't want to find yourself in a halfway house where there are all sorts of exceptions that "yes these three methods could have been wrapped but we didn't" and now nobody remembers any longer why a decision was taken one way or the other. It's a sm
Re: Quotes around \consists argument?
On Mon, Apr 25, 2022 at 10:47 AM David Kastrup wrote:
> it seems somehow wrong to see stuff without quotes
> that has not previously been defined.

Actually, while I think I follow why you're saying this, it's been my
experience (both as a user and as a provider of software) that most people
find it difficult to reconcile in their head that the same entity flows
quoted in some contexts and unquoted in others.

In other words: imagine we had a prefix form for assignment, namely \assign,
with the same meaning as the current operator '='. I'd think folks would find
it weird to have

\assign "theanswer" 42

and then dereference it as \theanswer, as opposed to having

\assign theanswer 42

as the assign/define statement. I think this is because its being an unquoted
string (Perl folk call these barewords) makes it feel more like an
identifier, even if technically it's a string (for the time being). It also
has some advantages: for example, it's easier to work with names that are
always unquoted when you're grepping for them, that kind of scenario.

I think things that work like language-level names should, for this reason,
flow unquoted, even if this requires effectively supporting barewords.

L
-- 
Luca Fascione
Distinguished Engineer - Ray Tracing - NVIDIA
Re: Quotes around \consists argument?
Yes I underground that, I was meaning for person's mental parsers, it helps that tokens (in an informal sense) always look the same L On Mon, 25 Apr 2022, 14:01 David Kastrup, wrote: > Luca Fascione writes: > > > I think this is because it being an unquoted string (PERLfolk call these > > barewords) makes it feel more like an identifier, even if technically > it's > > a string (for the time being). > > It also has some advantages for example it's easier to work with names > > being always unquoted when you're grepping for them, kinda scenario > > > > I think things that work like language-level names, for this reason, > should > > flow unquoted, even if this requires effectively supporting barewords. > > LilyPond's input syntax treats strings and unquoted words identical in > most circumstances: essentially you can use quote marks to have things > interpreted as a single word that would otherwise be split into several > syntactic entities or be considered a notename (for example). > > -- > David Kastrup >
Re: Quotes around \consists argument?
*understood, of course On Mon, 25 Apr 2022, 14:14 Luca Fascione, wrote: > Yes I underground that, I was meaning for person's mental parsers, it > helps that tokens (in an informal sense) always look the same > > L > > On Mon, 25 Apr 2022, 14:01 David Kastrup, wrote: > >> Luca Fascione writes: >> >> > I think this is because it being an unquoted string (PERLfolk call these >> > barewords) makes it feel more like an identifier, even if technically >> it's >> > a string (for the time being). >> > It also has some advantages for example it's easier to work with names >> > being always unquoted when you're grepping for them, kinda scenario >> > >> > I think things that work like language-level names, for this reason, >> should >> > flow unquoted, even if this requires effectively supporting barewords. >> >> LilyPond's input syntax treats strings and unquoted words identical in >> most circumstances: essentially you can use quote marks to have things >> interpreted as a single word that would otherwise be split into several >> syntactic entities or be considered a notename (for example). >> >> -- >> David Kastrup >> >
Re: LSR and Documentation/snippets/new
Fwiw, I like it, there's all sorts of weird edge cases in there that on occasion are quite handy L On Sat, 7 May 2022, 11:45 Sebastiano Vigna, wrote: > > > On 7 May 2022, at 09:30, Jean Abou Samra wrote: > > > > - What is the LSR's bus factor? As far as I can see, 1, > > since while Sebastiano was inactive it remained stuck > > in 2.18 (I for one thought it was dead and buried, > > I stopped interacting with it because making snippets > > compatible with 2.18 was extra work, and I'm seldom > > seeing links to it on mailing lists these days), > > True, my fault. I'll try to be more responsive in the future. In this > round I have revamped a lot of things that I was terrified to touch with a > ten foot pole, as I thought they would break (it's been almost 20 years). > > > > > - What can be done to increase this bus factor? > > Well, if you see it's not really used, we can make it go to infinity by > shutting it down... > > Ciao, > > seba > > >
Re: GDB giving immediate segfault on LilyPond startup?
While trying to see if I could help out Jean, I found this piece of documentation about libgc: https://github.com/ivmai/bdwgc/blob/master/doc/debugging.md BDWGC is the project name for libgc.so Quoting from that page: If the fault occurred in GC_find_limit, or with incremental collection > enabled, this is probably normal. The collector installs handlers to take > care of these. You will not see these unless you are using a debugger. Your > debugger should allow you to continue. It's often preferable to tell the > debugger to ignore SIGBUS and SIGSEGV ("handle SIGSEGV SIGBUS nostop > noprint" in gdb, "ignore SIGSEGV SIGBUS" in most versions of dbx) and set a > breakpoint in abort. The collector will call abort if the signal had > another cause, and there was not other handler previously installed. It's possible this will be enough for debugging our Guile-based application. However, just in case the original page might move, there is a second mechanism to try if the above proves unsuitable: If the application generates an unhandled SIGSEGV or equivalent, it may > often be easiest to set the environment variable GC_LOOP_ON_ABORT. On many > platforms, this will cause the collector to loop in a handler when the > SIGSEGV is encountered (or when the collector aborts for some other > reason), and a debugger can then be attached to the looping process. This > sidesteps common operating system problems related to incomplete core files > for multi-threaded applications, etc. I don't think lilypond requires this deeper level of trickery, but just in case. HTH, Luca On Wed, May 18, 2022 at 1:08 AM Jean Abou Samra wrote: > > > Le 17/05/2022 à 13:06, Jean Abou Samra a écrit : > > Hi, > > > > After upgrading to Ubuntu 22.04 LTS, I can no longer use GDB > > with LilyPond, although it runs fine outside of GDB. > > [...] > > > Thanks to private replies, I have learnt that this is apparently > expected, and it works to type "continue" when this segfault > appears. 
> > > -- Luca Fascione
Re: GDB giving immediate segfault on LilyPond startup?
On Wed, May 18, 2022 at 9:59 PM Jean Abou Samra wrote:
> Le 18/05/2022 à 13:54, Luca Fascione a écrit :
> >
> > Quoting from that page:
> > [...]
> > The collector will call abort if the signal
> > had another cause, and there was not other handler previously
> > installed.
>
> That did prevent GDB from stopping on startup with SIGSEGV, but
> it also prevented it from halting when the segfault I was actually
> interested in was encountered.

If that's the case, the breakpoint you put in the collector, where it calls
abort, is not triggering. One way to set up that breakpoint is:

- use the 'continue' method you discovered and run your program
- let it crash in your code
- go up a frame or two, until you're in the frame that calls 'abort()', which
is in the signal handler code of libgc, and set a breakpoint there (*)
- call the commands above that make SIGSEGV quiet
- call 'run ...' again (GDB will ask 'should I restart the program
completely?', and you'll go: "yeah sure". The breakpoints stay where you had
them)

Now it'll trigger the breakpoint at the real SIGSEGV, and you can go up your
stack a few frames and see what's going on. Profit.

(*) this position in the code will not move, so it might be worth documenting
it in lilypond's debugging procedure manual; libgc appears to be a very
stable piece of code at this point.

HTH,
L
-- 
Luca Fascione
Building lilypond on osx
Hi, I'm setting up to build lilypond on osx, and I'm trying to keep tidy notes so we might add them to the CG once this is all worked out. At this time I'm using homebrew as the package manager to install dependencies.

OSX installations ship some of the packages lilypond needs, but as versions older than required, and the system copies cannot be changed. To deal with this, homebrew introduced the concept of keg-only packages, which are made available only for compilation work (homebrew likes to build on the user's machine if it can).

As I'm working through setting up the parameters to pass to configure, I was wondering what the recommended approach is to tell the current build system to use (for example) /usr/local/Cellar/bison/3.8.2/bin/bison in lieu of /usr/bin/bison (I also need to repoint flex and maybe gettext; gettext is strange).

Thanks for your help
Luca
-- Luca Fascione
Re: Building lilypond on osx
Thanks a lot Jonas, this is very helpful.

On Thu, May 19, 2022 at 8:29 PM Jonas Hahnfeld wrote:
> The usually recommended way (and that's also what I use to build the
> packages for macOS) is export PATH="$(brew --prefix bison)/bin:$PATH"
> in the session before configuring.

So I can rely on the build system capturing the resolved path to bison during configure, like it would for CXX/CXXFLAGS? Good to know, thanks.

> You might also set specific env variables (whose names you'd need to look
> up in the configure script), but changing PATH usually works fine.

I did look in the configure help; I couldn't find an env var to adjust how bison is resolved. I am passing in directories to resolve the fonts, for example (say --with-texgyre-dir=/usr/local/Cellar/texlive/58837_1/share/texmf-dist/fonts/opentype/public/tex-gyre/). There are quite a few options to specify linkage behaviour that I did find, though. Can't say that I fully understand why we need both URW and Gyre fonts; I might ask Werner at some point.

> > (I also need to repoint flex and maybe gettext, gettext is strange)
>
> Not sure, flex should be sufficient on macOS

It is. This was me misreading my notes. Sorry for the mixup.

Texinfo seems like it'll need some love (mac seems to have 4.8 and we need 6.1 if I read the docs right); I'll try prepending $(brew --prefix texinfo)/bin to PATH like you suggested for bison.

Guile also seems awkward:

checking for guile-2.2 >= 2.2.0... no
<... snip ...>
checking for guile... guile
checking guile version... 3.0.8

(FWIW I have both 2.2.7 and 3.0.8 installed:

% ll /usr/local/opt/guile*
lrwxr-xr-x 1 x x 21 Feb 27 18:37 /usr/local/opt/guile -> ../Cellar/guile/3.0.8
lrwxr-xr-x 1 x x 25 Jan 16 18:18 /usr/local/opt/guile@2 -> ../Cellar/guile@2/2.2.7_1
lrwxr-xr-x 1 x x 21 Feb 27 18:37 /usr/local/opt/guile@3 -> ../Cellar/guile/3.0.8
)

Where is the testing/detection for Guile set up, roughly?

Thanks again,
L
-- Luca Fascione
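As an aside, the PATH mechanism Jonas describes is easy to verify without touching a real Homebrew cellar. The sketch below simulates a keg-only layout with a stub bison in a temporary directory; the cellar path, version number, and the stub's output string are all invented for the illustration:

```shell
# Simulate a keg-only install: a tool that exists only under a
# cellar-like prefix, not on the default PATH. All names are made up.
cellar=$(mktemp -d)
mkdir -p "$cellar/bison/3.8.2/bin"
printf '#!/bin/sh\necho "bison (keg-only stub) 3.8.2"\n' \
    > "$cellar/bison/3.8.2/bin/bison"
chmod +x "$cellar/bison/3.8.2/bin/bison"

# Same trick as: export PATH="$(brew --prefix bison)/bin:$PATH"
export PATH="$cellar/bison/3.8.2/bin:$PATH"

# Anything run from this shell (including configure) now resolves
# the keg-only copy before the system one.
command -v bison
bison
```

Since configure records the resolved tool path at configure time, exporting PATH once in the session before running it is enough.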
Re: Building lilypond on osx
On Fri, May 20, 2022 at 4:31 PM Jonas Hahnfeld wrote:
> On Thu, 2022-05-19 at 21:50 +0200, Luca Fascione wrote:
> > So I can rely on the build system capturing the resolved path to
> > bison during configure, like it would for CXX/CXXFLAGS?
>
> [...] you can set all-caps variables of
> the programs you want to specify, so for example BISON.

Cool, I'll make sure I understand how the capturing of the configure-time values works and use this mechanism then; sounds easy and clean enough.

> > Can't say that I fully understand why we need both URW and Gyre
> > fonts, I might ask Werner at some point.
>
> URW++ and TeX Gyre are LilyPond's default text fonts. The build system
> uses their paths to directly install them alongside LilyPond.

I do follow the rationale for shipping one of the sets; what I'm confused about is why _both_: they're the same font set, afaiu (semantically, at least).

> > Where is the testing/detection for Guile set up, roughly?
>
> If you still want to check, it's in aclocal.m4 - the functions
> STEPMAKE_GUILE and STEPMAKE_GUILE_DEVEL.

Great, thanks Jonas.

As a side question, may I ask: I thought I saw commits pass by in Colin's emails indicating that GUB and stepmake were being removed. Did I misunderstand what the plan is? Is there a place I can get myself up to speed on this? (Just curious; I don't have much of an opinion one way or the other atm.)

Many thanks again Jonas, this was very useful to me,
L
-- Luca Fascione
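The all-caps variable mechanism Jonas mentions can be sanity-checked with a stub script that mimics what autoconf-generated configure scripts do with such variables. The stub below is of course not LilyPond's real configure, and the bison path passed in is just an example:

```shell
# Minimal stand-in for an autoconf-style configure script: it takes the
# tool from an all-caps variable (BISON) if set, otherwise falls back
# to plain PATH lookup. 'configure-stub' is a made-up name.
cat > configure-stub <<'EOF'
#!/bin/sh
: "${BISON:=bison}"
echo "configure: using BISON=$BISON"
EOF
chmod +x configure-stub

# Per-invocation override, as in: BISON=/path/to/bison ./configure
BISON=/usr/local/opt/bison/bin/bison ./configure-stub
```

The value is captured when the script runs, which is why setting the variable (or PATH) once before configuring is sufficient for the whole build.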
Re: Point and Click & emacs
Jean, I think this is a BWV1079...
L

On Sat, May 21, 2022 at 9:28 PM Jean Abou Samra wrote:
> Le 21/05/2022 à 20:56, Immanuel Litzroth a écrit :
> > Today I found out it's quite easy to get point and click working
> > with emacs as pdf viewer when using the pdf-tools package.
> > 1. Install pdf-tools: https://github.com/politza/pdf-tools
> >    It's available as a package in emacs.
> >
> > 2. put this in your .emacs config
> >
> > (defconst lilypond-filename-rx
> >   (rx (seq
> >        string-start
> >        (group (1+ print))
> >        ":"
> >        (group (1+ digit))
> >        ":"
> >        (group (1+ digit))
> >        ":"
> >        (group (1+ digit))
> >        string-end)))
> >
> > (defun lilypond-pdf-links-browse-uri-function (uri)
> >   "Check if the link starts with texedit and just reroute to emacs.
> > Otherwise call pdf-links-browse-uri-default."
> >   (cl-check-type uri string)
> >   (let* ((obj (url-generic-parse-url uri))
> >          (match (string-match lilypond-filename-rx (url-filename obj))))
> >     (unless match
> >       (message "Could not match %s" (url-filename obj)))
> >     (let* ((filename (match-string 1 (url-filename obj)))
> >            (line (string-to-number (match-string 2 (url-filename obj))))
> >            (pos (string-to-number (match-string 3 (url-filename obj))))
> >            (buf (or (find-buffer-visiting filename)
> >                     (find-file-noselect filename))))
> >       (pop-to-buffer buf)
> >       (goto-char (point-min))
> >       (forward-line (1- line))
> >       (forward-char pos))))
> >
> > (setq pdf-links-browse-uri-function
> >       'lilypond-pdf-links-browse-uri-function)
> >
> > 3. Open a lilypond generated pdf with \PointAndClickOn and click away.
> >
> > The code might need some refining but it does work here quite well.
> > Immanuel
>
> Hello,
>
> What kind of answer to this post do you await?
>
> Best,
> Jean

-- Luca Fascione
Re: Point and Click & emacs
Maybe we could see if we can rope Immanuel into contributing a short segment to the user docs?
L

On Sun, 22 May 2022, 14:17 Jean Abou Samra, wrote:
> Le 21/05/2022 à 22:38, Luca Fascione a écrit :
> > Jean, I think this is a BWV1079...
>
> :-)
>
> If the goal is to make this method known to other users,
> the lilypond-user list would be a better place.
>
> Best,
> Jean
Re: Building lilypond on osx
Yes, Werner also explained the similar issue with the Cyrillic glyphs not being in Gyre: there seems to be some licensing hiccup with the original author of the Cyrillic set, so they can't be transported into Gyre. Not sure of the specifics, though.
L

On Sun, 22 May 2022, 14:21 Jean Abou Samra, wrote:
> Le 21/05/2022 à 07:49, Luca Fascione a écrit :
> > I do follow the rationale for shipping one of the sets, what I'm
> > confused about is why _both_, they're the same font set, afaiu
> > (semantically, at least)
>
> Cf.
>
> commit 500febd2a5fe0ebdf0383fa034e5571671e57a6a
> Author: Daniel Benjamin Miller
> Date: Sun Jun 21 14:57:51 2020 -0400
>
>     Use URW fonts instead of TeX Gyre as the default
>
>     These fonts are typographically preferable and include support for
>     Greek and Cyrillic as well as Latin (Greek support had defaulted to the
>     unusable Gyre Greek instead). The Gyre fonts are kept as a fallback
>     before going to the DejaVu substitute (mainly for Vietnamese support).
Re: Guile 3.0
I would like to bring up an option that I'd expect a fair few of you will _really_ not like. I'm doing this not because I necessarily believe it to be a particularly good way forward, but rather because I feel it is sometimes useful to articulate in words why an "obviously awful idea" is, in fact, awful. Or maybe it isn't.

So here it goes: what if we shipped the guile source? (I mean, in a subdirectory of lilypond's source.)

At a quick look, it doesn't seem to be particularly hard to build (comparatively speaking), and its dependencies are mostly fairly benign stuff that appear to be runtime dependencies anyways (like libgc and such; point being, you need to install all that when you install guile anyways; if I read it right, only 'gnu-sed' is a compile-only dependency).

One benefit I see is that we could ship it as vX.Y.Z plus a single patch, so that it fundamentally documents what we've done to it (albeit git history could also serve this purpose). As an example, it seems to me that for 3.0.8 we'd want Jean's patch about the line number reporting in errors, and I recall a discussion about storing precompiled scripts that at that point we might consider fixing in the guile source ourselves. On this second point, I am confused about whether the compiled-scripts issue is problematic in 2.2.x also or not.

I'm saying this because it seems to me that, different from python, there's no real collaboration/interaction between the guile installed on one's system and the interpreter inside lilypond, and this would completely decouple us from the rest of the distribution's concerns about which version of guile to ship (and seeing some distros use lilypond as an element in deciding which version to ship, it's possible they wouldn't ship guile at all if it wasn't for lilypond?)
Of course I'm seeing this as a possibly viable "lesser evil" kinda approach only because the evidence we have is that the project activity is minimal and their interest in addressing our needs (as exemplified by the post Jean shared) is also near-zero. Which implies: we unblock ourselves without taking on this huge burden of staying caught up with upstream (and as an added "bonus" we can also decide to apply someone else's patches should they fix a problem we have).

L

On Sun, May 22, 2022 at 5:42 PM David Kastrup wrote:
> Jean Abou Samra writes:
>
> > Le 22/05/2022 à 17:04, David Kastrup a écrit :
> >> [...]
> >>> Also see
> >>>
> >>> guile$ git shortlog -ns --since="2 months ago"
> >>>      2 Timothy Sample
> >>>      1 Ludovic Courtès
> >>>      1 Mikael Djurfeldt
> >>
> >> Well, it's the stable release branch.
> >
> > What would be the development branch?
> >
> > $ git branch --show-current
> > main
> > $ git shortlog -ns v3.0.8..HEAD
> >      6 Ludovic Courtès
> >      2 Timothy Sample
> >      1 Andy Wingo
> >      1 Mikael Djurfeldt
> >      1 Rob Browning
> >      1 Sergei Trofimovich
> >      1 Vijay Marupudi
>
> I am not in control of Guile's development models or lack thereof. I
> simply reported the information on their website that distributors would
> go by.
>
> >> [...]
> >> No, Guile is in trouble then. I mean, it is in trouble now. But if
> >> distributors can easily do version-hopping on their own initiative and
> >> end up with one version of Guile they are going to ship for their whole
> >> distro, it would be good if that does not end up in making LilyPond
> >> disappear. That's all.
> >>
> >> What we _recommend_ and use ourselves is an entirely different matter.
> >
> > OK, but in that case, what is your request concretely?
> > Current LilyPond master works with Guile 3.0.
>
> That's essentially all. I wasn't sure of that from the discussion and
> from what I remembered from previous exchanges.
>
> > Do you want to add it to the CI?
> I am afraid that I am not tracking the development of Guile and the CI
> resources of LilyPond well enough to venture any opinion that would be
> more qualified than that of the current developers.
>
> --
> David Kastrup

-- Luca Fascione
Re: Guile 3.0
On Sun, May 22, 2022 at 8:02 PM David Kastrup wrote:
> What do you mean with "shipped"?

I mean that when you clone the lilypond repo you'd find one more directory, say guile-2.2.7+/ or guile-3.0.8+/ or something like that. In fact we'd likely end up compiling a slightly different version thereof, as I was saying, because we'd apply a patch or two before building.

> I don't
> think it makes sense with stuff that is supposed to be up to date with
> current versions.

I would normally definitely agree with you if these two conditions were met:
- the project was evolving at a non-geologic speed
- there was evidence that if we encountered a problem, they would assist us

I mean: the guile subsystem is the heart of lilypond; being held ransom (or rather, completely ignored) by a group of people that seems to show no interest in our issues is not what I'd consider a strategy of growth and fruitful collaboration.

Besides: you say "current" version, but we're in this thread exactly because we can't, in fact, use the current version. (By a mile, I think the only one that works well is 2.2.7, right?) This hodgerypockery would at least give us a true current version, but we'd have to patch it; then again, we'd be in a position where we _can_ patch it.

So at the cost of rocking the cage a bit hard, I came asking the uncomfortable question: what would happen if (for this unique circumstance) we did what one would normally consider poor practice?

L
-- Luca Fascione
Re: Guile 3.0
On Sun, May 22, 2022 at 9:05 PM Jonas Hahnfeld wrote:
> On Sun, 2022-05-22 at 20:14 +0200, Luca Fascione wrote:
> > So at the cost of rocking the cage a bit hard, I came asking the
> > uncomfortable question:
> > what would happen if (for this unique circumstance) we'd do what one
> > would normally consider poor practice?
>
> Let's call your proposal by its true, scary name: we would essentially
> *fork* Guile and, in the longer term, make it fit exactly what we need
> for LilyPond.

Well, I was not thinking the delta between "true" Guile and "ours" would ever get big. If it did, that is what I'd call a fork. And no, I'm _not_ advocating that.

I'm more thinking something along the lines of make 3.81, actually: largely "mainline", plus a few small patches to adjust little details. (*) It's clear that a real fork is not useful here; I agree with you completely.

(*) 3.81 is famously incompatible with 3.82, and many large make systems are stuck in .81 land. So it's common for folks in that condition to build their own make, applying a few (3? maybe 4) small patches to fix a few problems with the program instead. Effectively they run some kind of 3.81+.

> The second implication is that we get technologically stuck.

Well, the idea is that, much like now you'd state a dependency against Guile 2.2.x, you would then just ship the version you want. I don't see much of a difference there. (Again, the key in mind is that the changes from us are a _small_ set, so the fact that we would on occasion change the base checkout and reapply the diffs should be a small overhead. If you compound this with changes built to maximize the chances of eventual adoption, you'd eventually get to a place where these changes are zero.)

> With all that said, I think there are good reasons why things are
> considered bad practice.

There were what felt like good reasons at the time when the practices were established, yes.
However, as the hypotheses mutate, there can come a point where the conclusions don't follow any longer.

For example: one doesn't want to fork because
- it duplicates code and it's difficult to keep up the two diverged branches
  -> with RCS _definitely_; with git I'm not so sure. The space cost of duplication these days is zero. Time taken compiling one more repo, also zero.
  -> this means that this part of the reason is now reduced to understanding whether the divergence is large or small

I've seen things that were bad practice when I was a student become acceptable or recommended now. I've come to see that "old wisdom" is sometimes not that wise after all (premature optimization, for example, although in that case that's more an issue of misunderstanding of the original meaning than actual change).

> Similar discussions have already taken place
> before, and I'm not sure if we're adding value by repeating them. Maybe
> we can come back to the original topic of this thread?

Forgive me for making poor use of your and the others' time. I thought this might turn out to be pertinent if it opened up new ways of thinking about this choice.

L
-- Luca Fascione
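The "vX.Y.Z plus a small patch set" workflow being discussed is mechanically trivial with plain git; here is a sketch of the mechanics only, with the directory name, file, and patch invented for the example (this is not a proposal for actual repository layout):

```shell
# Sketch of vendoring upstream sources plus a small local patch set.
# 'guile-vendor', 'module.scm' and 'fix-line-numbers.patch' are made up.
mkdir -p guile-vendor && cd guile-vendor
git init -q .
printf 'upstream content\n' > module.scm

# The local delta lives as an ordinary patch file next to the sources,
# which documents exactly what was changed relative to upstream:
cat > fix-line-numbers.patch <<'EOF'
--- a/module.scm
+++ b/module.scm
@@ -1 +1,2 @@
 upstream content
+local fix
EOF

git apply fix-line-numbers.patch
tail -n 1 module.scm
```

Rebasing onto a new upstream release then amounts to replacing the imported tree and re-running `git apply` on the (small) patch set.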
Re: Guile 3.0
This also makes a lot of sense to me, yes.
L

On Mon, 23 May 2022, 13:12 Jean Abou Samra, wrote:
> Le 22/05/2022 à 21:52, Luca Fascione a écrit :
> >
> > On Sun, May 22, 2022 at 9:05 PM Jonas Hahnfeld wrote:
> >
> >     On Sun, 2022-05-22 at 20:14 +0200, Luca Fascione wrote:
> >     > So at the cost of rocking the cage a bit hard, I came asking the
> >     > uncomfortable question:
> >     > what would happen if (for this unique circumstance) we'd do what one
> >     > would normally consider poor practice?
> >
> >     Let's call your proposal by its true, scary name: we would essentially
> >     *fork* Guile and, in the longer term, make it fit exactly what we need
> >     for LilyPond.
> >
> > Well, I was not thinking the delta between "true" Guile and "ours"
> > would ever get big.
> > If it did, that is what I'd call a fork. And no, I'm _not_ advocating that.
> >
> > I'm more thinking something along the lines of make 3.81, actually:
> > largely "mainline", plus a few small patches to adjust little details. (*)
> >
> > It's clear that a real fork is not useful here, I agree with you completely
> >
> > (*) 3.81 is famously incompatible with 3.82, and many large make
> > systems are stuck in .81 land.
> > So it's common for folks in that condition to build their own make
> > applying a few (3? maybe 4) small patches to fix a few problems with
> > the program instead. Effectively they run some kind of 3.81+
> >
> >     The second implication is that we get technologically stuck.
> >
> > Well, the idea is that much like now you'd state a dependency against
> > Guile 2.2.x, you would then just ship the version you want. I don't
> > see much of a difference there.
> > (Again, the key in mind is that the changes from us are a _small_ set,
> > so the fact that we would on occasion change the base checkout and
> > reapply the diffs should be a small overhead. If you compound this
> > with changes built to maximize the chances of eventual adoption, you'd
> > risk eventually getting into a place where these changes are zero)
> >
> >     With all that said, I think there are good reasons why things are
> >     considered bad practice.
> >
> > There were what felt like good reasons at the time when the practices
> > were established, yes.
> >
> > However, as the hypotheses mutate, it can come a point where the
> > conclusions don't follow any longer.
> >
> > For example:
> > one doesn't want to fork because
> > - it duplicates code and it's difficult to keep up the two diverged branches
> >   -> with RCS _definitely_, with git I'm not so sure. Space cost of
> >   duplication these days is zero. Time taken compiling one more repo,
> >   also zero.
> >   -> this means that now this part of the reason is reduced to
> >   understanding whether the divergence is large or small
> >
> > I've seen things that were bad practice when I was a student become
> > acceptable or recommended now.
> > I've come to see that "old wisdom" is sometimes not that wise after
> > all (premature optimization for example, although in that case that's
> > more an issue of misunderstanding of the original meaning, than actual
> > change).
> >
> >     Similar discussions have already taken place before, and I'm not
> >     sure if we're adding value by repeating them. Maybe we can come
> >     back to the original topic of this thread?
> >
> > Forgive me for making poor use of your and the others' time.
> > I thought this might turn out to be pertinent if it opened up new ways
> > of thinking about this choice.
>
> I might have introduced confusion with the mention of the bug
> with source locations. In the specific case of LilyPond, this
> bug is not very important. Almost all the warnings Guile gives
> are pure noise in our case anyway (I haven't taken the time to
> prepare an MR for silencing them yet).
> It was to illustrate the
> fact that there can be serious bugs in Guile, and one could
> well affect us.
>
> Apart from that, including the Guile sources in our tree and
> building with them is something I would only do in a desperate
> situation, e.g. Guile introducing changes that are fundamentally
> incompatible with LilyPond's use case. It will
Re: Guile 3.0
On Mon, May 23, 2022 at 9:19 PM Han-Wen Nienhuys wrote:
> I'm missing the context for this proposal.

Something in the original thread from Jonas made me think some distros were wrangling with keeping up distributing guile only for lilypond's benefit. It's possible I misunderstood, and he actually meant that just regarding v1.8, and that 2.2.x would be available even without lilypond. But the question remains for a time when they'd like to not ship 2.2.x and we're not ready to move to 3.x.y. I do realize neither is very likely, and I know Jean showed we work just fine with 3.0.8; actually there are certain advantages there, if you squint at it just the right way.

That led me to think that if guile is such a rare dependency, we wouldn't want lilypond to be held ransom, you see. The other half of the thought was a response to Jean's frustration at feeling unable to land a simple bugfix into the guile project: I'm sensitive to that; it's a kind of behaviour that gives me a hard time too.

> Shipping dependencies ("vendoring") can be very useful, because it
> reduces the combinatorial space of version combinations that you have
> to support. Ghostscript does this with a lot of its dependencies (for
> example, libpng, IIRC)

Yes, I thought that for us there would be the added advantage of being able to apply a small set of fixes while we wait for mainline to mop them up. If that while is a time period measured in lustra, I thought this could help us stay fresh. Jean expressed the hope to instead find a way to help the guile project kick back into some kind of gear, which, if achievable, I would agree would be a substantially better outcome.

> Vendoring Guile seems totally impractical. The Guile compilation does
> some sort of bootstrapping, which makes building it from scratch
> glacially slow (like: O(1 hour)), so it would be impossible for day to
> day development work.

Yes, somebody else was saying the same, maybe Jean?
I wasn't aware of that; I had just casually browsed the repo, where the C part doesn't look big, and assumed it was just about standing that up. It occurs to me that recompiling guile without a change in guile itself wouldn't happen very often (it'd still be set up as an external library from the lilypond repo's pov), so maybe this is not the first concern for developers, but it'd certainly be annoying for maintainers and CI, both of which I completely agree would be undesirable burdens.

Cheers,
L
-- Luca Fascione
Re: RFC on MR 1368
objects (they're "all" the pairs of glyphs)
- OT feature tables are kinda O(1) objects, with a constant that is probably glyphcount ;) Seriously though: I feel they are O(1) objects because they effectively never change, in a time-derivative sense.

Think about a task where you're adjusting serifs (maybe you're making a slab-serif variant of a sans font):
1- you'd touch all glyphs at least once (change n glyphs)
2- you'd touch all the kerning tables (change n^2 pairs)
3- OT feature tables are probably fine as they are

I'd imagine that first you'd pick a whole bunch of glyphs and goof about with their CVs for hours on end; then you'd render out all the pairs you need to inspect manually and apply kerning adjustments, and that's another several-hour task; then maybe you spend 5 minutes because there's a bug in the features table.

To me, this screams three separate file types. Besides, working in a large team, I can imagine myself as someone that could meaningfully contribute to 2 and 3, but I'd have no idea what to do about 1. So I would feel a lot safer if my changes and tests didn't risk invalidating or overlapping with work from the outline designer. And they may have interest in working on CVs and kerning tables, but maybe are not that interested in the technicalities of feature-table setup.

And these are the reasons why at this point it seems to me Werner's approach fits the workflow of the users better than Jonas's proposal.

HTH,
L

PS: One thing that I find really distracting is all these python files everywhere; if it is true these are just tables of hand-authored data, I would personally find it easier to wrap my head around a dedicated data format for them. But for what concerns the essence of this discussion, I think this is just a tangent.

On Wed, May 25, 2022 at 8:39 AM Han-Wen Nienhuys wrote:
> I have had many similarly exhausting discussions before, so I
> empathize (it is also the reason that I paused my contributions
> recently.)
> I would go with Werner's choices here; as the Freetype author, he is
> the expert on font features and technology.
>
> From the MR:
>
> > I equally object to any contribution being merged "because the author
> > knows what he's doing".
>
> I object to reviewers blocking contributions just because they have a
> strong opinion on how things should be done. In this case, Jonas has
> made 0 contributions to the MF code, so I don't think his concerns
> should be overriding.
>
> If Jonas feels really strongly about how the kerning should be
> handled, I invite him to teach himself the joys of Metafont and try
> his hand at a follow-up MR.
>
> On Wed, May 25, 2022 at 12:01 AM Werner LEMBERG wrote:
> >
> > Folks,
> >
> > Jonas and I have an intense (and very exhausting) discussion where to
> > add kerning data. I want to hear more opinions whether I should go
> > 'route one' (which I prefer) or 'route two' (which Jonas prefers).
> >
> > Please have a look at MR 1368
> >
> >     https://gitlab.com/lilypond/lilypond/-/merge_requests/1368
> >
> > and chime in.
> >
> > Werner
>
> --
> Han-Wen Nienhuys - hanw...@gmail.com - http://www.xs4all.nl/~hanwen

-- Luca Fascione
Re: RFC on MR 1368
There! Thanks Aaron!
L

On Wed, 25 May 2022, 15:34 Aaron Hill, wrote:
> On 2022-05-25 1:31 am, Luca Fascione wrote:
> > (*) is there really no way to cross-reference/link a commit comment
> > from gitlab? gah.
>
> The post's relative time (e.g. "9 hours ago") should itself be a
> hyperlink with the appropriate named anchor:
>
> [1]: https://gitlab.com/lilypond/lilypond/-/merge_requests/1368#note_959007386
>
> -- Aaron Hill
Re: PATCHES - Countdown to May 26
Also Colin, if your machine can run the PyCharm editor, that editor comes with a really handy frontend to pip built right into it, which makes this procedure super easy to do.

Luca

On Wed, May 25, 2022 at 9:56 PM Jean Abou Samra wrote:
> Le 25/05/2022 à 21:29, Colin Campbell a écrit :
> >
> > - Original Message -
> > From: Jean Abou Samra
> > To: Colin Campbell, lilypond-devel <lilypond-devel@gnu.org>
> > Cc: Dan Eble
> > Sent: Tue, 24 May 2022 23:10:39 -0600 (MDT)
> > Subject: Re: PATCHES - Countdown to May 26
> >
> > > That sounds like you tried to download the countdown.py script from the
> > > GitLab UI and downloaded the HTML web page itself instead of the raw
> > > Python script.
> >
> > We have a bingo! An embarrassing one, but definite progress.
> > Now, the error is:
> >
> > Traceback (most recent call last):
> >   File "D:\LilyPond\countdown.py", line 8, in <module>
> >     import requests
> > ModuleNotFoundError: No module named 'requests'
> >
> > FWIW, I don't use this machine for much, so I had to download a Python
> > 3.10.4 from the Microsoft Store, onto a tablet running Win 10.
>
> Yes, the script requires the requests package, which
> is not part of the Python standard library. This
> should probably do:
>
> python -m pip install --user requests
>
> Jean

-- Luca Fascione
Re: Nested segno and volta repeats
What do you mean, Thomas? When the sheet clearly indicates D.C. al Fine (Da Capo, from the beginning), why would it be normal to ignore such an explicit direction?

I wasn't aware of \repeat segno, neat thing; I've always had to do it by hand with cadenza trickeries...

L

On Sun, 29 May 2022, 10:45 Thomas Morley, wrote:
> Am So., 29. Mai 2022 um 09:56 Uhr schrieb Jean Abou Samra <j...@abou-samra.fr>:
> >
> > Hi Dan,
> >
> > Is there any way to get this repeat structure with the recent
> > repeat additions? This is from a question on the user list.
> >
> >     ||: A :||  B ||
> >       Fine   D.C. al Fine
> >
> > -> A A B A
>
> From a musician's point of view:
> I've learned not to repeat the final A here, though as a rule of thumb(!).
> A composer could be explicit by adding "con/senza repetizione".
>
> Afaict, "senza repetizione" is not supported by now.
>
> Cheers,
> Harm
>
> > My first thought was to do
> >
> > \version "2.23.10"
> >
> > m =
> > \repeat segno 2 {
> >   \repeat volta 2 {
> >     a'1
> >   }
> >   \volta 2 \fine
> >   b'1
> > }
> >
> > { \m }
> > { \unfoldRepeats \m }
> >
> > That works about fine, except that the resulting
> > structure with \unfoldRepeats is A A B A A and
> > not A A B A. What I need seems to be a kind of
> > \volta 2 \fine within the inner \repeat volta
> > that would apply \volta with the outer \repeat
> > segno. Did I miss something like that? Should
> > it be registered as a feature request?
> >
> > Best,
> > Jean
Re: Nested segno and volta repeats
Oh yes, I was taught aaba as well, definitely. Sorry; somehow I understood you to be saying you'd read it aab, you see.

L

On Sun, 29 May 2022, 13:33 Thomas Morley, wrote:
> Am So., 29. Mai 2022 um 13:25 Uhr schrieb Luca Fascione <l.fasci...@gmail.com>:
> >
> > What do you mean, Thomas? When the sheet clearly indicates D.C. al Fine
> > (Da Capo, from the beginning), why would it be normal to ignore such an
> > explicit direction?
>
> Maybe I was not clear enough. For
>
>     ||: A :||  B ||
>       Fine   D.C. al Fine
>
> The "Fine" may be regarded as ambiguous.
> By _convention_ the above is played A A B A and not A A B A A.
> But a composer could be explicit in what he wants.
> That's all I wanted to say; hope it's clearer now.
>
> Cheers,
> Harm
>
> > I wasn't aware of \repeat segno, neat thing; I've always had to do it by
> > hand with cadenza trickeries...
> >
> > L
> >
> > On Sun, 29 May 2022, 10:45 Thomas Morley, wrote:
> >> Am So., 29. Mai 2022 um 09:56 Uhr schrieb Jean Abou Samra <j...@abou-samra.fr>:
> >> >
> >> > Hi Dan,
> >> >
> >> > Is there any way to get this repeat structure with the recent
> >> > repeat additions? This is from a question on the user list.
> >> >
> >> >     ||: A :||  B ||
> >> >       Fine   D.C. al Fine
> >> >
> >> > -> A A B A
> >>
> >> From a musician's point of view:
> >> I've learned not to repeat the final A here, though as a rule of thumb(!).
> >> A composer could be explicit by adding "con/senza repetizione".
> >>
> >> Afaict, "senza repetizione" is not supported by now.
> >>
> >> Cheers,
> >> Harm
> >>
> >> > My first thought was to do
> >> >
> >> > \version "2.23.10"
> >> >
> >> > m =
> >> > \repeat segno 2 {
> >> >   \repeat volta 2 {
> >> >     a'1
> >> >   }
> >> >   \volta 2 \fine
> >> >   b'1
> >> > }
> >> >
> >> > { \m }
> >> > { \unfoldRepeats \m }
> >> >
> >> > That works about fine, except that the resulting
> >> > structure with \unfoldRepeats is A A B A A and
> >> > not A A B A.
> >> > What I need seems to be a kind of
> >> > \volta 2 \fine within the inner \repeat volta
> >> > that would apply \volta with the outer \repeat
> >> > segno. Did I miss something like that? Should
> >> > it be registered as a feature request?
> >> >
> >> > Best,
> >> > Jean
Re: Should we be touching goops?
If you look at source code implemented with one class system or the other, which one is clearer in its meaning for a user that is _moderately_ familiar with the ontology at hand?

I feel this is the more important aspect here, and I'll share what I have observed when facing a similar choice, because the answer in the end was not what I had expected at first.

I have worked a fair bit with systems that deal with geometric entities (points, planes, triangles, vectors, rays, lines, curves, that sort of stuff). In our field there are two schools of thought: the mathematicians (like me) want affine-algebra class systems (points, vectors and normals are captured by different classes), and the software engineers want just vector algebras (everything is a vector) and "you can keep it in your head what's what".

Inevitably, if you do this long enough, you end up working with both systems, and the reality is that in the overwhelming majority of cases only having vectors is fine. The reason is that in practice affine-algebra systems end up being more pedantic and in your way than they're worth. They do keep (certain) bugs away, but they cause so much extra typing, and allocations that are very difficult to optimize away reliably, that the final balance is not very good.

However, what's unbelievably confusing, and very fertile ground for difficult-to-find bugs, is mixing covariant vectors with contravariant ones (i.e. vectors and normals). We've had to fix many bugs of this kind, notwithstanding our efforts in careful and diligent naming conventions for variable and function names, to make sure we had code that looked like it was doing what it was actually doing. Folks with years of experience in the field, doing this the whole day, got caught in mistakes of this kind. And the real issue with these bugs is that they would be subtle, because the source didn't do enough to make clear to the readers what was what.
For me one lesson learned from this is: there is a cost to what you keep in your head while you're reading a piece of code, no matter how small. Your job as the designer of an ontology is to make sure that this cost is spent in a way that is most useful to the community working on the codebase. You want to maximize the usefulness of code reviews, and from this comes the observation above: folks need to be _moderately_ versant in the ontology at hand to be able to spot bugs, not deep experts. And there are several reasons for this:

- if you have this, you enlarge the group that can usefully comment on a commit during review
- in turn this means these people will participate in reviewing important code, which will help them learn how the system is put together, and in places where it actually matters
- this helps them in two ways: they learn what the system does (and where that happens), and they pick up good patterns for writing their new code
- re-applying these good patterns makes the whole codebase look more regular, which lowers cognitive overhead when you're reading code
- and all of the above creates serendipity and a very valuable self-sustaining loop (*)
- on top of it, this frees up the deep experts to work on the harder problems: when faced with a challenge, finding a place to go and drawing the path to get there is where you want to spend your money. Once the path is traced, you'll see you have a good quality ontology from the fact that walking this new path is a walk in the park for everyone else. People reading a well-thought-out solution to a hard problem should go "of course!", not "my brain hurts...".
(*) code quality goes up, folks write more relevant code because fewer bugs are introduced, bugs are caught early so the fix is not particularly involved (and the relevant code is still fresh in the author's head), folks feel like they're contributing in meaningful ways, more time is spent on new functionality instead of hammering at old material, which makes the people more satisfied and fulfilled ... you get the idea

Joel Spolsky is often quoted as saying "make wrong code look wrong" [1]. I do not agree with the specifics of how he proposes to achieve it, and in all fairness he wrote that essay a very long time ago, and for a group that had a very specific set of constraints and problems. However the sentiment comes from the same place as what I was saying: making the barrier to entry really low for folks on a bug-spotting mission, so they can do their thing, is a _fantastic_ idea.

Further to this, as I was saying before: not only do you want wrong code to look wrong, you also want code that does the same thing to look the same. And this is entirely because it makes the code easier to read for humans, who are the ones that find the difficult bugs. Leaving it to the compiler to find bugs for you is table stakes; it'll only find the easy stuff anyway. That should be your assumed starting point, not your goal: your goal is attending to the community that does what's hard, so that you make it less hard.

HTH,
L

[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-look-wrong

-- Luca Fascione
Re: Should we be touching goops?
On Sat, Jun 4, 2022 at 12:47 PM David Kastrup wrote: > LilyPond uses precise arithmetic. > Thanks David, just out of curiosity, where's a reference to the specific implementation we're using? Further, besides the floating point math segment, does the rest feel like it's on target to you? Cheers L -- Luca Fascione
Re: Should we be touching goops?
Thanks Jean, I had inferred the Guile/Scheme part; nice to have a direct reference to the rational class. As to making a big deal: I can't speak to the impact, I was just trying to contribute to the points being raised by various folks. As we want lilypond to backtrack on a decision as little as possible, it seems good that decisions are carefully analyzed, so we keep the thing as a whole cleaner and easier to grow. I won't hide that I enjoy discussing design matters in Computer Science :-) L -- Luca Fascione
Re: Should we be touching goops?
On Sun, Jun 5, 2022 at 2:12 PM Jean Abou Samra wrote: > As David already said, the part of LilyPond we're discussing is using > rationals. Furthermore, (a + b) + c being close but not equal to > a + (b + c) for floats is not really an issue for most parts of LilyPond. > Yes, agreed on all points. I'd be surprised this would make a big practical difference. The difference is there, but at worst it's one least significant bit per operation when floats are involved. It's tiny in practice. > "a + (b + c) is close but not equal to "(a + b) + c" is different > from "a + (b + c)" works whereas "(a + b) + c" errors out (in Scheme) > or doesn't compile (in C++)". > Agreed. Although in this case you'd have a being a Moment and b and c being spans, so either association would be correct (and identical in value given they're rationals). But yes, yes of course, in general. > From what I've heard, GOOPS used to be inefficient at dispatching > virtual calls. This problem is apparently gone now. > Right. > Boxing and unboxing has a certain cost, but LilyPond is not optimized > to the point that thinking about it causes significant savings. The > order of the most worthy optimizations is more high-level. > Yes, that'd be my expectation too. I think we all agree that these are good things in > any software projects. The question is whether a > given change will contribute enough to these goals > to be worth it compared to its costs and downsides. > Absolutely. L -- Luca Fascione
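The point about groupings being different for floats but identical in value for rationals is easy to see directly. A small Python illustration (not LilyPond code; `Fraction` stands in for the exact rationals discussed above):

```python
from fractions import Fraction

# With binary floats, regrouping a sum can change the last bit:
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False: 0.6000000000000001 vs 0.6

# With exact rationals, either association gives the same value,
# so the grouping choice genuinely does not matter:
ra, rb, rc = Fraction(1, 10), Fraction(1, 5), Fraction(3, 10)
print((ra + rb) + rc == ra + (rb + rc))   # True
```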
Re: Should we be touching goops?
Oh yes absolutely, the growth is normally much slower than worst case unless the addends come from really weird-ass distributions, no doubt. Round-to-even helps a lot with that.

And indeed our numbers, not coming from measurements, will in practice only have low significant bits in a handful of specific patterns (and all divides by a power of two have a lot of low significant zeroes, which further helps).

(Do you guys have a sense in practice of how rare "odd" divisor groupings are? It seems like anything that's not a triplet or maybe a quintuplet would be really rare, no?)

L

On Sun, 5 Jun 2022, 16:42 David Kastrup, wrote: > Luca Fascione writes: > > > On Sun, Jun 5, 2022 at 2:12 PM Jean Abou Samra > wrote: > > > >> As David already said, the part of LilyPond we're discussing is using > >> rationals. Furthermore, (a + b) + c being close but not equal to > >> a + (b + c) for floats is not really an issue for most parts of > LilyPond. > >> > > > > Yes, agreed on all points. I'd be surprised this would make a big > practical > > difference. > > The difference is there, but at worst it's one least significant bit per > > operation when floats are involved. > > It's tiny in practice. > > There tends to be "weak associativity" in that (((a+b)-b)+b)-b tends to > be the same as ((a+b)-b)+(b-b) in IEEE FP arithmetic using > "round-to-even" which helps a bit constraining progressive error > accumulation. > > But algebraically that isn't a lot of help, of course. > > -- > David Kastrup >
Re: Should we be touching goops?
On Sun, 5 Jun 2022, 17:39 David Kastrup, wrote: > Luca Fascione writes: > > > Oh yes absolutely, the growth is normally much slower than worse case > > unless the addends come from really weird-ass distributions, no doubt. > > Round to even helps a lot with that > > > > And indeed our numbers not coming from measurements will in practice only > > have low significant bits in a handful of specific patterns (and all > > divides by power of two have a lot lot of low significant zeroes, which > > further helps) > > There is no "low significance" for answering the question whether two > music events are simultaneous or not and could share a stem. >

Sorry, I meant that if you look at the bits of X/2^n, all the bits will be zeros from a certain point on, because dividing by a power of 2 alters the exponent only, not the significand (assuming X is a smallish number). All this to say that rounding is not tricky here.

> > (Do you guys have a sense in practice how rare "odd" divisor groupings > > are? It seems like anything that's not a triplet or maybe a > > quintuplet would be real rare, no?) > > Frequent enough that we would want to support it. Sextuplets are pretty > frequent (and differ in musical accent from identically timed triplets). >

Yes of course, but sextuplet bits and triplet bits are the same (X/6 has the same significand bits as X/3, only the exponent is one lower). So if I understand right, rounding only comes into play when you'd have strange recombinations of complicated fractions, which I'd imagine is very rare (if nothing else because it'd be hard for the musicians to read). That being said, of course rationals are just perfect for this application; I'm not suggesting we change anything, I'm just musing/geeking out.

L

> -- > David Kastrup >
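The exponent-only claim above is easy to check mechanically; a quick Python sketch (illustration only, using `math.frexp` to split a float into significand and exponent):

```python
import math

# Dividing by a power of two changes only the exponent:
# the significand of X and of X/2**n is bit-for-bit the same.
x = 7.0
m1, e1 = math.frexp(x)
m2, e2 = math.frexp(x / 2**5)
print(m1 == m2, e1 - e2)   # True 5

# Likewise X/6 vs X/3: same significand, exponent one lower,
# since dividing by 6 is dividing by 3 and then halving exactly.
m3, e3 = math.frexp(1.0 / 3.0)
m6, e6 = math.frexp(1.0 / 6.0)
print(m3 == m6, e3 - e6)   # True 1
```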
Re: Fonts missing in development environment
Hi Walter,
here's a couple more direct pointers for you:

The TeX fonts (Gyre) will be in a place like

.../texmf-dist/opentype/public/tex-gyre/texgyreschola-regular.otf

the '...' is because the stem of the path changes from system to system (I'm assuming TeX Live here).
If you find texmf-dist, the path below it should be close.

The URW++ fonts instead are on github. There are of course several ways to get at them;
here are a couple of shell commands that should get you going

- download from website
> cd /tmp/
> mkdir urw
> cd urw
> git clone --depth 1 git://git.ghostscript.com/urw-core35-fonts.git

- pack into a local archive (OTF only)
> zip urw-core35-otf.zip ./urw-core35-fonts/{LICENSE,COPYING,*.otf}

- install into the per-user fonts subdirectory (this destination directory is suitable for use on a Mac, you'll want something else on ubuntu, maybe something like ~/share/fonts)
> unzip urw-core35-otf.zip -d ~/Library/Fonts/

(the zip/unzip dance is to isolate only what you need, a judicious use of cp will do just as well of course)

HTH
Luca

On Wed, Jul 27, 2022 at 8:08 AM Jean Abou Samra wrote: > > > > Le 26 juil. 2022 à 23:36, Dan Eble a écrit : > > > > On Jul 26, 2022, at 09:11, Walter Garcia-Fontes > wrote: > >> > >> I checked, and the following packages are installed in my system: > >> > >> tex-gyre > >> texlive-fonts-extra > >> fonts-texgyre > >> > >> How can I get these missing fonts? > > > > Walter, > > > > I'm using > > https://github.com/fedelibre/LilyDev/blob/master/docker/Dockerfile > > and I don't see warnings. > > > > What about the CI image? > > > https://gitlab.com/lilypond/lilypond/-/blob/master/docker/base/Dockerfile.ubuntu-18.04 > > The build passes with it everyday, so it’s guaranteed to work. > > Either you use Docker, or you look at the list of packages it installs and > mimick it. > > Best, > Jean > > -- Luca Fascione
Re: Fonts missing in development environment
Sorry Walter, one more thing: you would then point configure at those font install locations explicitly:

../configure \
  --prefix=$HOME/usr \
  --with-texgyre-dir=<path>/texmf-dist/fonts/opentype/public/tex-gyre/ \
  --with-urwotf-dir=<path>/urw-core35-fonts/

replace '<path>' with the appropriate paths for your system

Cheers
L

On Wed, Jul 27, 2022 at 8:29 AM Luca Fascione wrote: > Hi Walter, > here's a couple more direct pointers for you: > > The TeX fonts (Gyre) will be in a place like > > .../texmf-dist/opentype/public/tex-gyre/texgyreschola-regular.otf > > the '...' is because the stem of the path changes system to system (I'm > assuming TexLive here). > If you find texmf-dist, the path below should be close. > > > The URW++ fonts instead are on github. There's of course several ways to > get at them, > this is a couple of shell script command that should get you going >- download from website > > cd /tmp/ > > mkdir urw > > cd urw > > git clone --depth 1 git://git.ghostscript.com/urw-core35-fonts.git > >- pack into local archive (OTF only) > > zip urw-core35-otf.zip ./urw-core35-fonts/{LICENSE,COPYING,*.otf} > >- install into per-user fonts subdirectory (this destination directory > is suitable for use on a Mac, you'll want something else on ubuntu, maybe > it's something like ~/share/fonts) > > unzip urw-core35-otf.zip -d ~/Library/Fonts/ > > (the zip/unzip dance is to isolate only what you need, a judicious use of > cp will do just as well of course) > > HTH > Luca > > On Wed, Jul 27, 2022 at 8:08 AM Jean Abou Samra > wrote: > >> >> >> > Le 26 juil. 2022 à 23:36, Dan Eble a écrit : >> > >> > On Jul 26, 2022, at 09:11, Walter Garcia-Fontes >> wrote: >> >> >> >> I checked, and the following packages are installed in my system: >> >> >> >> tex-gyre >> >> texlive-fonts-extra >> >> fonts-texgyre >> >> >> >> How can I get these missing fonts? >> > >> > Walter, >> > >> > I'm using >> > https://github.com/fedelibre/LilyDev/blob/master/docker/Dockerfile >> > and I don't see warnings.
>> >> >> >> What about the CI image? >> >> >> https://gitlab.com/lilypond/lilypond/-/blob/master/docker/base/Dockerfile.ubuntu-18.04 >> >> The build passes with it everyday, so it’s guaranteed to work. >> >> Either you use Docker, or you look at the list of packages it installs >> and mimick it. >> >> Best, >> Jean >> >> > > -- > Luca Fascione > > -- Luca Fascione
Re: Replacing fixcc.py with clang-format?
Side thought: if your CPP code is complex, indenting it helps readability a lot; here's a goofy example

#if CONDITION
#  define AMACRO 6
#  include "some/file.h"
#else
#  if WIN32
#    include "something/else.h"
#  elif MACOSX
#    include "the/darwin/version.h"
#  endif
#endif

I haven't seen clang-format configurations that do this before, but on my code I'd have it leave CPP code alone if it didn't do this.

L

On Tue, 6 Sep 2022, 20:56 Jonas Hahnfeld via Discussions on LilyPond development, wrote: > On Tue, 2022-09-06 at 18:46 +0200, Jean Abou Samra wrote: > > There's one thing I'd like to discuss now, for the reformatting round > > before we do the branching. > > (Yes, I didn't manage yet to propose the reformatting. I'd like to do > this after 2.23.13 is released. As a reminder, the idea is that we > introduce a bit of churn before branching, but then don't get into > trouble when backporting fixes to the branch.) > > > I would like to propose moving to clang-format as the canonical > > formatting tool and removing fixcc.py. This will decrease our > > maintenance burden. fixcc.py is more than 600 lines of code to > > maintain. > > In general I'm not opposed. However, please note that clang-format is > sometimes not stable over time, ie new versions can change formatting. > Most projects use git-clang-format to incrementally format only the > lines touched by commits. In LilyPond, the current practice is to > format all code files, so we will experience this to its fullest. I > don't know if that's going to be a problem, but I thought I'd mention > it. > > > These days, git blame supports the --ignore-rev option, which makes > > this less painful for future code historians. > > We can also have a .git-blame-ignore-revs in the repository. This > already works on GitHub, and there is an open issue to also support it > in GitLab: https://gitlab.com/gitlab-org/gitlab/-/issues/31423 > > > It will obviously still produce merge conflicts with existing WIP > branches.
> > Usually branches can be "updated" by also running the (same) formatting > tool on the changes, and then either git manages to resolve things > itself or you can "port" the diff. > >
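For what it's worth, more recent clang-format versions do have an option along these lines: `IndentPPDirectives`, which to the best of my knowledge has shipped since clang-format 6. A minimal `.clang-format` fragment would be something like (a sketch, assuming clang-format >= 6):

```yaml
# Indent nested preprocessor directives after the '#',
# roughly in the style of the goofy example in this thread.
IndentPPDirectives: AfterHash
```

Whether it leaves already-unindented CPP alone is a separate question; `None` (the default) disables the reindentation entirely.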
Re: MacOS release help
Besides, whereas Frescobaldi is a Lilypond editor (and thereby requires it and depends on it), Lilypond is not a Frescobaldi compiler; the dependency doesn't go in the other direction. So the Lilypond installer shouldn't know about Frescobaldi.

Further, a package with no GUI elements doesn't bother me at all. The thing to do would be to inspect how similar tools are distributed (gcc, clang, web servers, ftp servers, this kind of "service" stuff) and draw inspiration from those.

L

On Tue, 18 Oct 2022, 17:59 Jean Abou Samra, wrote: > Le 18/10/2022 à 01:05, Carl Sorensen a écrit : > > In my opinion, we want to have > > a) an installer for Frescobaldi that could install LilyPond if > > desired, and > > b) an installer for LilyPond that could install Frescobaldi if desired. > > > Why would we need both? I think a) is the most sensible, because all > Frescobaldi users need LilyPond, while not all LilyPond users need > Frescobaldi. > > > Jean > >
Re: MacOS release help
I agree strongly with this, yes On Tue, 18 Oct 2022, 18:14 Jean Abou Samra, wrote: > Le 18/10/2022 à 08:12, Alex Harker a écrit : > > > > > >> On 18 Oct 2022, at 00:05, Carl Sorensen > >> wrote: > >> > >> IMO, what we most want is an app bundle that can be easily relocated > >> anywhere and that provides all of the binaries used by LilyPond. > >> Frescobaldi can be pointed at that app bundle to run LilyPond. > >> > >> I recognize that most apps have a GUI. But it's not strictly > >> necessary to have a GUI in the app bundle, if I understand correctly. > > > > I can’t be certain on whether the GUI is strictly necessary, because > > I’ve never considered the alternative, but an app bundle with no GUI > > is not something I’ve ever seen on MacOS, so I would not advise making > > one. > > > > However, the notion of a MacOS package on Mac is more general than an > > app bundle, and is simply a folder that has some metadata. The > > contents of the folder can be whatever you want, whereas an app bundle > > implies other things (like it will launch when double clicked and I > > think the bundle structure is expected to follow a given pattern). If > > what is required is just a single ’thing’ (as far as most users are > > concerned) then a package (but not a app) might be most appropriate. > > They can be used for anything where a bunch of structured resources > > should be kept together (some apps use them for documents, for > > instance, such as Logic Pro X - which allows it to keep a bunch of > > audio files inside something that looks like a ‘file'). > > > > The downside of the package approach would be usage entirely on the > > command line, although the barrier may be too small to be considered > > relevant. In the terminal packages act just folders and you can cd > > into them. In the finder you can also look inside them, but need to > > explicitly open them with a right-click contextual menu selection to > > ’Show Package Contents’. 
Most end users are unaware of this and see > > packages as if they were opaque files. I also don’t know how a package > > approach would operate if someone wanted to install to usr/local or > > similar in order to be able to run lilypond binaries without having to > > type the full location - I can take a look at that. > > > > I am starting to think that since Frescobaldi is the most complete > and beginner-friendly LilyPond environment out there, having a good > installing experience for Frescobaldi and LilyPond together would make > the installing experience for LilyPond without Frescobaldi much less > relevant. > > In particular, > > - as I said earlier, it would be great to have a .dmg for Frescobaldi 3.2, > > - Frescobaldi could gain an interface for easily installing various >LilyPond versions, and the first launch of Frescobaldi could just >open this interface. > >It seems that the old Frescobaldi 1 actually had this. It corresponds to >https://github.com/frescobaldi/frescobaldi/issues/313 > > > Jean > > >
Re: Potential LSR licensing violations
On Thu, Oct 20, 2022 at 7:57 AM Jean Abou Samra wrote: > This is not correct, since copyright doesn't > exist for something in the public domain (as opposed to something released > under a permissive license). So the file headers need not mention any > copyright at all, if the code is unmodified. >

If I may propose a thought, I suspect it would probably be wisest if they asserted their public-domain status explicitly. If nothing else this will avoid future questions. I've seen folks use statements like "This file was originally written in 2013 by Ay B. Cee, and he hereby places it in the public domain".

Disclaimer: although I have been part of extensive discussions on this topic, I am not a lawyer, and my words do not constitute legal advice.

L

-- Luca Fascione
Re: Potential LSR licensing violations
On Thu, Oct 20, 2022 at 7:40 AM Jean Abou Samra wrote: > Le 20/10/2022 à 07:22, Werner LEMBERG a écrit : > > It would be a problem if we assigned copyright to the FSF. > As you mentioned below, we don't do this. > > > [*] Here comes the benefit of transferring the copyright to the FSF, > > which can handle such things without having to ask the original > > author AFAIK. LilyPond, however, inspite of being a GNU project, > > doesn't ask contributors for such a copyright transfer. >

I would think a more sustainable way forward is to assign the copyright of contributions to the Lilypond project itself (or a similar entity, in charge of the project but not linked to the identity of one or more specific individuals). Some folks use a statement like "Copyright 2012, 2016-2019 The contributors of the Lilypond Project", for example.

This has two kinds of advantages. One is that in instances like this, where it becomes sensible to re-license some content, this can be done in a way that is transparent and doesn't necessitate tracking down specific individuals. (At the moment this list is where these discussions would happen, so the archives will provide a means to track down when and how a given decision was made.)

The other advantage is that it provides better insulation for the individual contributing persons against non-benevolent external parties that might show up to assert rights they might think they have (rightfully or not). A classic example would be patent rights infringement. Although Lilypond is not a commercial project, nor is it a particularly big one (so it's unlikely to attract attention from unsavory characters), I do feel it would be a good ethical standard for the project managers and owners to try and insulate the contributors from potential unpleasantness.

I repeat my disclaimer: although I have been part of extensive discussions on this topic, I am not a lawyer, and my words do not constitute legal advice.

Luca

-- Luca Fascione
Re: Potential LSR licensing violations
On Thu, Oct 20, 2022 at 9:07 AM Jean Abou Samra wrote: > Anyway, this discussion is academical. It would have practical > relevance if we were creating the project today. > `git shortlog -s | wc -l` tells that there have been > 236 contributors to the project. We cannot ask each of > them to assign copyright to the LilyPond foundation even > if we were to create it. > >

To a point, though: first off, if you start the change, you embark on a path that will eventually take you to a better place. Second, although there are 236 contributors, how many have contributed code that is still alive (and needed)?

I'm saying that I don't agree with your statement that we cannot ask 236 people to assign copyright. It seems to me it's far from impossible to send a couple hundred emails, filter the responses and blast out a code update. Of the X remaining (80?) we can analyze the impact of the corresponding code and proceed with a decision (a part of the project is in a difficult spot, code is rewritten, functionality turns out to be buggy or dead; many scenarios are possible).

Besides, if a foundation in itself is needed, it doesn't seem impossible to get one going, does it? One thing that seems certain to me is that doing nothing guarantees there will be no change.

L

-- Luca Fascione
Re: Potential LSR licensing violations
Or you remove it, or you reimplement it.

I think having GPL content in the lsr is the least desirable in the long term, because either folks using it won't notice, or they might find themselves unable or unwilling to use GPL as part of their content.

I'm not clear what it means to have GPL source in a sheet of which you have the pdf; it would seem to imply you'd have access to the whole Lilypond source for it, maybe, if you asked for it. A publisher might be unwilling to accept such terms, maybe.

L

On Thu, 20 Oct 2022, 12:45 Jean Abou Samra, wrote: > > And there aren't many solutions. Either we get permission from > the copyright owners to release it to the public domain, or > we release it under the GPL instead of the public domain. > > > Jean > >
Re: Potential LSR licensing violations
Hum. It seems to me this is greyer than what you say.

gcc transforms program.c into a.out. Your access to a.out gives you rights to access program.c.

s/gcc/lilypond/; s/program.c/score.ly/; s/a.out/out.pdf/;

I see very little difference. More importantly, what would lawyers and judges from various legislative systems think about this? Our opinion counts up to a point (which is very insignificant). I suspect it's not as clear-cut as you make it.

I am not a lawyer either. This message is not legal advice.

L

On Thu, 20 Oct 2022, 13:47 Jean Abou Samra, wrote: > > Le 20/10/2022 12:59 CEST, Luca Fascione a écrit : > > > > > > Or you remove it, or you reimplement it > > > Well yes. > > > > I think having GPL content in the lsr is the least desirable in the long > term, because either folks using it won't notice, or they might find > themselves unable or unwilling to use GPL as part of their content. > > > Perhaps. > > > > I'm not clear what it means to have GPL source in a sheet of which you > have the pdf, it would seem to imply you'd have access to the whole > Lilypond source for it, maybe, if you asked for it. A publisher might be > unwilling to accept such terms, maybe > > > No; the GPL puts no restrictions on the output of the program, > only on the program itself and modified versions (and compiled > versions of it, but I really don't think compiling to PDF would > count, because the purpose of a PDF is to be viewed, not to be > executed like an executable produced by a C compiler). Cf. > > https://www.gnu.org/licenses/gpl-faq.html#WhatCaseIsOutputGPL > > LilyPond does embed a tagline, but it's so short you'd have trouble > claiming copyright on its text. The only thing in the output PDF > that could be considered copyrighted from LilyPond is the glyphs > from the Emmentaler font, and this is covered in the LICENSE file: > > * The files under mf/ form a font, and this font is dual-licensed > under the GPL+Font exception and the SIL Open Font License (OFL).
> A copy of the OFL is in the file LICENSE.OFL. > > The font exception for the GPL stipulates the following exception: > > If you create a document which uses fonts included in LilyPond, > and embed this font or unaltered portions of this font into the > document, then this font does not by itself cause the resulting > document to be covered by the GNU General Public License. This > exception does not however invalidate any other reasons why the > document might be covered by the GNU General Public License. > If you modify one or more of the fonts, you may extend this > exception to your version of the fonts but you are not obliged > to do so. If you do not wish to do so, delete this exception > statement from your version. > > > In other words, everything is done properly so that an output PDF > from LilyPond is not covered by the GPL. > > However, if you use the -dembed-source-code option to embed your > source in the PDF, then the source remains under whatever license > you distribute it, independently from the graphical content of the > PDF. If it's adapted from source code found in LilyPond, it must be > GPL. > > IANAL (I should have said this on all my previous messages) >
Re: Potential LSR licensing violations
To be clear: the potential issue I see is when the score or some of the headers it includes are GPL licensed, of course. Now of course the boundary between 'score' and 'lilypond plugin' in our case is particularly blurry, but still, it seems the question is germane to the discussion at hand. L On Thu, Oct 20, 2022 at 1:56 PM Luca Fascione wrote: > Hum. It seems to me this is greyer that what you say. > > gcc transforms program.c into a.out > > Your access to a.out gives you rights to access program.c > > s/gcc/lilypond/; s/program.c/score.ly/; s/a.out/out.pdf/; > > I see very little difference. > > More importantly, what would lawyers and judges from various legislative > systems think about this? Our opinion counts up to a point (which is very > insignificant). > > I suspect it's not as clear cut as you make it. > > I am not a lawyer either. This message is not legal advice > > L > -- Luca Fascione
Re: Potential LSR licensing violations
On Thu, Oct 20, 2022 at 1:47 PM Jean Abou Samra wrote: > > Le 20/10/2022 12:59 CEST, Luca Fascione a écrit : > > I think having GPL content in the lsr is the least desirable in the long > term, because either folks using it won't notice, or they might find > themselves unable or unwilling to use GPL as part of their content. > This came out too strong. I meant "there is a possibility/risk that folks might" L -- Luca Fascione
Re: Potential LSR licensing violations
On Fri, Oct 21, 2022 at 1:00 PM Jean Abou Samra wrote: > Le 20/10/2022 à 15:46, Luca Fascione a écrit : > > To be clear: the potential issue I see is when the score or some of > > the headers it includes are GPL licensed, of course. > > Now of course the boundary between 'score' and 'lilypond plugin' in > > our case is particularly blurry, but still, it seems the > > question is germane to the discussion at hand. > > IMHO, such an interpretation by a court is unlikely. The truth, > as with a number of legal things, is that we will never know for > sure, because (with probability close to 1) no court will ever have > to settle such a case. >

Well, the TeX people say that if the style file is GPL, the entire document is GPL, look: https://opensource.stackexchange.com/questions/2735/gpl-licensed-latex-template-implications-for-resulting-work

So I think this constitutes evidence that that interpretation is actually the accepted one. As to whether this would ever be in court, I agree it is not too likely. All the same, if it did happen, I would not want to be a cause for well-meaning folks to be dragged into displeasing circumstances.

I feel it was absolutely brilliant how Jan resolved the issue, showing that getting new permissions may actually not be that hard in practice after all.

Luca

-- Luca Fascione
Re: procedure to check equality of list-elements
Note the detail that (+ a b c) and (eq? a b c) don't do the exact same thing:

  (+ a b c)    is equivalent to  (a + b) + c
  (eq? a b c)  is equivalent to  (a == b) && (b == c)

The list form has short-circuiting if I remember right (eq? bails out on the first false it finds), but I don't remember how evaluation works for the arguments, in terms of what side effects are meant to be observable when the early-out happens.

L
>
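For comparison (Python rather than Scheme, purely as an illustration of the short-circuit question): Python's chained comparison `a == b == c` is defined to evaluate each operand at most once and to stop as soon as one comparison is false, so the side effects of later operands simply never happen:

```python
calls = []

def val(x, name):
    # record that this operand actually got evaluated
    calls.append(name)
    return x

# a == b == c means (a == b) and (b == c), with b evaluated once;
# since val(1) != val(2), the third operand is never evaluated.
result = val(1, "a") == val(2, "b") == val(3, "c")
print(result, calls)   # False ['a', 'b']
```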
Re: procedure to check equality of list-elements
Good to know, thanks Jean! L >
Re: Prefer luatex for documentation
Luatex is always available with modern tex distros (say at least 5 yrs probably more). In fact pdftex _is_ luatex... I feel texlive is a stable enough bet for people... L On Sat, 19 Nov 2022, 22:15 Jonas Hahnfeld via Discussions on LilyPond development, wrote: > On Sat, 2022-11-19 at 10:19 +, Werner LEMBERG wrote: > > In https://gitlab.com/lilypond/lilypond/-/merge_requests/1714 I > > suggest that we prefer luatex for building the documentation. What > > do people think? > > What I'm missing here is the bigger picture: Are we going to continue > adding support and switching between TeX engines one after the other? > If we prefer LuaTeX, should we stop looking for XeTeX? (As mentioned in > the merge request, we want pdfTeX because it's fast and included by > default in Ubuntu's texlive-bin / the Docker images). > > > The main advantage of using luatex is complete microtype support; > > this was activated recently in `texinfo.tex`, and XeTeX doesn't > > support it in its entirety, lacking font expansion. > > But pdfTeX does support font expansion, right? Reading through the > 'microtype' package documentation, it reads as if all of this comes > from pdfTeX... > > > The microtype feature yields (a) less underfull lines (i.e., less > > lines with overly large inter-word spaces), (b) less hyphenation, and > > (c) a better 'grayness' of the pages, thus increasing legibility. > > While (c) is not a big issue with technical documentation, (a) has > > quite an impact IMHO, and (b) is valuable since it is always a good > > thing to avoid hyphenation with keywords and the like because there > > might be misunderstandings whether the hyphen is part of the keyword > > or due to the line break. > > Do you have an example for this? As I wrote on the merge request, I've > been looking through the PDFs you provided, and it's really hard to > find places where this actually makes a difference... 
> > So in general I have the feeling that this doesn't bring us much, but > just keeps adding more checks to our configure and more choices / > configurations to test on a somewhat regular basis. I'm not really in > favor. > > Jonas > >
Re: Prefer luatex for documentation
I would have sworn one of the TUG updates on the state of luatex stated that pdfTeX had been frozen for several years and that at some point it was decided that "from then on" it would be implemented in terms of luatex instead. I must admit I never fully understood how this was mechanically possible, in that pdftex has built-ins that are not available in luatex... Obviously I can't find that document any longer. I'll investigate, now I'm curious. In any event, the point was more that luatex has been stable and very usable for several years now and that it's readily available in moderately recent TeX distributions. Sorry for spreading misinformation. L

On Sun, Nov 20, 2022 at 6:28 AM Werner LEMBERG wrote: > > > Luatex is always available with modern tex distros (say at least 5 > > yrs probably more). In fact pdftex _is_ luatex... > > ??? Definitely not. > > > -- Luca Fascione
Re: Prefer luatex for documentation
I must say I don't understand this discussion. If we as the developers of Lilypond recommend users move on to lilypond-next, shouldn't we also keep current with the versions of the software around us? After all, we upgrade platforms, compilers, Python interpreters, Guile interpreters and all that; what exactly is it that makes the TeX engine any different? LuaTeX is the "current" engine (actually it's locked and LuaTeX-HB is the next version up), pdfTeX was frozen several years ago (like 10 or something).

Also, I find it disheartening, and a big part of why I myself lost a fair bit of steam in contributing, how the criticism towards Werner's work comes across as harsh and short-tempered. He's gone ahead and provided working source code for a result that is clearly an improvement and a step forward (small for some, more material for others, including me); what exactly is the reason for rejecting this? I read arguments saying that it creates some "real work" to support however many versions of TeX. First off, it seems to me it's mostly on Werner's shoulders where this work would fall anyway; besides, if that were a real concern, why can't we just constructively help him drop the _oldest_ supported engine, so that we actually keep moving forward? This would have two benefits: one, of some importance, to reduce dependencies on old codebases and products, which will inevitably go stale over time or expose you to risks of missing features; the other, more important in my view, to gratify and support the work of a contributor, who will feel valued as part of this group.

I thought for a while about whether I should write to the list, because I lack evidence that these posts of mine provide much value. All the same, seeing useful work shot down like this, without a single voice in support of it, felt out of place to me.
I'd think that, as a community of developers working towards a common goal, we should all support each other's work and initiatives, and offer better reasons for not changing than "friction" and "I can't bear the idea of yet another upgrade, I'm too tired for that". Lastly, I must say I find it unbelievably surprising that a group of developers writing software for (musical) typography shows such a complete lack of interest in (literary) typography. It's really jarring in my eyes. L

On Mon, Nov 21, 2022 at 7:00 AM Werner LEMBERG wrote: > > >> There are a bunch of LaTeX packages that only work with a specific > >> TeX engine, and which need special input code for that. For > >> example, `fontspec` (with its excellent OpenType support) only > >> works with XeTeX and luatex. Or think of 'lyluatex', which > >> obviously needs luatex. > > > > Yes, absolutely. This is exactly why I am surprised that some people > > set global environment variables that select a TeX engine "to always > > use the same". What tools do they have an effect on? > > I forgot to mention that *all* TeX flavours understand 'normal' TeX > and LaTeX code that was written for the original TeX incarnation. If > you only work with such code and don't have to or don't want to deal > with extensions like OpenType font handling, it often makes sense to > replace `pdftex` with `xetex` or `luatex` since the latter two > programs usually produce *much* smaller PDF files. > > > Werner > > -- Luca Fascione
Re: Prefer luatex for documentation
On Mon, 21 Nov 2022, 13:34 Jean Abou Samra, wrote: > > build problems are fixed by developers, not users, sometimes very > painfully, and using time that they could spend on other tasks. > If Werner's change breaks the build, surely he'll be the first one to argue it's on him to fix it (possibly with help to learn how to fix whatever he doesn't know how to fix), no? Besides, you can always spend time on other tasks, but sometimes you do fun things and sometimes you do chores, that's just life, isn't it... Entropy is a problem, especially in software, and this kind of upgrade and tidy-up activity is more on the chore side. This is why we all tend to do this when there is also some more material benefit, to sweeten the boredom of the chore side of it... And to what Wols said: I agree completely, it should be _all_ about the users, coding is an act of service, not self-gratification. The joy comes from making (other) humans happy, not compilers... L
Re: Prefer luatex for documentation
Sorry, luatex is like 10yrs old, what's the need for xetex again? Maybe I could justify pdftex (I really don't quite see it, but maybe) but xetex seems just arbitrary... Or do you mean for a transition period? What's the oldest system that this Lilypond would be used on? What's the youngest texlive that will run on that system? That's your tex distro of reference. L On Mon, 21 Nov 2022, 13:43 Jean Abou Samra, wrote: > > > Le 21 nov. 2022 à 13:36, Werner LEMBERG a écrit : > > > > To clarify what Jean wants if he mentions 'maintenance burden': We are > > talking about a potential change checking for all three TeX variants > > in the `configure` script, or rather, what TeX versions should be > > tested against. > > > > The patch for such a decision is trivial. It doesn't mean that it > > would no longer be possible to build the documentation for a given TeX > > flavour, it is just that `xetex` (or `pdftex`) would be no longer in > > the toolchain for CI and building the final version of the > > documentation. > > > > Well, things that don’t get tested have a tendency to get broken over > time, but indeed it wouldn’t mean preventing from compiling with XeTeX in > the immediate future. > > It would mean that once XeTeX does break, we don’t get “hello, I am a new > contributor / a distro packager, and your doc build is failing for me with > this obscure error: …”. > > Jean > > > > >
Re: Prefer luatex for documentation
On Mon, Nov 21, 2022 at 2:05 PM Jean Abou Samra wrote: > > Le 21 nov. 2022 à 13:46, Luca Fascione a écrit : > > Sorry, luatex is like 10yrs old, what's the need for xetex again? > Are you asking this to me (judging from To:/Cc:)? I don’t see one. > No, I was asking the group in general. Editing emails on my phone is... obnoxious. > > Maybe I could justify pdftex (I really don't quite see it, but maybe) > As said on the MR, it is faster and makes for lighter CI Docker images. > Oh, luatex is definitely slower, but still, is it slower enough that it makes a real difference? I'd imagine for package maintainers and developers it hardly matters; it might slow down the doc folks if they build the TeX stuff a lot, but I'd like to hear from them anyway before jumping to that conclusion. -- Luca Fascione
Re: Prefer luatex for documentation
On Mon, Nov 21, 2022 at 2:23 PM Werner LEMBERG wrote: > > > Sorry, luatex is like 10yrs old, what's the need for xetex again? > > Some issues that potentially speak against using luatex: > > * LuaTeX's OpenType support is still in flux and sometimes buggy. The > future is probably luatex-hb, using the 'HarfBuzz' library for > OpenType font handling. > Yes, that seems to be their opinion as well. > * The main target of LuaTeX is not LaTeX but ConTeXt, which means that > some features (speak: extensions) are probably not as much tested. > As one data point, the stuff I build myself is substantially more complex than the luatex documentation (lots of maths), minus of course the truckloads of inlined mini-PDFs the docs have because they include the music (which I believe can be done more tightly in luatex, much as one would use pygmentize, but we digress). Although they do target ConTeXt, I can't say that LaTeX runs poorly on luatex; actually it seems to me it runs just fine. > * AFAIK, `luatex` is *much* slower than `pdftex`. > I'd say that for an 80-100 page document it's maybe somewhere between 50% and twice as slow. Still, it's a build of several seconds that becomes several more seconds. I'm not sure it crosses important workflow thresholds [1], it certainly didn't in my own use. [1] http://enderton.org/eric/pub/workflow.pdf > > Maybe I could justify pdftex (I really don't quite see it, but > > maybe) but xetex seems just arbitrary... Or do you mean for a > > transition period? > > We changed to XeTeX because pdfTeX produces invalid PDF outlines if > non-ASCII characters are involved. This is not a problem with pdfTeX > itself but due to lack of support in `texinfo.tex`. At that time of > the switch, LuaTeX support wasn't ready – there was a `luatex` bug > that stalled further work for two months or so (until someone > suggested a workaround, see MR !1740). > OK, but then are you saying pdfTeX is not usable today, and it's either XeTeX or LuaTeX today?
> > What's the oldest system that this Lilypond would be used on? > > What's the youngest texlive that will run on that system? That's > > your tex distro of reference. > > TeXLive runs on virtually *all* systems, even old ones based on the > i386 chips. This means there is no useful answer, AFAICS. > Au contraire: it means you can ask anybody that builds our docs to upgrade their tex distro to a new one, and they'll have a working LuaTeX "no matter what system they use otherwise". Which seems to me is a very useful answer (it removes one constraint, I guess). -- Luca Fascione
Re: Prefer luatex for documentation
On Mon, Nov 21, 2022 at 2:06 PM Werner LEMBERG wrote: > The thing is: Something might happen if I'm not available, for > whatever reasons. It definitely *is* a high maintenance cost if a > single developer is responsible... > But that's true of any one feature: I build you a nice template library to do something, and you find a bug while I'm away. That can always happen, we know this, we cope with it. How's the TeX/texinfo build any different? > > And to what Wols said: I agree completely, it should be _all_ about > > the users, coding is an act of service, not self-gratification. The > > joy comes from making (other) humans happy, not compilers... > > I disagree, it is *not* all about the users. There must be a balance > between what the developers want to do or can do, and what the users > expect. Promising stuff to the user, which later on fails due to the > lack of developer resources, is bad. > Forgive me Werner, but it appears to me your own closing point is actually precisely about the users, isn't it? If I may paraphrase what I hear you say: "The users are being disserviced (is that a word?), thereby this is bad". I agree, of course. This IS bad. The users are being provided with an unsatisfactory experience, and that is undesirable. And it seems to me you're concerned in the same way as I am: our role here is in service of a community that engraves music sheets. When we stop these people, impede their progress, make their planning invalid (possibly because we're not delivering what we promised), we are behaving poorly towards them. I share that concern, and I think it's a very ethically sound concern to have, it's an important thing to worry about, I'd say. I'd characterize it as a user-focused concern, no? L -- Luca Fascione
Re: pygment regex question
Well, `-3` seems to be matching it (say in `a-3`; I'm aware this is a fingering/articulation mark, not a duration). It appears to be an attempt to match a signed integer followed by zero or more dots. It sucks that pygments regexes are context-free, though. This should be using regex capturing and be more like `[a-g]((?:\d+|\\longa|\\breve)\.*)`, or better yet `[a-g]((?:2|4|8|16|32|64|128|\\longa|\\breve)\.*)`; it's not like `a5` is a valid token... L

On Fri, Nov 25, 2022 at 2:24 PM Werner LEMBERG wrote: > > Looking into `lilypond.py` (in `pygments.zip`), I wonder what exactly > this regex does: > > ``` > # Integer, or duration with optional augmentation dots. We have no > # way to distinguish these, so we highlight them all as numbers. > (r"-?(\d+|\\longa|\\breve)\.*", Token.Number), > ``` > > What is `-?` good for? Note that at the time this regex is active, > numbers are taken care of. > > > Werner > > -- Luca Fascione
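[Editor's note: a small sketch with Python's `re` module to make the two behaviours concrete; `loose` is the pattern quoted from `lilypond.py`, and `anchored` is the capturing variant Luca suggests, using `\d+` for brevity.]

```python
import re

# The pattern quoted from pygments' lilypond.py: optional sign, then
# digits or \longa/\breve, then optional augmentation dots.
loose = re.compile(r"-?(\d+|\\longa|\\breve)\.*")

# The capturing variant: only match when the duration directly follows
# a note name, and capture just the duration part in group 1.
anchored = re.compile(r"[a-g]((?:\d+|\\longa|\\breve)\.*)")

print(loose.search("a-3").group(0))       # "-3": the fingering matches too
print(loose.search(r"\breve.").group(0))  # "\breve."
print(anchored.search("a-3"))             # None: fingerings no longer match
print(anchored.search("a4.").group(1))    # "4.": duration with one dot
```

Note that in an actual Pygments rule the note name consumed by `[a-g]` would have to be emitted as its own token (e.g. via `pygments.lexer.bygroups`), since the lexer consumes whatever the whole regex matches.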
Re: pygment regex question
It's not a validation, it's an anchor: it keeps the regex from matching other numbers. That's why the capture. If pygments were better designed, it'd let you do semi-context-sensitive stuff like this, so you could say "numbers, but only if they follow a note name" -> durations. L

On Fri, 25 Nov 2022, 17:52 Werner LEMBERG, wrote: > > > well -3 seems to be matching it, (say in a-3, I'm aware this is a > > fingering/articulation mark, not a duration). It appears to be an > > attempt to match a signed integer followed by zero or more dots. > > The thing is that the regular expressions match both LilyPond and > Scheme syntax. > > > It sucks that pygments regexes are context free, though. This > > should be using regex capturing and be more like > > `[a-g]((?:\d+|\\longa|\\breve)\.*)` or better yet be more like > > `[a-g]((?:2|4|8|16|32|64|128|\\longa|\\breve)\.*)`, it's not like a5 > > is a valid token... > > I don't think stuff like `ag` is a problem – it's not the job of > pygments to validate LilyPond input. > > > Werner >
Re: pygment regex question
I agree this would be a better regex, yes. (You still have that double re: thing in the subject going on, Werner) L On Fri, 25 Nov 2022, 17:55 Werner LEMBERG, wrote: > >> Note that at the time this regex is active, numbers are taken care > >> of. > > > > Floats are, integers not. > > OK, but shouldn't this be rather > > ``` > (-?\d+|\\longa|\\breve)\.* > ``` > > then? > > > Werner > >
Re: pygment regex question
On Fri, 25 Nov 2022, 18:11 Jean Abou Samra, wrote: > What makes you think Pygments can’t do this? You can do > > (?<=\w+)\d+ > Nothing but my not remembering lookaheads/lookbehinds, which I may argue aren't very common constructs. In fact, aside from Perl I'm not even sure what precedent they have (no, Python doesn't count). Besides, this has nothing to do with pygments; this is the regex matching engine doing its thing, pygments just gratefully receives the benefit. > and things like that. You could also arrange so that the regex parsing a > pitch leaves you in a state of the lexer where something special will > happen for \d+ This does sound like pygments code. Interesting, I wasn't aware you could mess with the state of the lexer to that depth. However, durations don’t always follow a pitch, as in > > \tuplet 3/2 8. { … } > > which is the reason why we don’t want to do that. > Does Lilypond's parser even know that's a duration? Isn't that just a bare string that \tuplet internally interprets as a duration? When implementing this kind of simplistic syntax highlighting (that is, highlighting not assisted by awareness of the semantics of the language, like you'd have in Visual Studio or Qt Creator, say), there's always this problem of how much of the common libraries you reimplement by hand. I'm not sure how Frescobaldi does its thing, for example; a lot of it seems quite magic to me (or the result of a huge labour of love... I mean, that program is just brilliant). Anyway, whatever Frescobaldi does, I wonder if we could mimic it for Pygments? L
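[Editor's note: for what it's worth, the lookbehind idea can be sketched directly with Python's `re`, the engine Pygments relies on. Note that `re` only accepts fixed-width lookbehinds, so the anchor below is a single character class rather than the `\w+` quoted above.]

```python
import re

# Match a duration only when the previous character is a note name.
# (Python's re rejects variable-width lookbehinds such as (?<=\w+),
# so the anchor must have a fixed width.)
duration = re.compile(r"(?<=[a-g])(?:\d+|\\longa|\\breve)\.*")

print(duration.search("c4.").group(0))  # "4."
print(duration.search("g8").group(0))   # "8"
# Jean's counterexample: none of these durations follows a pitch, so
# the anchored pattern misses all of them.
print(duration.search(r"\tuplet 3/2 8. { }"))  # None
```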
Re: Color variables/symbols
Why the difference in value? Red is 10% off, green more like 30%? What's up with that? L On Sat, 26 Nov 2022, 19:21 Werner LEMBERG, wrote: > > >>> lukas@Aquarium:~/git/lilypond/scm(master)$ git grep darkred > >>> color.scm:(darkred 0.54509803921568623 0 0) > >>> output-lib.scm:(define-public darkred '(0.5 0.0 0.0)) > >>> > >>> lukas@Aquarium:~/git/lilypond/scm(master)$ git grep darkgreen > >>> color.scm:(darkgreen 0 0.39215686274509803 0) > >>> output-lib.scm:(define-public darkgreen '(0.0 0.5 0.0)) > >>> > >>> @everyone: Is this as it should be? > >> Yes. The non-public entries are only accessible with `(x11-color)` > >> or `css-color`. > > > > I'm not sure that's true. What made me stumble is the following > > behaviour: > > > > \version "2.23.10" > > > > { > > \override NoteHead.color = darkgreen > > c'4 > > \override NoteHead.color = #darkgreen > > c'4 > > } > > > > creating: > > > > I'm not sure this is ideal. > > Oh, oh. I agree. However, explaining and fixing this is beyond my > Scheme abilities... > > >Werner >
Re: Color variables/symbols
Indeed, you had even said so before. Thanks, Werner. L

On Sat, 26 Nov 2022, 21:55 Werner LEMBERG, wrote: > > > Why the difference in value? Red is 10% off, green more like 30%? > > Different standards (terminal colors vs. X11/CSS): identical names but > different colours. > > >Werner >
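[Editor's note: Werner's explanation can be checked numerically. This sketch assumes the `color.scm` values come from the X11/CSS definitions, where DarkRed is #8B0000 = (139, 0, 0) and DarkGreen is #006400 = (0, 100, 0), while `output-lib.scm` uses a flat 0.5 per channel.]

```python
# X11/CSS assign different 8-bit values to the "dark" variant of each
# colour, which is why darkred and darkgreen deviate from 0.5 by
# different amounts.
x11 = {"darkred": (139, 0, 0), "darkgreen": (0, 100, 0)}

for name, rgb in x11.items():
    print(name, tuple(round(c / 255, 5) for c in rgb))
# darkred (0.5451, 0.0, 0.0)     -- matches color.scm's 0.54509803...
# darkgreen (0.0, 0.39216, 0.0)  -- matches color.scm's 0.39215686...
```

Relative to 0.5, that puts red about 9% high and green about 22% low, which lines up with the rough "10% vs. 30%" observation: different standards, not a bug.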