Re: \hideNotes and MIDI Note_performer

2013-01-07 Thread Johannes Rohrer
Adam Spiers  adamspiers.org> writes:

> I've noticed that MIDI generation doesn't honour transparent notes,
> e.g. in
> 
>   f8( \hideNotes \grace {  c16 \glissando } \unHideNotes f8)
> 
> a NoteOn event is generated for the c16.  Hopefully I should be able
> to address this if someone gives me a few pointers.  My first guess
> was to tweak Note_performer::process_music() by adding something like:
> 
> if (to_boolean (n->get_property ("transparent")))
>   break;

It won't be that simple, I'm afraid. "transparent" is not an event property that 
you can read here; it is a grob property. \hideNotes presets it for various 
graphical objects (Dots, NoteHead, Stem, Beam, Accidental, Rest, TabNoteHead), 
using override commands like this:

  \override NoteHead.transparent = ##t

But during MIDI generation, no such objects are ever created.

From a note event, Note_performer creates an AudioNote object, a type of 
AudioElement; these are formally analogous to grobs. It would be nice if you 
could override a property here as well, like this:

  \override AudioNote.mute = ##t

Unfortunately, the AudioElement class, which is rather primitive compared to 
Grob, does not currently provide any Scheme property interface. (Changing this 
is an item on my imaginary LilyPond project list, but I am still learning 
myself.)

For now, working around the problem with tags might be more productive?

Best regards,

Johannes


___
lilypond-user mailing list
lilypond-user@gnu.org
https://lists.gnu.org/mailman/listinfo/lilypond-user


Re: \hideNotes and MIDI Note_performer

2013-01-07 Thread Johannes Rohrer
Adam Spiers  adamspiers.org> writes:
> On Mon, Jan 7, 2013 at 9:30 AM, Johannes Rohrer  johannesrohrer.de> wrote:
>> Adam Spiers  adamspiers.org> writes:
>>
>>> I've noticed that MIDI generation doesn't honour transparent notes,
>>> e.g. in
>>>
>>>   f8( \hideNotes \grace {  c16 \glissando } \unHideNotes f8)
>>>
>>> a NoteOn event is generated for the c16.
...
>> For now, working around the problem with tags might be more productive?
> 
> OK, but I'm not sure how tags could be used for this.  Are they
> available to Note_performer, or did you mean something else?

Something else; I was thinking of a simple .ly-level crutch, like
the following pattern, written out for your example above:

---

mymusic = \relative f' {
  f8(
  \tag #'layoutonly {
    \hideNotes \grace { c16 \glissando } \unHideNotes
  }
  f8)
}

\score {
  \mymusic
  \layout { }
}

\score {
  \removeWithTag #'layoutonly \mymusic
  \midi { }
}

---

Not ideal, but it gets the job done. You could still encapsulate
the \tag + \hideNotes/\unHideNotes part into a music function. And
in real-life scores you often need a separate MIDI \score block
anyway, e.g. for repeat unfolding.
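
For example, an untested sketch (the name hideForLayout is just an
illustration; this uses the define-music-function syntax current in
the 2.16 series, with explicit parser/location arguments):

---

hideForLayout =
#(define-music-function (parser location music) (ly:music?)
   ;; Engrave MUSIC invisibly; a MIDI \score can then drop it
   ;; entirely with \removeWithTag #'layoutonly.
   #{ \tag #'layoutonly { \hideNotes #music \unHideNotes } #})

mymusic = \relative f' {
  f8( \hideForLayout { \grace { c16 \glissando } } f8)
}

---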

Best regards,

Johannes




Re: Creating LilyPond Object Models

2015-04-23 Thread Johannes Rohrer
* 2015-04-23 01:29 +0200:
> Translators are program elements that convert music expressions to output.
>  Engravers are translators that create printed output.  Performers are
> translators that create midi output.
> 
> Translators examine the music expressions that are contained in the
> context, and create output elements.  For the case of engravers (which
> create graphical output), the output elements are grobs.  The grobs have
> properties that are used to create their appearance on the page.

This is very simplified. Translators do not operate on music expressions
directly, and music expressions are not themselves contained in
contexts. This level of understanding may get you relatively far as a
user, but is not even sufficient for reading the Internals Reference.

It has been a while since I last tried to wrap my head around this, but
from memory:

Program elements called Iterators turn music expressions into a
time-ordered stream of Events sorted into contexts. The different types
of Events produced at this stage are listed here:



Events are categorized in a hierarchy of Event Classes. These are listed
here:



Translators selectively (based on assigned Event Classes) "accept"
Events from the stream that were sorted into "their" context and process
them. Typically, they will produce "output objects" (for Engravers:
layout objects, aka Grobs; for Performers: Audio_items) and "announce"
these. Other Translators can "acknowledge" certain types of announced
objects to process them further.
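
With recent LilyPond versions you can watch this cycle from Scheme,
using the make-engraver macro described in the Extending manual. A
minimal illustrative sketch (the ly:message calls are only there to
show when each hook fires):

---

\layout {
  \context {
    \Voice
    \consists
    #(make-engraver
      ;; "accept" note events sorted into this Voice context
      (listeners
       ((note-event engraver event)
        (ly:message "listened to a note event")))
      ;; "acknowledge" NoteHead grobs announced by other engravers
      (acknowledgers
       ((note-head-interface engraver grob source-engraver)
        (ly:message "acknowledged a NoteHead grob"))))
  }
}

{ c'4 }

---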

This is mostly undocumented, I believe, although there are some
snippets in the Contributor's Guide


and some helpful scattered mailing list posts.


Best regards,

Johannes
