This has nothing to do with GPT4, or guitars, so it might be considered
off-topic :), but I'm aware of a paper by Percival et al. which
addresses sight-reading exercise generation:
https://www.researchgate.net/publication/235925970_Generating_Targeted_Rhythmic_Exercises_for_Music_Students_with_Constraint_Satisfaction_Programming
On 30/03/2023 05:57, Mike Blackstock wrote:
re. "Anybody else playing with GPT4 and Lilypond?"
I'm very much interested in exploring its use to generate graded
sight-reading material. My own instrument is classical guitar, and we're
not the best sight-readers[1]... it would be nice to have daily
sight-reading exercises generated for practice, with MIDI.
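To make the target concrete: each daily exercise could be a single
Lilypond file that yields both the engraved page and a MIDI rendering to
check against by ear, along these lines (a minimal sketch; the notes and
the variable name are just placeholders):

    \version "2.24.0"

    exercise = \relative c' {
      \time 4/4
      c4 d e f | g2 g | a4 g f e | c1 |
    }

    \score {
      \new Staff \exercise
      \layout { }   % engraved notation for the reading exercise
      \midi { }     % MIDI file to check the attempt against
    }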
I could donate the use of a QEMU/KVM server instance for working on a
project of that sort.
[1] Guitarist John Williams:
"Another thing I’ve noticed in master classes, is that players will
come on and play the most
difficult solo works from memory, and yet if you give them a part to
play in one of the easier
Haydn String Quartets, as I often do, they’re lost in no time, and
have a very poor sense of
ensemble or timing. Guitarists are among the worst sight-readers I’ve
come across.
Julian Bream and I are both dead average sight-readers by orchestral
standards,
but among guitarists, we are [considered] outstanding! "
https://guitarteacher.com.au/interview/john-williams-interview/
On Wed, 29 Mar 2023 at 18:44, Saul Tobin <saul.james.to...@gmail.com> wrote:
I've seen some examples of other people succeeding in getting ChatGPT
with GPT4 to compose simple music in other text-based music formats.
I've had limited success getting it to output Lilypond code. It is able
to structure the code correctly, with a score block, nested contexts,
appropriately named variables, and bar checks at the end of each
measure. It seems to struggle to create rhythms that fit within the time
signature beyond extremely simple cases. It also seems to struggle to
understand which octave pitches will end up in when using relative mode.
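For reference, the kind of skeleton it can get right, alongside the
relative-mode behaviour that trips it up, looks something like this (my
own hand-written sketch, not GPT4 output):

    \version "2.24.0"

    melody = \relative c' {
      \time 3/4
      c4 d e |   % starts at middle C (c')
      f g a |    % each pitch lands within a fourth of the previous one
      b c d |    % so this c is c'', an octave above where we started
      c2. |      % the octave keeps drifting unless you add ' or , marks
    }

    \score {
      \new Staff \melody
    }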
It also seems to have a lot of trouble keeping track of the relationship
between notes entered in different simultaneous expressions. Even when
asked simply to repeat back which notes appear in each voice on each
beat, GPT4 frequently gives stubbornly incorrect answers about the music
it generated. This makes it very difficult to improve its output by
giving feedback.
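A tiny example of the kind of thing it misreads: two voices entered as
simultaneous expressions, where answering "which notes sound on each
beat" means reading across both expressions at once (again my own
sketch):

    \version "2.24.0"

    upper = \relative c'' { c4 b a g | f2 e | }
    lower = \relative c' { e4 d c b | a2 gis | }

    \score {
      \new Staff <<
        \new Voice { \voiceOne \upper }
        \new Voice { \voiceTwo \lower }
      >>
    }

    % beat 1 of bar 1 sounds c'' against e', beat 2 sounds b' against d',
    % and so on; it is this cross-voice alignment that GPT4 gets wrong.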
I'm curious whether anybody else has tried playing with this. I have to
imagine that GPT4 has the potential to produce higher-quality Lilypond
output, given some of the other impressive things it can do. Perhaps it
needs to be provided with a large volume of musical repertoire in
Lilypond format.
--
https://blackstock.media