I have been reading Jeff Hawkins' _1000 Brains_ which is roughly *his*
take on AI from the perspective of the Neuroscience *he* has been doing
for a few decades, including building models of the neocortex.
What struck me strongly was how much *I* expect anything I'd want to
call artificial *co
Off the top of my head, I can see 3 ways to get music out of the current chat
interfaces:
1) Algorithmic music, e.g. C programs like this:

   #include <stdio.h>
   int main(void) {
       for (int t = 0;; t++)
           putchar((((t/12)>>8 & t) - (t<<4)) & (((t/6)>>6 & t) + (t<<2)));
   }
The code I've gotten out of ChatGPT has
What we really need to try is to have 2 APIs open, one to Bard and one to
ChatGPT and *conduct* them to play together ... or maybe 4 pipes open so that
Bard's output is part of the prompt, dovetailed with the conductor, for GPT and
vice versa. Surely someone out there is submitting Bard output
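The "conduct them to play together" loop might look something like this. It's only a sketch: ask_bard and ask_gpt are hypothetical stand-ins (stubbed here), not real Bard or ChatGPT API calls, and the conductor is just a fixed theme string dovetailed into each prompt.

```python
# Sketch of "conducting" two chat models: each turn, one model's prompt
# is the conductor's theme plus the other model's previous output.
# ask_bard / ask_gpt are stubs, NOT real API calls.

def ask_bard(prompt: str) -> str:
    # Stub: a real version would call the Bard API here.
    return f"[Bard riffs on: {prompt[-40:]}]"

def ask_gpt(prompt: str) -> str:
    # Stub: a real version would call the ChatGPT API here.
    return f"[GPT riffs on: {prompt[-40:]}]"

def conduct(theme: str, turns: int = 4) -> list[str]:
    """Alternate the two players, feeding each one the conductor's theme
    dovetailed with whatever the other player just produced."""
    transcript = []
    last = theme
    players = [("Bard", ask_bard), ("GPT", ask_gpt)]
    for i in range(turns):
        name, ask = players[i % 2]
        prompt = f"Conductor: {theme}\nOther player just played: {last}"
        last = ask(prompt)
        transcript.append(f"{name}: {last}")
    return transcript

for line in conduct("trade fours over a 12-bar blues"):
    print(line)
```

With real API clients dropped into the two stubs, the same loop gives the "4 pipes" setup: each model's output becomes part of the other's next prompt.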
Steve, here's what I would query you wrt this thread on communication. I
hit the polite poly first:
Summarize two science fiction works that explore the theme of faceted
ontologies that create boundary spanning objects that allow communication
between wildly different life forms
1.
"Emb
I am certain that AIs can generate music, probably in the style of famous
composers. I just have not seen such examples in the frenzy of "guess what
ChatAIs just did."
My point focused on the possibility of "collaborative creation" à la pair
programmers, jazz musicians, improv comedians, etc. W
I'm pretty sure I posted this a while back: https://www.riffusion.com/ But I am
losing my mind. So maybe not.
On 4/6/23 10:19, Prof David West wrote:
I am certain that AIs can generate music, probably in the style of famous composers. I
just have not seen such examples in the frenzy of "guess what ChatAIs just did."