Then there's
https://www.theguardian.com/books/article/2024/sep/09/the-big-idea-how-the-protege-effect-can-help-you-learn-almost-anything
which is about the benefits of teaching anyone, but the author chose to
teach a chatbot. Irene countered with
https://en.wikipedia.org/wiki/Rubber_duck_debugging
All I can make of this result is that, for now, some users of chatbots may
not think that chatbots have residual bias inherited from training material,
or that the training protocol has removed or qualified contradictions. (The
latter is likely the case given the nature of the optimization.)
Glen -
I appreciate your speaking more directly to these thoughts/ideas than we
have been here. I have been moved by your assertions about vocal
(linguistic?) grooming since you first introduced them. I recently
finished reading Sapolsky's "A Primate's Memoir", which adds another
dimension.
The conversations described by glen, as well as those previously posted, take
place with 'sanitized' versions of chatbots: i.e., those from which
racist/sexist bias, but also entire chunks of subject matter, have to a degree
been removed.
Seemingly within seconds of the first releases of chatAIs, users were
DaveW -
I am curious about what that game of "whack-a-mole" might have looked
like in those early days. I was a laggy enough adopter that I only
noticed a few times when a thread or subject GPT (3.5) had been
indulging me in suddenly became Verboten.
Gemini is *much* more prone to resp
I'm reminded of technical books like _JavaScript: The Good Parts_.
One could imagine that unaligned LLMs could be valuable, as in _Minority
Report_, or for writing addictive video games -- characterize the
distribution of deviant behaviors with high fidelity, while sampling
unobserved
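
If the trailing idea is to model the distribution of observed behaviors
and then deliberately draw from its low-density ("unobserved") regions, a
minimal sketch might look like the following. Everything here is an
illustrative assumption, not anything from the thread: the stand-in
embeddings, the Gaussian-mixture choice, and the 5% likelihood cutoff.

# Hypothetical sketch (not from the thread): characterize observed
# behaviors with a density model, then keep candidate samples the
# model deems unlikely, i.e. draws from thin regions of behavior space.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for embeddings of observed behaviors (assumed 8-dimensional).
observed = rng.normal(size=(500, 8))

# Fit a mixture model to characterize the observed distribution.
gmm = GaussianMixture(n_components=4, random_state=0).fit(observed)

# Propose candidates broadly, score them under the fitted model, and
# retain only the lowest-likelihood ones as "unobserved" behaviors.
candidates = rng.uniform(-4.0, 4.0, size=(10_000, 8))
log_density = gmm.score_samples(candidates)
cutoff = np.percentile(log_density, 5)  # bottom 5% by likelihood
novel = candidates[log_density < cutoff]
print(f"retained {len(novel)} low-density candidates")

The point is only the shape of the pipeline: high-fidelity density
estimation over what has been seen, plus deliberate sampling from where
the estimate thins out.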