On Sun, Dec 12, 2021 at 5:14 AM <[email protected]> wrote:
> On Saturday, December 11, 2021, at 11:41 PM, James Bowery wrote:
>
>> dynamical rather than statistical
>
> Isn't a dynamic model one whose neural configuration settings adapt as it
> learns? For example, for a given prompt, if it sees only 3 completions a
> thousand times, e.g. only "we went to room," "we went to stairs," or "we
> went to table," then it quickly learns to predict those 3 strongly, since
> it is sure no other type of concept can come next but those 3.
>
> Another example is weighting lower layers above higher layers in the
> averaged prediction for short matches, because lower layers are better
> learned; long sentences are rare. As it learns more, it can rely on the
> higher layers more, so you either set it to use a formula that trusts
> higher layers more over time, or set it to check whether the higher layer
> has been predicting well lately and, if so, use it again for the next
> (or a similar) prompt.

I was referring to the model, not to the learning process that produces the
model. See this animation from Nature
<https://www.nature.com/articles/s42256-019-0026-3>.

Also, my use of the word "dynamical" (which I've been sticking to as a 2x4
I use to whack the mule between the eyes so it starts pulling the
Algorithmic Information cart) is one that Chaitin recently said should be
abandoned in favor of the word "algorithmic" <https://youtu.be/Mtt6PWt0Kcg>.
All I can say is: "Amen, Chaitin. But, please, permit me my rhetorical
device."

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T22ce813ce07d9b1a-M7e9b9918b91f7d0cac55e782
Delivery options: https://agi.topicbox.com/groups/agi/subscription
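[Editor's note: the layer-trust idea in the quoted message — average the predictions of a short-context ("lower") layer and a long-context ("higher") layer, shifting trust toward whichever has predicted well lately — can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual method; all function names and the 0.05 adjustment rate are invented for the example, and a binary event is used to keep it minimal.]

```python
# Hypothetical sketch of adaptive layer mixing: blend two probability
# estimates for the same next symbol, and nudge the mixing weight toward
# the layer that assigned more probability to what actually happened.

def mix(p_low, p_high, w_high):
    """Weighted average of the lower-layer and higher-layer estimates."""
    return (1.0 - w_high) * p_low + w_high * p_high

def update_weight(w_high, p_low, p_high, outcome, rate=0.05):
    """Shift trust toward the layer that predicted the observed outcome
    better (outcome is True/False for a binary event in this sketch)."""
    q_low = p_low if outcome else 1.0 - p_low     # prob. lower layer gave the outcome
    q_high = p_high if outcome else 1.0 - p_high  # prob. higher layer gave it
    if q_high > q_low:
        w_high = min(1.0, w_high + rate)
    elif q_low > q_high:
        w_high = max(0.0, w_high - rate)
    return w_high

# Toy run: the long-context layer is consistently better, so its weight
# climbs over time, as in "with time trusts higher layers more".
w = 0.1
for _ in range(10):
    w = update_weight(w, p_low=0.6, p_high=0.9, outcome=True)
```

After ten correct calls favoring the higher layer, `w` has risen from 0.1 to 0.6, so `mix` now leans on the long-context prediction. Real context mixers (e.g. in PAQ-style compressors) use gradient updates on logistic weights rather than this fixed-step rule, but the trust-the-recent-winner mechanism is the same.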
