Oh, Google has already created a "mixture of experts" architecture. Interesting.
https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html
The amount of data they use to train and implement large language models is mind-boggling. I am curious what Google and OpenAI will present.
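For anyone curious what "mixture of experts" means in practice, here is a minimal sketch of top-1 gating, the basic idea behind the GLaM post linked above. The sizes, weights, and the moe_layer function are purely illustrative assumptions, not Google's actual configuration: a small gating network scores the experts and each token is processed only by its highest-scoring expert, which is how these models grow total parameters without growing per-token compute.

import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts = 16, 4          # illustrative sizes, not GLaM's
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Route each token to the single highest-scoring expert (top-1 gating)."""
    scores = x @ W_gate                                   # (tokens, n_experts)
    probs = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    chosen = probs.argmax(axis=-1)                        # one expert index per token
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        # only the chosen expert's weights touch this token, so per-token
        # compute stays flat no matter how many experts exist in total
        out[i] = probs[i, e] * (x[i] @ experts[e])
    return out

tokens = rng.normal(size=(8, d_model))                    # 8 dummy token embeddings
print(moe_layer(tokens).shape)                            # (8, 16)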
It depends on whether it is given boundaries between the datasets. Is it learning one distribution or two?
From: Friam On Behalf Of Jochen Fromm
Sent: Sunday, February 5, 2023 4:38 AM
To: The Friday Morning Applied Complexity Coffee Group
Subject: [FRIAM] Datasets as Experience
Would a CV of a large