Would a CV of a large language model contain all the datasets it has seen? As
adaptive agents of our selfish genes, we are all trained on slightly different
datasets. A Spanish speaker is a person trained on a Spanish dataset. An
Italian speaker is a person trained on an Italian dataset, and so on. Because
speakers of different languages are trained on different datasets, the same
sentence is easy for a native speaker but impossible to understand for those who do not
know the language. Do all large language models need to be trained on the same
datasets? Or could many large language models be combined into a society of
mind, as Marvin Minsky describes in his book "The Society of Mind"? Now that
they are able to understand language, it seems possible for one large language
model to reply to questions from another. And we would even be able to
understand the conversations.

-J.
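
P.S. Here is a minimal sketch of what such a dialogue loop might look like.
The ask() helper is purely hypothetical, a stand-in for whatever completion
API each model actually exposes; the point is only that the transcript stays
in plain language we can read:

    def ask(model, prompt):
        # Hypothetical stand-in: a real version would send the prompt to
        # the model's completion API and return its generated reply.
        return f"[{model} replying to: {prompt!r}]"

    def society_of_minds(model_a, model_b, opening, turns=3):
        """Let two language models converse, keeping a human-readable log."""
        message = opening
        transcript = []
        for _ in range(turns):
            reply = ask(model_a, message)    # model A answers the last message
            transcript.append((model_a, reply))
            message = ask(model_b, reply)    # model B answers A in turn
            transcript.append((model_b, message))
        return transcript

    for speaker, text in society_of_minds("modelA", "modelB",
                                          "Do we all need the same datasets?"):
        print(f"{speaker}: {text}")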