Roger,
Thanks for the info and references.
As the great philosopher Britney Spears once said, "Why do you have a brain
if you're not willing to change it?" So for now, I'll switch my stance from
"pro-emergence-in-LLMs" to "I have no idea what's going on."

On Tue, 9 May 2023 at 04:25, Roger Critchlow <r...@elf.org> wrote:

> This evening's hackernews contribution:
>
>
> https://hai.stanford.edu/news/ais-ostensible-emergent-abilities-are-mirage
>
> AI’s Ostensible Emergent Abilities Are a Mirage
> According to Stanford researchers, large language models are not greater
> than the sum of their parts.
>
> Which is a gloss on https://arxiv.org/abs/2304.15004:
>
> Recent work claims that large language models display emergent abilities,
>> abilities not present in smaller-scale models that are present in
>> larger-scale models. What makes emergent abilities intriguing is two-fold:
>> their sharpness, transitioning seemingly instantaneously from not present
>> to present, and their unpredictability, appearing at seemingly
>> unforeseeable model scales. Here, we present an alternative explanation for
>> emergent abilities: that for a particular task and model family, when
>> analyzing fixed model outputs, one can choose a metric which leads to the
>> inference of an emergent ability or another metric which does not. Thus,
>> our alternative suggests that existing claims of emergent abilities are
>> creations of the researcher's analyses, not fundamental changes in model
>> behavior on specific tasks with scale.
>
>
> -- rec --
>
>
> On Mon, May 8, 2023 at 12:33 AM Pieter Steenekamp <
> piet...@randcontrols.co.za> wrote:
>
>> Sorry, I forgot to add the url of the interview.
>> The Current State of Artificial Intelligence with James Wang -
>> YouTube <https://www.youtube.com/watch?v=V6WL4X6pmCY>
>>
>>
>> On Mon, 8 May 2023 at 08:27, Pieter Steenekamp <
>> piet...@randcontrols.co.za> wrote:
>>
>>> I am very excited about the basic idea that neither Google nor any other
>>> Big Tech company has a moat, as per the hackernews reference above.
>>> Also very interesting is the interview with James Wang of Cerebras, in
>>> which he makes a strong case (convincing to me, at any rate) that open
>>> source large language models are going to become much more prominent
>>> than those of Big Tech.
>>>
>>> I quote from the interview's description on YouTube:
>>> "Scaling laws are as important to artificial intelligence (AI) as the
>>> law of gravity is in the world around us. AI is the empirical science of
>>> this decade, and Cerebras is a company dedicated to turning
>>> state-of-the-art research on large language models (LLMs) into open-source
>>> data that can be reproduced by developers across the world. In this
>>> episode, James Wang, an ARK alum and product marketing specialist at
>>> Cerebras, joins us for a discussion centered around the past and the future
>>> of LLM development and why the generative pre-trained transformer (GPT)
>>> innovation taking place in this field is like nothing that has ever come
>>> before it (and has seemingly limitless possibilities). He also explains the
>>> motivation behind Cerebras’ unique approach and the benefits that their
>>> architecture and models are providing to developers."
>>>
>>> On Mon, 8 May 2023 at 01:09, Merle Lefkoff <merlelefk...@gmail.com>
>>> wrote:
>>>
>>>> Thank you Roger.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
