Ah, no, just needed clarification. Thanks for providing it!

My insight into the failure of the industry is a general one: They are
racing ahead into a new tech bubble. There is very little metacognition
happening right now. That will come later, when things have cooled down a
bit and the more rational characters start to take over.
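Since the question below may interest others on the list: the connection
between training and compression is the standard prediction-as-compression
correspondence. An arithmetic coder driven by a predictive model spends
about -log2 p(symbol) bits per symbol, so the achievable compressed size is
just the model's cumulative log-loss, and the decoder has to run the *same*
model to regenerate the predictions, so compressing exercises training and
decompressing exercises inference. A toy sketch (my own illustration, not
anyone's benchmark code), assuming an adaptive add-one-smoothed unigram
model over an alphabet known to both sides:

import math
from collections import Counter

def code_length_bits(text: str) -> float:
    # Bits an idealized arithmetic coder would spend on `text` when
    # driven by an adaptive, add-one-smoothed unigram model.
    counts = Counter()
    alphabet = sorted(set(text))  # assume both sides share the alphabet
    seen = 0
    bits = 0.0
    for ch in text:
        # Inference: predict the next symbol from everything seen so far.
        p = (counts[ch] + 1) / (seen + len(alphabet))
        bits += -math.log2(p)
        # Training: fold the observed symbol back into the model.
        counts[ch] += 1
        seen += 1
    return bits

sample = "the better the model, the fewer the bits"
print(f"{code_length_bits(sample):.1f} bits vs {8 * len(sample)} bits raw")

A decoder runs the identical predict-then-update loop, which is the sense
in which a winner must both train (compress) and infer (decompress).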

On Tue, Jul 23, 2024 at 2:09 PM James Bowery <jabow...@gmail.com> wrote:

> I directed the question at you because you are likely to understand how
> different training and inference are, since you said you "pay my bills by
> training" -- so, far from levelling a criticism at you, I was hoping you
> had some insight into the failure of the industry to use training
> benchmarks as opposed to inference benchmarks.
>
> Are you saying you don't see the connection between training and
> compression?
>
> On Mon, Jul 22, 2024 at 8:08 PM Aaron Hosford <hosfor...@gmail.com> wrote:
>
>> Sorry, I'm not sure what you're saying. It's not clear to me whether
>> this is intended as a criticism of me or of someone else. Also, I think
>> I lack the context to draw the connection between what I've said and the
>> topic of compression/decompression.
>>
>> On Mon, Jul 22, 2024 at 5:17 PM James Bowery <jabow...@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Jul 22, 2024 at 4:12 PM Aaron Hosford <hosfor...@gmail.com>
>>> wrote:
>>>
>>>> ...
>>>>
>>>> I spend a lot of time with LLMs these days, since I pay my bills by
>>>> training them....
>>>>
>>>
>>> Maybe you could explain why people who get their hands dirty training
>>> LLMs, and are therefore acutely aware of the profound difference between
>>> training and inference (if for no other reason than that training takes
>>> orders of magnitude more resources), seem to think that these benchmarks
>>> belong on the inference side of things. The Hutter Prize has, *since
>>> 2006*, covered both the training *and* the inference side, because a
>>> winner must both train (compress) and infer (decompress).
>>>
>>> Are the "AI experts" really as oblivious to the obvious as they appear
>>> and if so *why*?
>>>
