On Tue, 13 May 2025 at 10:24, Holger Levsen <hol...@layer-acht.org> wrote:
>
> On Sat, May 10, 2025 at 12:21:26AM +0200, Aigars Mahinovs wrote:
> > I find that thinking to be rather limited. LLM are not self-aware or
> > self-operating entities. There is always a human that uses an LLM.
> > It's their freedom that you are discounting.
>
> this freedom needs to be valued against what it costs. sure i'm free to
> fly my private jet wherever I want, yet it has some costs for everyone.
> same with LLMs.

In terms of money or something else?

This was in response to Russ articulating that: "I don't work on free
software because I want to make something easier for Google's LLM. I
work on free software because I want to give freedom and control to
human beings."

The false assumption here is that making "something easier for LLMs"
will only benefit Google (who are nowhere near the top in terms of AI
development, btw) and not "human beings". That quite obviously fails
to take into account any freedom and control that an LLM *does* in
fact give its users, who are also human beings.

You are saying "freedom needs to be valued against its costs". Sure.
But before, in this discussion, this freedom was not valued *at all*.

> > Moreover - there are *far* more people that can use an LLM to benefit
> > from its gathered knowledge compared to the number of people that have
> > spent decades learning programming like we have. Hating on LLMs hurts
> > the freedom of a lot more people.
>
> citation needed, I could also say: this sounds like a human hallucination to
> me.
>
> LLMs don't work, all companies building them lose enormous amounts of
> money so far and their best plan how to make them profitable is to make
> LLMs figure out that part. LOL!

Of course LLMs work. I can run one of several LLMs on my local GPU,
and they genuinely help me figure out coding tasks. Especially in
programming languages outside my core competency, or for trickier
corner cases, asking a coding-specific LLM produces a good summary of
what the code *actually* does far faster than me reading a bunch of
man pages and running a bunch of experiments. And it can very easily
and reliably apply common coding patterns, adapted to my surrounding
code. At that point I am reviewing, testing and refining code instead
of having to write all of it myself, including the boring parts. It is
like pair-programming with a great student who has learned all the
manuals (but might be a bit slow to grasp what the purpose of it all
is). And it uses a few pennies' worth of electricity at most. It sure
is profitable to me.

If you are referring to larger AI companies struggling to recoup their
investments in training and inference ... that would be the first time
someone has argued to me that something might not be free software
*because* it might be hard to make a profit selling that software
right now :D

But seriously: computer graphics in the past also took months of
rendering and massive amounts of money for very mediocre results. Now
a commodity graphics card can render movie-quality graphics in real
time for pennies. It is software, and we all know how software
evolves: the top of the line gets heavier and bulkier as features are
added, while the simpler variants (still delivering the same kind of
functionality that top-of-the-line software delivered a couple of
years ago) become more and more optimized and easy to run on commodity
hardware. There are literally LLMs running locally on phones nowadays.
Quite limited compared to the best models of today, but better than
the first very large models.
-- 
Best regards,
    Aigars Mahinovs
