On 16/05/2025 00:48, Konstantin Kharlamov wrote:
On Fri, 2025-05-16 at 00:29 +0930, Justin Zobel wrote:
On 15/05/2025 23:57, Konstantin Kharlamov wrote:
On Thu, 2025-05-15 at 16:09 +0200, Felix Ernst wrote:

Late reply, but I also wanted to mention that I am 100% in support of any anti-AI messaging and policies we might choose.

The wording as linked by Akseli in their first post seems like a good starting point in that regard:

Other projects have already done something similar, see for example:
https://discourse.gnome.org/t/loupe-no-longer-allows-generative-ai-contributions/27327
The only use of AI I support needs all its training data to be licenced in a way that allows use for AI training, e.g. under a CC0 or WTFPL licence. This way I don't see ethical issues, because the copyright holders have then given some sort of consent for this use.

I wouldn't even mind if we went one step further and actively promoted e.g. Plasma as "free of AI". This does not need to be fully true, but this would be more of an activism and marketing angle I would like to see. There is a good chance though that this would not be a good use of our time, but it would align with KDE Eco IMO. (I know that there are also great uses of AI, but public messaging needs to be clear and easy to understand, and there is still enough pro-AI marketing out there to the point that taking the opposite stance seems sensible to me.)
I'm not sure taking a fully opposite stance would be beneficial for anyone. Pro-AI has a point. And most anti-AI people, in my experience, are actually anti-unlicensed-AI, i.e. not anti-AI in general. That's because being fully "anti-AI" has no benefits, so there aren't many people who are actually fully anti-AI. Hence, making a public stance of "we're all anti-AI" would be harmful, not only in a marketing sense but also technologically, because it would require all KDE app maintainers to remove support for AI tools (think of the Kate completion plugins, for example), which sounds like a nice way to introduce conflict in the community.
Ethical issues aside, AI has other impacts as well, most notably environmental, via the huge amounts of energy required first to train AI and then even to use it.
Maybe it's just me, but I never understood the reasoning behind "AI consumes too much power". I mean, I am all for ecology, but that implies improving technologies, not abandoning them. Maybe I'm missing the point, but to me that reasoning seems equivalent to "stop using phones and computers, because they consume energy".

The power used by a mobile phone cannot be compared in the slightest to the power used by data centres to train AI.

Just straight off the top of my head, your phone and computer automatically go to sleep by default. Servers do not.

It's also a matter of how these data centres are powered. Corporations will often go for the cheapest source, which is frequently dirty, non-renewable energy.

The way I see it, AI has three main problems: legal, ethical and environmental.

Justin
