Re: Reviving lightdm-kde-greeter upstream

2023-11-30 Thread Ben Cooksley
On Thu, Nov 30, 2023 at 12:06 AM Albert Astals Cid wrote:

> On Tuesday, 28 November 2023 at 11:25:42 (CET), Ben Cooksley wrote:
> > On Tue, Nov 28, 2023 at 9:51 PM Anton Golubev wrote:
> > > On 11/25/23 02:14, Albert Astals Cid wrote:
> > > > Since you don't seem to be a KDE devel just yet this would probably
> > > > have to go through the https://community.kde.org/Incubator program.
> > >
> > > The instructions on the link say that I need to create a new project on
> > > invent.kde.org, but it already exists[3]. What should I do? (A quick
> > > search on the list did not yield any examples.)
> >
> > This is the first time that the revival of a project that has been
> > archived on invent.kde.org has taken place, so we're in new territory.
> > The whole point of archiving these projects, though, is to allow them
> > to be restored later on.
> >
> > To proceed here, we first need to get you a developer account - this
> > can be handled under the Incubator program, so you'll need everything
> > that goes along with that.
> >
> > From there, we can reinstate the unmaintained project and transfer it
> > to an appropriate namespace (likely Plasma, given this is a workspace
> > component, which would also be consistent with the SDDM KCM living
> > there).
>
> Not sure if Plasma would make sense here unless the Plasma team actually
> wants it, since AFAIU everything under Plasma gets released on Plasma
> releases, which may not be what is wanted here.
>

There are definitely repositories under Plasma that the Plasma team doesn't
release.

The Plasma group on invent refers more broadly to our Workspace products,
and a LightDM greeter falls into that realm.
Unless someone has a better idea of where to put it?


> Anyhow, do we have someone that wants to help Incubate this?
>
> Cheers,
>   Albert
>

Cheers,
Ben


>
> > That will have to be done by a Sysadmin, so please file a ticket once you
> > have a developer account. That will also allow us to facilitate catching
> > up the original KDE repository with the subsequent contributions that
> > happened in your Gitlab.com repository.
> >
> > Cheers,
> > Ben
> >
> > > > [1]: https://git.altlinux.org/gears/l/lightdm-kde-greeter.git
> > > > [2]: https://gitlab.com/golubevan/lightdm-kde-greeter
> > > > [3]: https://invent.kde.org/unmaintained/lightd


Interest in building an LLM frontend for KDE

2023-11-30 Thread Loren Burkholder
Howdy, everyone!

You are all undoubtedly aware of the buzz around LLMs for the past year. Of 
course, there are many opinions on LLMs, ranging from "AI is the future/endgame 
for web search or programming or even running your OS" to "AI should be avoided 
like the plague because it hallucinates and isn't fundamentally intelligent" to 
"AI is evil because it was trained on massive datasets that were scraped 
without permission and regurgitates that data without a license". I personally 
am of the opinion that while output from LLMs should be taken with a grain of 
salt and cross-examined against trustworthy sources, they can be quite useful 
for tasks like programming.

KDE obviously is not out to sell cloud services; that's why going to 
https://kde.org doesn't show you a banner "Special offer! Get 1 TB of cloud 
storage for $25 per month!" Therefore, I'm *not* here to talk about hosting a 
(paywalled) cloud LLM. However, I do think that it is worthwhile opening 
discussion about a KDE-built LLM frontend app for local, self-hosted, or 
third-party-hosted models.

From a technical standpoint, such an app would be fairly easy to implement. It 
could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't focused on 
a server mode) to host the actual LLM; both of those backends support a wide 
variety of hardware (including running on CPU; no fancy GPU required), as well 
as many open-source LLM models like Llama 2. Additionally, using Ollama could 
allow users to easily interact with remote Ollama instances, making this an 
appealing path for users who wish to offload LLM work to a home server or 
even from a laptop to a more powerful desktop.
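To make the "fairly easy" claim concrete: talking to a local or remote Ollama 
instance is essentially one HTTP call. This is only a minimal sketch, assuming 
Ollama's documented REST endpoint (POST /api/generate on its default port 
11434); a real frontend would stream tokens and handle errors.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port; point at a home server to offload work


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # "stream": False requests a single JSON reply instead of chunked tokens.
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a prompt to a (possibly remote) Ollama server and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the base URL is a parameter, "offloading to a more powerful desktop" 
is just a different hostname.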

From an ideological standpoint, things get a little more nuanced. Does KDE 
condone or condemn the abstract concept of an LLM? What about actual models we 
have available (i.e. are there no models today that were trained in a way we 
view as morally OK)? Should we limit support to open models like Llama 2 or 
would we be OK with adding API support for proprietary models like GPT-4? 
Should we be joining the mainstream push to put AI into everything or should 
we stand apart and let Microsoft have its fun focusing on AI instead of 
potentially more useful features? I don't recall seeing any discussion about 
this before (at least not here), so I think those are all questions that 
should be fairly considered before development on a KDE LLM frontend begins.

I think it's also worth pointing out that while we can sit behind our screens 
and spout our ideals about AI, there are many users who aren't really 
concerned about that and just like having a chatbot that responds to whatever 
they ask in what at least appears to be an intelligent manner. I have 
personally made use of AI while programming to help me understand APIs, and I'm 
sure that other people here have also had positive experiences with AI and plan 
to continue using it.

I fully understand that by sending this email I will likely be setting off a 
firestorm of arguments about the morality of AI, but I'd like to remind 
everyone to (obviously) keep it civil. And for the record, if public opinion 
comes down in favor of building a client, I will happily assume the 
responsibility of kicking off and potentially maintaining development of said 
client.

Cheers,
Loren Burkholder

P.S. If development of such an app goes through, you can get internet points by 
adding support for Stable Diffusion and/or DALL-E :)

[0]: https://github.com/jmorganca/ollama
[1]: https://github.com/ggerganov/llama.cpp



Re: Interest in building an LLM frontend for KDE

2023-11-30 Thread Andre Heinecke
Hi.

On Friday, 01 December 2023 03:53:22 CET Loren Burkholder wrote:
> they can be quite useful for tasks like programming.

I need it desperately for intelligent spellchecking / grammar fixes 😅

> From a technical standpoint, such an app would be fairly easy to implement.
> It could rely on Ollama[0] (or llama.cpp[1], although llama.cpp isn't
> focused on a server mode) to host the actual LLM; either of those backends
> support a wide variety of hardware (including running on CPU; no fancy GPU
> required), as well as many open-source LLM models like Llama 2.
> Additionally, using Ollama could allow users to easily interact with remote
> Ollama instances, making this an appealing path for users who wished to
> offload LLM work to a home server or even offload from a laptop to a more
> powerful desktop.

I played around with Gpt4all a bit and liked it very much, especially since I 
could alternatively put my OpenAI API key into a generalized frontend. Since 
hardware will only get better, local solutions for some easy tasks might also 
make sense. I can totally see a use for a generalized KDE frontend or even a 
Frameworks API to interact with LLMs.

> From an ideological standpoint, things get a little more nuanced. Does KDE
> condone or condemn the abstract concept of an LLM? What about actual models
> we have available (i.e. are there no models today that were trained in a way
> we view as morally OK)? Should we limit support to open models like Llama 2
> or would we be OK with adding API support for proprietary models like GPT-4?

Please leave ideology out of it; we are doing Free Software. So if you 
"ideologically" do not want to have something, I have the freedom to come along 
with my different ideology and just do it anyway. If you really want to work on 
something like that, and not just start some academic discussion, keep things 
as generalized and backend-agnostic as possible, IMO.
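The "backend agnostic" suggestion above could be sketched as a small interface 
that frontends code against, with concrete backends hiding the wire details. 
All names here (LlmBackend, EchoBackend, summarize) are hypothetical 
illustrations, not an existing KDE or Frameworks API.

```python
from abc import ABC, abstractmethod


class LlmBackend(ABC):
    """Minimal backend-agnostic interface; concrete subclasses hide the transport."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""


class EchoBackend(LlmBackend):
    """Stand-in backend for testing; a real one would wrap Ollama or the OpenAI API."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def summarize(backend: LlmBackend, text: str) -> str:
    """An example frontend feature written purely against the interface."""
    return backend.complete(f"Summarize briefly: {text}")
```

With this shape, swapping a local model for a hosted one is a constructor 
change, and the ideological choice of which backends to ship stays out of the 
application code.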

> Should we be joining the mainstream push to put AI into everything or should
> we stand apart and let Microsoft have its fun focusing on AI instead of
> potentially more useful features? I don't recall seeing any discussion about
> this before (at least not here), so I think those are all questions that
> should be fairly considered before development on a KDE LLM frontend begins.

I don't think so. I have the slight feeling that you want to start an abstract 
discussion here and then magically the "KDE Community" will develop something. 
Just do it or don't. It will always be the user's freedom to use it or not. 
I would love to have an optional KMail plugin that interacts with an LLM. 
Others might not 🤷🏻‍♂

> I fully understand that by sending this email I will likely be setting off a
> firestorm of arguments about the morality of AI, but I'd like to remind
> everyone to (obviously) keep it civil. And for the record, if public opinion
> comes down in favor of building a client, I will happily assume the
> responsibility of kicking off and potentially maintaining development of said
> client.

I don't really see why this should kick off a firestorm of arguments. It's all 
about freedom. It's not like you are proposing to forcefully feed all the 
users' data into a remote LLM as a requirement to get Plasma to start.

Start a project on invent, create something useful, and then we'll see where it 
goes: how many users it finds, how well it integrates. I would happily join 
you, and I am very interested in this. A simple first useful prototype for me 
would be KMail MessageComposer integration where it could help me write mails, 
just like ELOPe 😀

I am currently working on a prototype that combines 
https://invent.kde.org/schwarzer/klash/ with a local LibreTranslate instance to 
at least create fuzzy translations for po files and handle some trivial 
translation tasks automatically. I think this is slightly related.
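The core of such a prototype is small: find untranslated msgids and send them 
to LibreTranslate's documented /translate endpoint. This sketch is a 
deliberately naive illustration; the regex ignores plural forms, multi-line 
strings, and obsolete entries, which real code would handle with a proper po 
library, and the local server URL is an assumption.

```python
import json
import re
import urllib.request

LIBRETRANSLATE_URL = "http://localhost:5000"  # assumed local LibreTranslate instance


def untranslated_msgids(po_text: str) -> list[str]:
    """Extract msgids whose msgstr is empty (naive: single-line entries only)."""
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', po_text)
    return [msgid for msgid, msgstr in pairs if msgid and not msgstr]


def translate(text: str, source: str, target: str) -> str:
    """Ask a LibreTranslate server for a (fuzzy) machine translation."""
    payload = json.dumps(
        {"q": text, "source": source, "target": target, "format": "text"}
    ).encode()
    req = urllib.request.Request(
        LIBRETRANSLATE_URL + "/translate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["translatedText"]
```

Marking the results as fuzzy, as Andre suggests, keeps a human translator in 
the loop for every machine-produced string.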


Best Regards,
Andre

-- 
GnuPG.com - a brand of g10 Code, the GnuPG experts.

g10 Code GmbH, Erkrath/Germany, AG Wuppertal HRB14459
GF Werner Koch, USt-Id DE215605608, www.g10code.com.

GnuPG e.V., Rochusstr. 44, D-40479 Düsseldorf.  VR 11482 Düsseldorf
Vorstand: W.Koch, B.Reiter, A.Heinecke    Mail: bo...@gnupg.org
Finanzamt D-Altstadt, St-Nr: 103/5923/1779.   Tel: +49-211-28010702



Re: Interest in building an LLM frontend for KDE

2023-11-30 Thread Ethan Barry
On Thursday, November 30th, 2023 at 8:53 PM, Loren Burkholder wrote:
> [...]
> 
> However, I do think that it is worthwhile opening discussion about a 
> KDE-built LLM frontend app for local, self-hosted, or third-party-hosted 
> models.
> 
> [...]


I am anti-LLM on the grounds that the training sets were created without the 
original authors' consent. I see no issue with a libre/ethical LLM, if there is 
one, though. If a developer or team of developers wants to implement a Qt and 
KDE-integrated LLM app, I have no problem with that, but I believe KDE as an 
organization should probably steer clear of such a thorny subject. It's sure to 
upset a lot of users no matter what position is taken. On the other hand, for 
those people who do make use of AI tools, a native interface would be nice, 
especially one as feature-ful as you're describing...

Regards,

Ethan B.