Sam Hartman writes:
> Dear lumin:
>
> First, thanks for all your work on AI and free software.
> When I started my own AI explorations, I found your ML policy
> inspirational in how I thought about AI and free software.
I'd like to pile on and repeat this sentiment; thank you, Mo!
> With my De
On 2025-02-10 19:12, Christian Kastner wrote:
> My concern, however, is bad actors.
>
> I often thought that the simplest way to "prove" that a free model trained
> on private data cannot really be free is to train one that purposefully
> introduces an undocumented bias such that it creates a
> self-
On Mon, 2025-02-10 at 19:12 +0100, Christian Kastner wrote:
> > Preferred Form of Modification
> > ==============================
> > [...]
> > As a practical matter, for the non-monopolies in the free software
> > ecosystem, the preferred form of modification for base models is the
> > model thems
Hi all,
On 2025-02-05 15:45, Sam Hartman wrote:
> First, thanks for all your work on AI and free software.
> When I started my own AI explorations, I found your ML policy
> inspirational in how I thought about AI and free software.
Same here -- thanks, Mo!
> I have come to believe that:
>
> 1)
> "Sam" == Sam Johnston writes:
Sam> On Fri, 7 Feb 2025 at 16:04, Stefano Zacchiroli wrote:
>> I don't think we should focus our conversation on LLMs much, if
>> at all.
Sam> Just as the software vendor doesn't get to tell users what
Sam> constitutes an improvement for
On Fri, 7 Feb 2025, Sam Johnston wrote:
>On Fri, 7 Feb 2025 at 08:48, Thorsten Glaser wrote:
>>
>> I’d like to remind you that these huge binary blobs still contain,
>> in lossily compressed form, illegally obtained and unethically
>> pre-prepared, copies of copyrighted works, whose licences are
On Fri, 7 Feb 2025 at 16:04, Stefano Zacchiroli wrote:
> I don't think we should focus our conversation on LLMs much, if at all.
While I agree LLMs tend to be the tail wagging the dog in AI/ML
discussion, the thread focuses on LLMs and the resulting policy will
apply to them.
> The reason is tha
While I'm still digesting the very impactful (for me) message by the
other Sam (hartmans), a quick but important note on the following:
On Fri, Feb 07, 2025 at 01:35:00PM +0100, Sam Johnston wrote:
> "Large language models (LMs) have been shown to memorize parts of
> their training data, and when
On Fri, 7 Feb 2025 at 08:48, Thorsten Glaser wrote:
>
> I’d like to remind you that these huge binary blobs still contain,
> in lossily compressed form, illegally obtained and unethically
> pre-prepared, copies of copyrighted works, whose licences are not
> honoured by the proposed implementations.
Sam Hartman wrote:
> TL;DR: I think it is important for Debian to consider AI models free
> even if those models are based on models that do not release their
> training data. In the terms of the DFSG, I think that a model itself is
> often a preferred form of modification for creating derived works.
He
M. Zhou dixit:
>I do not see how proposal A harms the ecosystem. It just prevents huge
>binary blobs from entering Debian's main section of the archive. It
>does not stop people from uploading the binary blobs to non-free
>section.
I’d like to remind you that these huge binary blobs still contain,
Hi Sam,
Thank you for the input. I see your point, and those concerns are exactly why I
wrote proposal B in my draft. Here is my quick response after going through
the text.
On Wed, 2025-02-05 at 07:45 -0700, Sam Hartman wrote:
>
> TL;DR: I think it is important for Debian to consider AI models free
> ev
TL;DR: I think it is important for Debian to consider AI models free
even if those models are based on models that do not release their
training data. In the terms of the DFSG, I think that a model itself is
often a preferred form of modification for creating derived works. Put
another way, I don'