On Mon, May 05, 2025 at 01:12:13PM -0600, Sam Hartman wrote:
> 
> I'm not sure if this is too late. The mail to debian-devel-announce was
> kind of late, and I hope there is still some discussion time left.
> 
> It is late enough that I am immediately seeking seconds for the
> following proposal.
> I am also open to wordsmithing if we have time.
> 
> If we decide to take more time to think about this issue and build
> project consensus, I would be delighted if we did not vote now.
> 
> Rationale:
> 
> TL;DR: If in practice we are able to modify the software we have, and
> the license is DFSG free, then I think we meet DFSG 2 and the software
> should be DFSG free.
> 
> This proposal extends the comments I made in
> https://lists.debian.org/tsled098ieb....@suchdamage.org
> 
> 
> It's been my experience that, given the costs of AI training, the
> model itself is often the preferred form of modification. I find this
> particularly true in the case of LLMs, based on my experience over the
> last year.  I particularly disagree with Russ that doing a
> full-parameter fine-tuning of a model is anything like calling into a
> library; to me it seems a lot more like modifying a Smalltalk world or
> changing a LambdaMOO world and dumping a new core. Even LoRA-style
> retraining looks a lot like the sort of patch files permitted by DFSG 4.
> I disagree with those who claim that if we had the original training
> data we would choose to start there when we want to modify a model.

Without the original training data, we have no way to know what is
"inside" the model. The model could contain backdoors, reproduce
non-free copyrighted material, or generate even more harmful content.

Cheers
-- 
Bill. <ballo...@debian.org>

