On 25 February 2016 at 18:03, Duncan <1i5t5.dun...@cox.net> wrote:
> Which I am (running from the git repo), and that ability to (as a user,
> easily) actually track all that extra data was one of my own biggest
> reasons for so looking forward to the git switch for so long, and is now
> one of the biggest reasons I'm a /huge/ supporter of the new git repo,
> in spite of the time it took and the imperfections it still has.


I'm considering bolting together some Perl that would let you run a
small HTTP service rooted in a git repo dir, which would generate
requested changes files on demand and then cache the results somehow.


Then you could have a "Live changes as a service" where interested
parties could simply do:

 curl http://thing.gentoo.org/changes/dev-lang/perl

and get a changelog spewed out instead of burdening the rsync server
with generating them for every sync.
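A minimal sketch of what that service could look like (in Python rather than Perl, purely for illustration; the repo path, URL scheme, and log format are all assumptions, not a worked-out design). It maps /changes/<category>/<package> onto a `git log` limited to that package's directory, and caches per (HEAD, package) so a new push naturally invalidates stale entries:

```python
import functools
import http.server
import subprocess

REPO_DIR = "/srv/gentoo.git"  # assumed location of the git checkout


def changelog_args(package):
    # Build the git invocation that produces a ChangeLog-style listing
    # for one package directory, e.g. "dev-lang/perl".
    return ["git", "-C", REPO_DIR, "log", "--date=short",
            "--pretty=format:%ad %an <%ae>%n  %s%n", "--", package]


@functools.lru_cache(maxsize=1024)
def changelog(head, package):
    # Keyed on (HEAD, package): once HEAD moves, old cache entries are
    # simply never hit again, so no explicit invalidation is needed.
    return subprocess.run(changelog_args(package),
                          capture_output=True, text=True).stdout


class ChangesHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /changes/dev-lang/perl
        parts = self.path.strip("/").split("/")
        if len(parts) != 3 or parts[0] != "changes":
            self.send_error(404)
            return
        head = subprocess.run(
            ["git", "-C", REPO_DIR, "rev-parse", "HEAD"],
            capture_output=True, text=True).stdout.strip()
        body = changelog(head, "/".join(parts[1:])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=8080):
    http.server.HTTPServer(("", port), ChangesHandler).serve_forever()
```

The generation cost then scales with the packages people actually ask about, instead of a full-tree pass per sync window.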

That way the aggregate CPU load would be grossly reduced, because the
sync server wouldn't have to spend time generating changes for every
update window, and it wouldn't have to be full-tree aware.

But thinking about it makes me go "eeeh, that's a lot of effort really".

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL
