On 2013-07-21, at 13:42:17,
Pacho Ramos <pa...@gentoo.org> wrote:

> On Sat, 2012-03-31 at 17:33 -0700, Zac Medico wrote:
> > On 03/31/2012 04:25 PM, Walter Dnes wrote:
> > > On Sat, Mar 31, 2012 at 10:42:50AM -0700, Zac Medico wrote
> > >> On 03/31/2012 06:34 AM, Pacho Ramos wrote:
> > >>> About the wiki page, I can only document reiserfs+tail usage as it's the
> > >>> one I use and I know, about other alternatives like using squashfs, loop
> > >>> mount... I cannot promise anything as I simply don't know how to set
> > >>> them.
> > >>
> > >> Squashfs is really simple to use:
> > >>
> > >>    mksquashfs /usr/portage portage.squashfs
> > >>    mount -o loop portage.squashfs /usr/portage
> > > 
> > >   Don't the "space-saving filesystems" (squashfs, reiserfs-with-tail,
> > > etc.) run more slowly due to the extra finicky steps they take to save
> > > space?  If you really want to save a gigabyte or two, run "eclean -d
> > > distfiles" and "localepurge" after every emerge update.  I've also
> > > cobbled together my own "autodepclean" script that checks for, and
> > > optionally unmerges, unneeded stuff that was pulled in as a dependency
> > > of a package that has since been removed.
> > 
> > Well, in this case squashfs is more about improving access time than
> > saving space. You end up with the whole tree stored in a mostly
> > contiguous chunk of disk space, which minimizes seek time.
> 
> Would it be possible to generate and provide squashed files at the same
> time that the tarballs with Portage tree snapshots are generated?
> mksquashfs can take a lot of resources depending on the machine, but
> providing the squashed images would still benefit people by letting
> them download and mount them.

I'm experimenting with squashfs lately and here are a few notes:

1. I didn't find a good way of generating incremental images with
squashfs itself. I didn't try tools like diffball (the ones that were
used in emerge-delta-webrsync), but I recall they were very slow (you'd
have to be on a 56K modem for them to beat rsync), and I doubt they'd
fit squashfs's specifics.

2. squashfs is best used with a union filesystem like aufs3. However,
that basically requires patching the kernel, since the FUSE-based union
filesystems simply don't work:

a) unionfs-fuse doesn't support replacing files from the read-only
branch,

b) funionfs somehow gets broken by rsync.

I haven't tested the old in-kernel unionfs, but aufs3 works great for
me.
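For the record, the mount sequence I'm using looks roughly like this
(the paths are my ad-hoc choices, not any agreed-upon layout; this
needs root and an aufs3-patched kernel):

```shell
# Loop-mount the read-only squashed tree, then stack a writable
# branch on top of it via aufs. All paths here are illustrative.
mkdir -p /var/cache/portage-ro /var/cache/portage-rw /usr/portage
mount -o loop,ro /var/cache/portage.squashfs /var/cache/portage-ro
mount -t aufs -o br=/var/cache/portage-rw=rw:/var/cache/portage-ro=ro \
    none /usr/portage
```

Writes then land in the portage-rw branch while reads fall through to
the squashed image.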

3. squashfs+aufs3 really benefits from rsync's '--omit-dir-times'
option. Otherwise, rsync recreates the whole directory structure on
each sync. The option also results in much less output. We should
think about making it the default.
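Until it is the default, a one-line make.conf snippet gets you there
(PORTAGE_RSYNC_EXTRA_OPTS is the stock Portage variable for passing
extra options to rsync):

```shell
# /etc/portage/make.conf
PORTAGE_RSYNC_EXTRA_OPTS="--omit-dir-times"
```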

4. 'emerge --sync' is ultra-fast with this combo. Even a very big sync
finishes in less than a minute.

5. I have doubts about 'emerge -1vDtu @world' speed. It's a very
subjective feeling, but reiserfs seemed actually faster in this regard.
However, the space savings would surely benefit our users.

6. If we're to do squashfs+aufs3, we need a clean directory structure
for all of it, including the squashfs files, intermediate mounts and
r/w branches.
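Something like the following would do; the names are just a strawman
on my part, not a proposal for the final layout:

```shell
# Strawman layout under a single base directory (all names are
# assumptions). BASE can be overridden for testing.
base="${BASE:-$(mktemp -d)}/portage-squash"
mkdir -p "$base/images"  # generated .squashfs snapshots/deltas
mkdir -p "$base/ro"      # read-only squashfs mountpoint(s)
mkdir -p "$base/rw"      # writable aufs branch(es)
```

Keeping everything under one base directory would at least make
cleanup and tooling simpler.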

7. We could probably get incremental squashfs+aufs3 by squashing old
r/w branches and adding new ones on top of them. But considering the
'emerge --sync' speed gain, I don't know whether this is really worth
the effort, or whether the growing number of branches would make it
slow.
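The idea could look roughly like this (an untested sketch; it assumes
the union is unmounted first and reuses the illustrative paths from
above):

```shell
# Freeze the current writable branch into a delta image, then start a
# fresh empty branch. The delta image would be remounted as another
# read-only aufs branch between the rw branch and the base image.
mksquashfs /var/cache/portage-rw /var/cache/portage-delta.squashfs
rm -rf /var/cache/portage-rw
mkdir /var/cache/portage-rw
```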

-- 
Best regards,
Michał Górny
